# Particle Physics Planet

## August 21, 2014

### Jacques Distler - Musings

Golem V

For nearly 20 years, Golem has been the machine on my desk. It’s been my mail server, web server, file server, … ; it’s run Mathematica and TeX and compiled software for me. Of course, it hasn’t been the same physical machine all these years. Like Doctor Who, it’s gone through several reincarnations.

Alas, word came down from the Provost that all “servers” must move (physically or virtually) to the University Data Center. And, bewilderingly, the machine on my desk counted as a “server.”

Obviously, a 27” iMac wasn’t going to make such a move. And, equally obviously, it would have been rather difficult to replace/migrate all of the stuff I have running on the current Golem. So we had to go out shopping for Golem V. The iMac stayed on my desk; the machine that moved to the Data Center is a new Mac Mini.

Golem V, all labeled and ready to go
• 2.3 GHz quad-core Intel Core i7 (8 logical cores, via hyperthreading)
• 16 GB RAM
• 480 GB SSD (main drive)
• 1 TB HD (Time Machine backup)
• 1 TB external HD (CCC clone of the main drive)
• Dual 1 Gigabit Ethernet Adapters, bonded via LACP

In addition to the dual network interface, it (along with, I gather, a rack full of other Mac Minis) is plugged into an ATS, to take advantage of the dual redundant power supply at the Data Center.

Not as convenient, for me, as having it on my desk, but I’m sure the new Golem will enjoy the austere hum of the Data Center much better than the messy cacophony of my office.

I did get a tour of the Data Center out of the deal. Two things stood out for me.

1. Most UPSs involve large banks of lead-acid batteries. The UPSs at the University Data Center use flywheels. They comprise a long row of refrigerator-sized cabinets which give off a persistent hum due to the humongous flywheels rotating in vacuum within.
2. The server cabinets are painted the standard generic white. But, for the networking cabinets, the University went to some expense to get them custom-painted … burnt orange.

## August 20, 2014

### The n-Category Cafe

Holy Crap, Do You Know What A Compact Ring Is?

You know how sometimes someone tells you a theorem, and it’s obviously false, and you reach for one of the many easy counterexamples only to realize that it’s not a counterexample after all, then you reach for another one and another one and find that they fail too, and you begin to concede the possibility that the theorem might not actually be false after all, and you feel your world start to shift on its axis, and you think to yourself: “Why did no one tell me this before?”

That’s what happened to me today, when my PhD student Barry Devlin — who’s currently writing what promises to be a rather nice thesis on codensity monads and topological algebras — showed me this theorem:

Every compact Hausdorff ring is totally disconnected.

I don’t know who it’s due to; Barry found it in the book Profinite Groups by Ribes and Zalesskii. And in fact, there’s also a result for rings analogous to a well-known one for groups: a ring is compact, Hausdorff and totally disconnected if and only if it can be expressed as a limit of finite discrete rings. Every compact Hausdorff ring is therefore “profinite”, that is, expressible as a limit of finite rings.
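In symbols, the profiniteness statement can be sketched as follows, with $(R_i)_{i \in I}$ an inverse system of finite discrete rings (my notation, not Barry’s):

$$R \ \text{compact, Hausdorff and totally disconnected} \iff R \cong \varprojlim_{i \in I} R_i.$$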

So the situation for compact rings is completely unlike the situation for compact groups. There are loads of compact groups (the circle, the torus, $SO(n)$, $U(n)$, $E_8$, …) and there’s a very substantial theory of them, from Haar measure through Lie theory and onwards. But compact rings are relatively few: it’s just the profinite ones.

I only laid eyes on the proof for five seconds, which was just long enough to see that it used Pontryagin duality. But how should I think about this theorem? How can I alter my worldview in such a way that it seems natural or even obvious?

### Christian P. Robert - xi'an's og

To Susie [by Kerrie Mengersen]

[Here is a poem written by my friend Kerrie for the last ISBA cabaret in Cancun, to Susie who could not make it to a Valencia meeting for the first time... Along with a picture of Susie, Alicia and Jennifer taking part in another ISBA cabaret in Knossos, Crete, in 2000.]

This is a parody of a classic Australian bush poem, ‘The Man from Snowy River’, that talks of an amazing horseman in the rugged mountain bush of Australia, who out-performed the ‘cracks’ and became a legend. That’s how I think of Susie, so this very bad poem comes with a big thanks for being such an inspiration, a great colleague and a good friend.

There was movement in the stats world as the emails caught alight
For the cult from Reverend Bayes had got away
And had joined the ‘ISBA’ forces, and were calling for a fight
So all the cracks had gathered to the fray.

All the noted statisticians from the countries near and far
For the Bayesians love their meetings where the sandy beaches are
And the Fishers snuffed the battle with delight.

There were Jim and Ed and Robert, who were ‘fathers of the Bayes’
They were known as the whiskey drinking crowd
But they’d invented all the theory in those Valencia days
Yes, they were smart, but oh boy were they loud!

And Jose M Bernardo came down to lend a hand
A finer Bayesian never wrote a prior
And Mike West, Duke of Bayesians, also joined the band
And brought down all the graduates he could hire

Sonia and Maria strapped their laptops to the cause
And Anto, Chris and Peter ran – in thongs!
Sirs Adrian and David came with armour and a horse
While Brad and Gareth murdered battle songs

And one was there, a Spaniard, blonde and fierce and proud
With a passion for statistics and for fun
She’d been there with the founders of the nouveau Bayesian crowd
And kept those Fisher stats folk on the run

But Jim’s subjective prior made him doubt her power to fight
Mike Goldstein said, ‘That girl will never do,
In the heat of battle, deary, you just don’t have the might
This stoush will be too rough for such as you.’

But Berger and Bernardo came to Susie’s side
We think we ought to let her in, they said
For we warrant she’ll be with us when the blood has fairly dried
For Susie is Valencia born and bred.

She did her Bayesian training in the classic Spanish way
Where the stats is twice as hard and twice as rough
And she knows nonparametrics, which is useful in a fray
She’s soft outside, but inside, man she’s tough!

She went. They found those Fisher stats folk sunning on the beach
And as they grabbed their laptops from the sand
Jim Berger muttered fiercely, ‘right, twist any head you reach
We cannot let those Fish get out of hand.’

Alicia, grab a Dirichlet and break them with a stick
Chris, it’s easy, just like ABC
And Sylvia, a mixture model ought to do the trick
But just you leave that Ronnie up to me.

Jose battled them with inference and curdled Neyman’s blood
And posteriors lined the beaches like sandbags for a flood
And Jim threw whiskey bottles as they fled.

And when the Bayesians and the Fishers were washed up on the sand
The fight was almost judged to be a tie
But it was Susie who kept going, who led the final charge
For she didn’t want objective Bayes to die

She set the beach on fire as she galloped through the fray
Hurling P and F tests through the foam
‘til the Fishers raised surrender and called the fight a day
And shut their laptops down and sailed for home.

And now at ISBA meetings where the Bayesians spend their days
To laugh and learn and share a drink or two
A glass is always toasted: to Susie, Queen of Bayes
And the cheering echoes loudly round the crew.

She will be remembered for setting Bayesian stats on fire
For her contributions to the field are long
And her passion and her laughter will continue to inspire
The Bayesian from Valencia lives on!

Filed under: pictures, Statistics, University life Tagged: Cancún, Crete, ISBA, ISBA 2014, Kerrie Mengersen, Knossos, Susie Bayarri, València

### Emily Lakdawalla - The Planetary Society Blog

Canadian Mars Analogue Mission: Field Report, Week 1
Tanya Harrison reports on Canada's efforts to simulate a Mars sample return mission here on Earth.

### ZapperZ - Physics and Physicists

How Long Can You Balance A Pencil
Minute Physics took up a topic that I had discussed previously. It is about the time scale for how long a pencil can be balanced on its tip.

Note that in a previous post, I had pointed out several papers that debunked the fallacy of using quantum mechanics and the HUP to arrive at such a time scale. So it seems that this particular topic, like many others, keeps coming back every so often.
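For the classical side of the story (not the quantum HUP argument that the debunking papers address), here is a minimal sketch of the inverted-pendulum estimate, assuming a uniform 15 cm rod pivoting frictionlessly about its tip; the function names are mine:

```python
import math

def toppling_timescale(length=0.15, g=9.81):
    """Characteristic growth time for a uniform rod balanced on its tip.

    Linearizing the equation of motion about the upright position gives
    theta'' = (3 g / (2 length)) * theta, so a small tilt grows like
    exp(t / tau) with tau = sqrt(2 * length / (3 * g)).
    """
    return math.sqrt(2 * length / (3 * g))

def time_to_fall(theta0, theta_fall=0.5, length=0.15, g=9.81):
    """Seconds for an initial tilt theta0 (radians) to grow to theta_fall,
    in the exponential small-angle approximation."""
    return toppling_timescale(length, g) * math.log(theta_fall / theta0)

# Even an absurdly tiny initial tilt topples the pencil within seconds:
# time_to_fall(1e-10) is roughly 2.25 s for a 15 cm pencil.
```

The point is the logarithm: shrinking the initial tilt by ten orders of magnitude only buys a couple of seconds, which is why nobody balances a pencil for long.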

Zz.

### arXiv blog

The Next Battleground In The War Against Quantum Hacking

Ever since the first hack of a commercial quantum cryptography device, security specialists have been fighting back. Here’s an update on the battle.

Quantum hacking is the latest fear in the world of information security. Not so long ago, physicists were claiming that they could send information with perfect security using a technique known as quantum key distribution.

### Peter Coles - In the Dark

Riverbed

Yesterday afternoon I skived off the last session of the workshop I’m attending and took the train to the small town of Humlebæk, which is about 35 km north of Copenhagen and is the site of the Louisiana Museum of Modern Art. The purpose of my visit was to attend an invitation-only preview of a new installation by Olafur Eliasson called Riverbed. The invitation to this came relatively recently and it was only the coincidence of my being here at this workshop that made it possible for me to attend.

As it turned out, I arrived quite early and the weather was fine, so I took the chance to wander around the sculpture park before the main event. There are many fine works there. This, for example, is by Henry Moore:

This one is by Henri Laurens

And so to Riverbed. This is a large work featuring boulders and gravel, brought all the way from Iceland, which have been used to recreate a section of the landscape of Olafur’s native land. The distinctive colouring and granularity of the raw material produces terrain of a texture that must look very alien to anyone who has never been to Iceland. The installation is contained within a space enclosed and divided by stark white-painted walls, with rectangular gaps where necessary to let the water through from room to room. These boundaries, with their geometrically precise edges, affect the experience of the naturalistic landscape in a very interesting way. The Riverbed itself may look “natural” but the structures surrounding it constantly remind you that it isn’t. Viewers are permitted to wander through the piece wherever they like and interact however they please, sitting down on a boulder, paddling in the stream or even just watching the other people (which is mainly what I did). I don’t know what’s more interesting, the work itself or the way people behave when inside it!

Here are some pictures I took, just to give you a flavour:

Anyway, after that we adjourned for a drinks reception and a splendid dinner in the Boat House, which is part of the Louisiana complex. Being neither an artist nor an art critic I felt a bit of an outsider, but I did get the chance to chat to quite a few interesting people including, by sheer coincidence, a recent graduate of the University of Sussex. The Boat House looks out towards the island of Hven, home of the observatory of Tycho Brahe, so naturally I took the opportunity to drink a toast to his memory:

After that I had to return to Copenhagen to write my talk, as I was on first this morning at 9.30. This afternoon we have a bit of a break before the conference excursion and dinner this evening. The excursion happens to be to the Louisiana Museum of Modern Art (although we’re all going by bus this time) and the dinner is in the Boat House….

### Lubos Motl - string vacua and pheno

Natalie Wolchover wrote a good article for the Simons Foundation,
At Multiverse Impasse, a New Theory of Scale
about Agravity, a provocative paper by Alberto Salvio and Alessandro Strumia. Incidentally, has anyone noticed that Strumia is Joe Polchinski's twin brother? The similarity goes beyond the favorite color of the shirt and pants.

At any rate, the system of ideas known as "naturalness" seems to marginally conflict with the experiments and things may be getting worse. Roughly speaking, naturalness wants dimensionful parameters (masses) to be comparable unless there is an increased symmetry when they're not comparable. But the Higgs boson is clearly much lighter than the Planck scale and in 2015, the LHC may show (but doesn't have to show!) that there are no light superpartners that help to make the lightness natural.

The "agravity" approach, if true, eliminates these naturalness problems because according to its scheme of things, there is no fundamental scale in Nature. One tries to get all the terms in the Lagrangian with some dimensionful couplings from terms that have no dimensionful couplings. "Agravity" is a different solution to these problems than both "naturalness" and "multiverse" – a third way, if you wish.

Similar things have been tried before, e.g. by William Bardeen in 1995, but Strumia et al. are the first ones who are trying to add gravity. The claim is that one may get the Einstein-Hilbert action by a dynamical process in a theory whose terms only include four-derivative terms such as $$R^2$$.
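Schematically, the class of actions being described contains only four-derivative curvature terms with dimensionless couplings; with $c_1, c_2$ as generic placeholder couplings (my notation, not the paper's), it looks like

$$S = \int d^4x \, \sqrt{|g|} \left( c_1 R^2 + c_2 R_{\mu\nu} R^{\mu\nu} \right),$$

with the dimensionful Einstein-Hilbert term $M_{Pl}^2 R$ supposed to emerge dynamically rather than being put in by hand.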

Aside from a novel solution of the problems with the hierarchies, it is claimed that the scenario may predict inflation with the spectral index and the tensor-to-scalar ratio immensely compatible with the BICEP2 results.

The main obvious problem is the ghosts. Terms like $$R^2$$ may be rewritten as propagating degrees of freedom whose squared norms (the signs of their kinetic terms) are indefinite – some of them lead to proper positive probabilities while others produce pathological negative probabilities.

I remember a 2001 Santa Barbara talk by Stephen Hawking about "how he befriended ghosts", with some pretty amusing multimedia involving ghosts hugging his wheelchair, so you should be sure that Strumia et al. aren't the first folks who want to befriend ghosts.

At this moment, ghosts look like a lethal flaw. But I can imagine that by some clever technical or conceptual tricks, this flaw could perhaps be cured. The physical probabilities could become positive if one chose some better degrees of freedom, or there could be a new argument why these negative probabilities are ultimately harmless for some reason I can't quite imagine at this moment.

However, my concerns about the theory go beyond the problem with the ghosts. I do think that the Planck scale has been made extremely important by the modern "holographic" research of quantum gravity. The Planck area defines the minimum area where nontrivial information may be squeezed. It seems to be the scale that determines the nonlocalities and breakdown of the normal geometric concepts. The Planck scale is the minimum distance where a dynamical, gravitating space may start to emerge.

So if someone envisions some smooth ordinary spacetime at ultratiny, sub-Planckian distances, he is facing exactly the same difficulties – I would say that many of them are lethal ones – as the difficulties mentioned in the context of Weinberg's asymptotic safety which also envisions a scale-invariant theory underlying gravity at ultrashort distances.

There could be some amazing advance that cures these serious diseases but such a cure remains wishful thinking at this point. We shouldn't pretend that the diseases have already been cured – even though you may use this proposition as a "working hypothesis" and a "big motivator" whenever you try to do some research related to agravity. That's why I find the existing proposals of scale-invariant underpinnings of quantum gravity, including the agravity meme, to be very unlikely. Hierarchy-like problems including the cosmological constant problem may look rather serious but they're still less serious than predicting negative probabilities of physical processes.

## August 19, 2014

### Symmetrybreaking - Fermilab/SLAC

A whole-Earth approach

Ecologist John Harte applies principles from his former life as a physicist to his work trying to save the planet.

Each summer for the past 25 years, ecologist John Harte has spent his mornings in a meadow on the western slope of the Rocky Mountains. He takes soil samples from a series of experimentally heated plots at the Rocky Mountain Biological Laboratory, using the resulting data to predict how responses of ecosystems to climate change will generate further heating of the climate.

Harte, a former theoretical physicist, studies ecological theory and the relationship between climates and ecosystems. He holds a joint professorship at UC Berkeley’s Energy Resources Group and the university’s Ecosystem Sciences Division. He says he is motivated by a desire to help save the planet and to solve complex ecological problems.

“John is a gifted naturalist and a great birdwatcher,” says Robert Socolow, a colleague and former physicist who transitioned to the environmental field at the same time. “John went into physics to combine his deep love of nature and his talent for mathematical analysis.”

Harte, who loved bird watching and nature as a child, also enjoyed physics and math, which his schoolteachers urged him to pursue. He received his undergraduate degree in physics from Harvard in 1961, and a PhD in theoretical physics from the University of Wisconsin in 1965. He went on to serve as an NSF Postdoctoral Fellow at CERN from 1965-66 and a postdoctoral fellow at Lawrence Berkeley National Laboratory from 1966-68.

It was in the storied summer of 1969 while Harte was teaching physics at Yale that he decided to return to nature studies. He and Socolow spent a month that summer conducting a hydrology study of the Florida Everglades, and their work showed that a proposed new airport would endanger the water supply for hundreds of thousands of people. That work, which Harte and Socolow detailed in one chapter of the book Patient Earth, led to the creation of an immense water conservation area in southwestern Florida.

“With not much more than back-of-the-envelope calculations, we were able to stop the jetport,” Harte says. “I thought, man, that’s cool. I want to do this.”

Harte was already worried about climate change and decided to transition to studying interdisciplinary environmental science. He sought out the wisdom of famous ecologists, such as G. Evelyn Hutchinson, to help him learn the field.

“I was lucky because I made this transition in the late ’60s and ’70s,” Harte says. “It was a novelty back then, and there weren’t a lot of people doing the things I wanted to do.”

He retained his love for physics and used physics concepts in his work.

“Unification is such an important goal in physics,” Harte says. “I came away with the thirst for finding unification in ecology. I also came away empowered that I could master practically any mathematical formula.”

Viewing many different phenomena through the same lens has been critical to Harte’s work. His big-picture view isn’t always widely accepted by other ecologists, but it has helped him understand the natural world and make significant contributions to its study.

“John is gifted in non-linear modeling. He’s a physicist doing ecology to this day,” Socolow says.

During his career, Harte has served on six National Academy of Sciences Committees, has published hundreds of papers and has written eight books on topics including biodiversity, climate change and water resources. He has also received numerous awards, including a George Polk award for his work advising a group of graduate journalism students reporting on climate change.

He typically divides his days between fieldwork and theory, teaching courses in theoretical biology and environmental problem solving. He has mentored about 35 graduate students over the years, about 10 of whom have come from physics.

“They saw that I had made this transition, and they thought I’d be a good mentor. Students who want to make that transition come to work with me,” Harte says. “Because I speak the language of physics.”


### Christian P. Robert - xi'an's og

hasta luego, Susie!

I just heard that our dear, dear friend Susie Bayarri passed away early this morning, on August 19, in València, Spain… I had known Susie for many, many years, our first meeting being in Purdue in 1987, and we shared many, many great times during simultaneous visits to Purdue University and Cornell University in the 1990’s. During a workshop in Cornell organised by George Casella (to become the unforgettable Camp Casella!), we shared a flat together and our common breakfasts led her to make fun of my abnormal consumption of cereals forever after, a recurrent joke each time we met! Another time, we were coming from the movie theatre in Lafayette in Susie’s car when we got stopped for going through a red light. Although she tried very hard, her humour and Spanish verve were for once insufficient to convince her interlocutor.

Susie was a great Bayesian, contributing to the foundations of Bayesian testing in her numerous papers and through the direction of deep PhD theses in Valencia, as well as to queuing systems and computer models. She was also incredibly active in ISBA, from the very start of the Bayesian society, and was one of the first ISBA presidents. She also definitely contributed to the Objective Bayes section of ISBA, especially in the construction of the O’Bayes meetings. She gave a great tutorial on Bayes factors at the last O’Bayes conference at Duke last December, full of jokes and passion, despite being already weak from her cancer…

So, hasta luego, Susie!, from all your friends. I know we shared the same attitude about our Catholic education and our first names heavily laden with religious meaning, but I’d still like to believe that your rich and contagious laugh now resonates throughout the cosmos. So, hasta luego, Susie, and un abrazo to all of us missing her.

Filed under: Statistics, University life Tagged: Bayesian foundations, Cornell University, George Casella, ISBA, O'Bayes, Purdue University, Spain, Susie Bayarri, València

### Peter Coles - In the Dark

On Problems

Since I’m in Denmark I thought I’d put up one of the wonderfully witty little poems written by Danish mathematician Piet Hein. He called each of these verses a “grook” (or actually, in Danish, the word is gruk) and he wrote thousands of them over his long life. I’ve posted one of these before; this one is even shorter, but it makes a deep point: the danger of becoming trapped by your own assumptions. I won’t comment on the relevance of this to the cosmology workshop I’m attending…

Our choicest plans
have fallen through
our airiest castles
tumbled over
because of lines
we neatly drew
and later neatly
stumbled over.

by Piet Hein (1905-1996).

### Emily Lakdawalla - The Planetary Society Blog

Curiosity wheel damage: The problem and solutions
Now that a Tiger Team has assessed the nature and causes of damage to Curiosity's wheels, I can finally answer your frequently-asked questions about what wheel damage means for the mission, and why it wasn't anticipated.

## August 18, 2014

### Christian P. Robert - xi'an's og

on intelligent design…

In connection with Dawkins’ The God delusion, the review of which is soon to appear on the ‘Og, a poster at an exhibit on evolution in the Harvard Museum of Natural History, which illustrates one of Dawkins’ points on scientific agnosticism. Namely, that refusing to take a stand on the logical and philosophical opposition between science and religion(s) is not a scientific position. The last sentence in the poster is thus worse than unnecessary…

Filed under: Books, Kids, Travel Tagged: "intelligent" design, agnosticism, atheism, Cambridge, creationism, evolution, Harvard, Harvard Museum of Natural History

### Quantum Diaries

Dark Energy Survey kicks off second season cataloging the wonders of deep space

This Fermilab press release came out on Aug. 18, 2014.

This image of the NGC 1398 galaxy was taken with the Dark Energy Camera. This galaxy lives in the Fornax cluster, roughly 65 million light-years from Earth. It is 135,000 light-years in diameter, just slightly larger than our own Milky Way galaxy, and contains more than 100 billion stars. Credit: Dark Energy Survey

On Aug. 15, with its successful first season behind it, the Dark Energy Survey (DES) collaboration began its second year of mapping the southern sky in unprecedented detail. Using the Dark Energy Camera, a 570-megapixel imaging device built by the collaboration and mounted on the Victor M. Blanco Telescope in Chile, the survey’s five-year mission is to unravel the fundamental mystery of dark energy and its impact on our universe.

Along the way, the survey will take some of the most breathtaking pictures of the cosmos ever captured. The survey team has announced two ways the public can see the images from the first year.

Today, the Dark Energy Survey relaunched Dark Energy Detectives, its successful photo blog. Once every two weeks during the survey’s second season, a new image or video will be posted to www.darkenergydetectives.org, with an explanation provided by a scientist. During its first year, Dark Energy Detectives drew thousands of readers and followers, including more than 46,000 followers on its Tumblr site.

Starting on Sept. 1, the one-year anniversary of the start of the survey, the data collected by DES in its first season will become freely available to researchers worldwide. The data will be hosted by the National Optical Astronomy Observatory. The Blanco Telescope is hosted at the National Science Foundation’s Cerro Tololo Inter-American Observatory, the southern branch of NOAO.

In addition, the hundreds of thousands of individual images of the sky taken during the first season are being analyzed by thousands of computers at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, Fermi National Accelerator Laboratory (Fermilab), and Lawrence Berkeley National Laboratory. The processed data will also be released in coming months.

Scientists on the survey will use these images to unravel the secrets of dark energy, the mysterious substance that makes up 70 percent of the mass and energy of the universe. Scientists have theorized that dark energy works in opposition to gravity and is responsible for the accelerating expansion of the universe.

“The first season was a resounding success, and we’ve already captured reams of data that will improve our understanding of the cosmos,” said DES Director Josh Frieman of the U.S. Department of Energy’s Fermi National Accelerator Laboratory and the University of Chicago. “We’re very excited to get the second season under way and continue to probe the mystery of dark energy.”

While results on the survey’s probe of dark energy are still more than a year away, a number of scientific results have already been published based on data collected with the Dark Energy Camera.

The first scientific paper based on Dark Energy Survey data was published in May by a team led by Ohio State University’s Peter Melchior. Using data that the survey team acquired while putting the Dark Energy Camera through its paces, they used a technique called gravitational lensing to determine the masses of clusters of galaxies.

In June, Dark Energy Survey researchers from the University of Portsmouth and their colleagues discovered a rare superluminous supernova in a galaxy 7.8 billion light years away. A group of students from the University of Michigan discovered five new objects in the Kuiper Belt, a region in the outer reaches of our solar system, including one that takes over a thousand years to orbit the Sun.

In February, Dark Energy Survey scientists used the camera to track a potentially hazardous asteroid that approached Earth. The data was used to show that the newly discovered Apollo-class asteroid 2014 BE63 would pose no risk.

Several more results are expected in the coming months, said Gary Bernstein of the University of Pennsylvania, project scientist for the Dark Energy Survey.

The Dark Energy Camera was built and tested at Fermilab. The camera can see light from more than 100,000 galaxies up to 8 billion light-years away in each crystal-clear digital snapshot.

“The Dark Energy Camera has proven to be a tremendous tool, not only for the Dark Energy Survey, but also for other important observations conducted year-round,” said Tom Diehl of Fermilab, operations scientist for the Dark Energy Survey. “The data collected during the survey’s first year — and its next four — will greatly improve our understanding of the way our universe works.”

The Dark Energy Survey Collaboration comprises more than 300 researchers from 25 institutions in six countries. For more information, visit http://www.darkenergysurvey.org.

Fermilab is America’s premier national laboratory for particle physics and accelerator research. A U.S. Department of Energy Office of Science laboratory, Fermilab is located near Chicago, Illinois, and operated under contract by the Fermi Research Alliance, LLC. Visit Fermilab’s website at www.fnal.gov and follow us on Twitter at @FermilabToday.

The DOE Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

The National Optical Astronomy Observatory (NOAO) is operated by the Association of Universities for Research in Astronomy (AURA), Inc., under cooperative agreement with the National Science Foundation.

### Peter Coles - In the Dark

Copenhagen, Cosmology and Coleman Hawkins

Now that I’ve finally checked into my hotel in the wonderful city of Copenhagen I thought I’d briefly check in on the old blog as well. I’m here once again for a meeting, this time as an invited speaker at the 2nd NBIA-APCTP Workshop on Cosmology and Astroparticle Physics; NBIA being the Niels Bohr International Academy (based in Denmark) and APTCTP being the Asia Pacific Centre for Theoretical Physics (based in Korea). This is the kind of meeting I actually like, with relatively few participants and lots of time for discussion; as a welcome gesture for the first day there was also free beer!

I decided for some reason to try an experimental route getting here. There wasn’t a flight at a convenient date and time from Gatwick, the nearest airport to my Brighton residence, so I decided to get an early morning flight from Heathrow instead. The departure time of 06:40, however, left me with the difficulty of getting there in time by public transport as the relevant trains don’t run overnight. I toyed with the idea of booking an airport hotel for the night, but decided that would be extravagant so instead opted to get a coach from Brighton; this was cheap and comfortable – only a handful of other passengers got on the bus – and got me there right on schedule. The downside was that I had to catch the 01:40 from Brighton Coach Station, which arrived at about 4am at Heathrow Terminal 3. It was quite interesting finding the normally busy terminal almost deserted but although I did a self-service check-in straight away the bag drops didn’t open until almost 5am. None of the cafes in the check-in area were open, so I had to hang around for an hour before finally getting rid of my luggage and passing through to the airside whereupon I nabbed some coffee and a bite to eat.

The flight was almost uneventful. Unfortunately, however, as we came in to land at Copenhagen’s Kastrup airport, a young person sitting behind me vomited uncontrollably and at considerable length, producing a steady flow both of chunder and unpleasant noises. The aftermath was quite unpleasant, so I was quick out of the blocks when the plane finally came to a stop at the gate. An aisle seat turned out to have been a wise choice.

Assuming it would be too early to check into the hotel that had been booked for me, I decided to go straight to the meeting, but I got to the Niels Bohr Institute’s famous Auditorium A near the end of the first talk, about the Imprint of Radio Loops on the CMB (a subject I’ve blogged about). That’s a shame because (a) it’s interesting and (b) some of my own work was apparently discussed. That happens so rarely these days I’m sorry I missed it.

I was a bit tetchy as a result of my sleepless night, though I limited the expression of this to a  couple of rants about frequentist statistics during the discussions.

After the free beer I finally made my way to the hotel and checked in. It’s not bad, actually. There can’t be that many hotel rooms that have a picture of the great tenor saxophonist Coleman Hawkins on the wall:

Anyway, I was due to give the conference summary on Friday but I’ve been moved forward to Wednesday so I’d better think of something to say. Maybe in the morning though, I could do with an early night…

### astrobites - astro-ph reader's digest

Herschel’s View of a Neighboring Planetary System

Tau Ceti has long captured our imagination, and is featured in many science fiction books, movies, and games (e.g. Figure 1). This star has a mass and luminosity similar to the Sun’s and is only 3.65 pc away, making it the second closest G-type star to our own (after Alpha Centauri). To fuel the imagination of sci-fi enthusiasts even further, in 2013 five planets orbiting close to the star were tentatively detected with the radial velocity technique. Additionally, astronomers have known since the 1980s that Tau Ceti hosts a debris disk. The authors of today’s paper take a closer look at this debris disk with the Herschel Space Observatory.

Figure 1: A Tau Ceti Video Game from 1986. Image from Wikipedia.

The dust in the Tau Ceti debris disk emits only in the far-infrared and sub-mm wavelength regimes, meaning the dust is fairly cold and relatively far from the star. This makes it a perfect target for Herschel, which is sensitive to long wavelengths. The Herschel observations resolve the disk at 70 and 160 microns, constraining the outer extent of the disk.

The authors fit model disks to the Herschel images (see Figure 2) and to the spectral energy distribution (SED) of the disk’s emission. They find that the disk is very broad, extending from somewhere between 1 and 10 AU out to around 55 AU.

Figure 2: The Herschel images (left column), best-fitting disk models (center two columns), and residuals (right column) of the Tau Ceti disk. The top row is at 70 microns and the bottom row is at 160 microns.

The authors then use the inferred properties of the debris disk to study the planets in the system. The radial velocity method can only measure the minimum mass of a planet, as only the radial component of the planet’s orbital motion can be detected and the inclination of the orbit is generally not known. In the case of Tau Ceti, however, the inclination of the debris disk can be determined from the Herschel images, and it is a decent bet that the star, planets, and debris disk all rotate in the same plane. The authors find that the system inclination is 30 degrees, and the planets have masses of 4.0, 6.2, 7.2, 8.6, and 13.2 times the mass of the Earth on orbits of 0.11, 0.20, 0.37, 0.55, and 1.35 AU, respectively.
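To make the sin(i) correction concrete, here is a small Python sketch (my own illustration, not code from the paper); the minimum masses (m sin i, in Earth masses) are illustrative values chosen to be consistent with the true masses quoted above, given the 30-degree inclination:

```python
import math

# Radial velocities measure only m*sin(i). With the disk-derived
# inclination i = 30 degrees, dividing out sin(i) gives true masses.
# The minimum masses below (Earth masses) are illustrative, chosen
# to reproduce the true masses quoted in the text.
i_deg = 30.0
m_sin_i = [2.0, 3.1, 3.6, 4.3, 6.6]
true_masses = [round(m / math.sin(math.radians(i_deg)), 1) for m in m_sin_i]
print(true_masses)  # [4.0, 6.2, 7.2, 8.6, 13.2]
```

Since sin 30° = 0.5, the true masses are exactly twice the radial-velocity minimum masses; for a more face-on system the correction factor would be much larger.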

We know that planets will perturb and sculpt a debris disk, as well as scatter each other gravitationally. Is this model of Tau Ceti’s planetary system stable? The authors test this by running dynamical simulations of the planets and disk (see Figure 3). They find that the five planets are very stable (as long as they have relatively low eccentricities) and that debris can exist as close to the star as 1.45 AU, with an additional stable region between the two outermost planets.

Figure 3: Results of the authors’ dynamical stability analysis of the Tau Ceti system. Black lines are the initial orbits of the five planets, and gray lines show how the orbits vary over the course of the simulation. Orange lines show the regions where debris particles are stable.

Future observations of this disk at higher resolution (with ALMA, for instance) will better constrain its inner edge. If the edge is farther out, there may be additional planets in the system with larger orbits, but if the edge is found to be within 1.35 AU, the existence of the original five planets (which were only tentatively detected) would be in question. Science fiction fans will have to wait a little longer to learn the truth about one of their favorite planetary systems.

### Andrew Jaffe - Leaves on the Line

“Public Service Review”?

A few months ago, I received a call from someone at the “Public Service Review”, supposedly a glossy magazine distributed to UK policymakers and influencers of various stripes. The gentleman on the line said that he was looking for someone to write an article for his magazine giving an example of what sort of space-related research was going on at a prominent UK institution, to appear opposite an opinion piece written by Martin Rees, president of the Royal Society.

This seemed harmless enough, although it wasn’t completely clear what I (or the Physics Department, or Imperial College) would get out of it. But I figured I could probably knock something out fairly quickly. However, he told me there was a catch: it would cost me £6000 to publish the article. And he had just ducked out of his editorial meeting in order to find someone to agree to writing the article that very afternoon. Needless to say, in this economic climate, I didn’t have an account with an unused £6000 in it, especially for something of dubious benefit. (On the other hand, astrophysicists regularly publish in journals with substantial page charges.) It occurred to me that this could be a scam, although the website itself seems legitimate (although no one I spoke to knew anything about it).

I had completely forgotten about this until this week, when another colleague in our group at Imperial told me had received the same phone call, from the same organization, with the same details: article to appear opposite Lord Rees’; short deadline; large fee.

So, this is beginning to sound fishy. Has anyone else had any similar dealings with this organization?

Update: It has come to my attention that one of the comments below was made under a false name, in particular the name of someone who actually works for the publication in question, so I have removed the name, and will likely remove the comment unless the original writer comes forward with more truthful information (which I will not publish without permission). I have also been informed of the possibility that some other of the comments below may come from direct competitors of the publication. These, too, may be removed in the absence of further confirming information.

Update II: In the further interest of hearing both sides of the discussion, I would like to point out the two comments from staff at the organization giving further information as well as explicit testimonials in their favor.

### Emily Lakdawalla - The Planetary Society Blog

New Postcards from Mars
The latest snapshots from the "Mars Webcam" include something special.

### Emily Lakdawalla - The Planetary Society Blog

Field Report From Mars: Sol 3753 – August 15, 2014
Opportunity just completed its first drives upslope on its long journey toward the crest of the highest rim segment of Endeavour crater, “Cape Tribulation.” Larry Crumpler gives us an update on what to expect next from the little rover that could.

### arXiv blog

The Emerging Pitfalls Of Nowcasting With Big Data

Statisticians have boasted of the benefits of big data. Now they’re discovering the weaknesses.

Earlier this year, the European Central Bank held a two-day workshop on big data and how it can be used for forecasting. The headline speaker was Hal Varian, chief economist at Google and a number cruncher of rock star status.

### Symmetrybreaking - Fermilab/SLAC

Dark Energy Survey kicks off second season

In September, DES will make data collected in its first season freely available to researchers.

On August 15, with its successful first season behind it, the Dark Energy Survey collaboration began its second year of mapping the southern sky in unprecedented detail. Using the Dark Energy Camera, a 570-megapixel imaging device built by the collaboration and mounted on the Victor M. Blanco Telescope in Chile, the survey’s five-year mission is to unravel the fundamental mystery of dark energy and its impact on our universe.

Along the way, the survey will take some of the most breathtaking pictures of the cosmos ever captured. The survey team has announced two ways the public can see the images from the first year.

Today, the Dark Energy Survey relaunched its photo blog, Dark Energy Detectives. Once every two weeks during the survey’s second season, a new image or video will be posted to www.darkenergydetectives.org with an explanation provided by a scientist. During its first year, Dark Energy Detectives drew thousands of readers and followers, including more than 46,000 followers on its Tumblr site.

Starting on September 1, the one-year anniversary of the start of the survey, the data collected by DES in its first season will become freely available to researchers worldwide. The data will be hosted by the National Optical Astronomy Observatory. The Blanco Telescope is hosted at the National Science Foundation's Cerro Tololo Inter-American Observatory, the southern branch of NOAO.

In addition, the hundreds of thousands of individual images of the sky taken during the first season are being analyzed by thousands of computers at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, Fermi National Accelerator Laboratory and Lawrence Berkeley National Laboratory. The processed data will also be released in coming months.

Scientists on the survey will use these images to unravel the secrets of dark energy, the mysterious substance that makes up 70 percent of the mass and energy of the universe. Scientists have theorized that dark energy works in opposition to gravity and is responsible for the accelerating expansion of the universe.

“The first season was a resounding success, and we’ve already captured reams of data that will improve our understanding of the cosmos,” says DES Director Josh Frieman of Fermilab and the University of Chicago. “We’re very excited to get the second season under way and continue to probe the mystery of dark energy.”

While results on the survey’s probe of dark energy are still more than a year away, a number of scientific results have already been published based on data collected with the Dark Energy Camera.

The first scientific paper based on Dark Energy Survey data was published in May by a team led by Ohio State University’s Peter Melchior. Using data that the survey team acquired while putting the Dark Energy Camera through its paces, they used a technique called gravitational lensing to determine the masses of clusters of galaxies.

In June, Dark Energy Survey researchers from the University of Portsmouth and their colleagues discovered a rare superluminous supernova in a galaxy 7.8 billion light years away. A group of students from the University of Michigan discovered five new objects in the Kuiper Belt, a region in the outer reaches of our solar system, including one that takes over a thousand years to orbit the Sun.

In February, Dark Energy Survey scientists used the camera to track a potentially hazardous asteroid that approached Earth. The data was used to show that the newly discovered Apollo-class asteroid 2014 BE63 would pose no risk.

Several more results are expected in the coming months, says Gary Bernstein of the University of Pennsylvania, project scientist for the Dark Energy Survey.

The Dark Energy Camera was built and tested at Fermilab. The camera can see light from more than 100,000 galaxies up to 8 billion light-years away in each crystal-clear digital snapshot.

“The Dark Energy Camera has proven to be a tremendous tool, not only for the Dark Energy Survey, but also for other important observations conducted year-round,” says Tom Diehl of Fermilab, operations scientist for the Dark Energy Survey. “The data collected during the survey’s first year—and its next four—will greatly improve our understanding of the way our universe works.”

Like what you see? Sign up for a free subscription to symmetry!

### Tommaso Dorigo - Scientificblogging

Tight Constraints On Dark Matter From CMS
Although now widely accepted as the most natural explanation of the observed features of the universe around us, dark matter remains a highly mysterious entity to this day. There are literally dozens of possible candidates to explain its nature, wide-ranging in size from subnuclear particles all the way to primordial black holes and beyond. To particle physicists, it is of course natural to assume that dark matter IS a particle, which we have not detected yet. We have a hammer, and that looks like a nail.

### John Baez - Azimuth

El Nino Project (Part 7)

So, we’ve seen that Ludescher et al have a way to predict El Niños. But there’s something a bit funny: their definition of El Niño is not the standard one!

Precisely defining a complicated climate phenomenon like El Niño is a tricky business. Lots of different things tend to happen when an El Niño occurs. In 1997-1998, we saw these:

But what if just some of these things happen? Do we still have an El Niño or not? Is there a right answer to this question, or is it partially a matter of taste?

A related puzzle: is El Niño a single phenomenon, or several? Could there be several kinds of El Niño? Some people say there are.

Sometime I’ll have to talk about this. But today let’s start with the basics: the standard definition of El Niño. Let’s see how this differs from Ludescher et al’s definition.

### The standard definition

The most standard definitions use the Oceanic Niño Index or ONI, which is the running 3-month mean of the Niño 3.4 index:

• An El Niño occurs when the ONI is over 0.5 °C for at least 5 months in a row.

• A La Niña occurs when the ONI is below -0.5 °C for at least 5 months in a row.

Of course I should also say exactly what the ‘Niño 3.4 index’ is, and what the ‘running 3-month mean’ is.

The Niño 3.4 index is the area-averaged, time-averaged sea surface temperature anomaly for a given month in the region 5°S-5°N and 170°-120°W:

Here anomaly means that we take the area-averaged, time-averaged sea surface temperature for a given month — say February — and subtract off the historical average of this quantity — that is, for Februaries of other years on record.
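In code, this subtraction of a per-calendar-month average looks like the following. This is a minimal sketch of my own (the function name is mine), using a single climatology built from all years on record; the more careful handling with fixed base periods is a separate subtlety.

```python
def monthly_anomalies(values):
    """Sea surface temperature anomalies.

    values: one temperature per month, 12 per year, starting in January.
    Each anomaly is that month's value minus the average of the same
    calendar month over all years on record, so a warm February counts
    as warm only relative to other Februaries, not to the annual cycle.
    """
    n_years = len(values) // 12
    climatology = [sum(values[m::12]) / n_years for m in range(12)]
    return [v - climatology[t % 12] for t, v in enumerate(values)]
```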

If you’re clever you can already see room for subtleties and disagreements. For example, you can get sea surface temperatures in the Niño 3.4 region here:

Niño 3.4 data since 1870 calculated from the HadISST1, NOAA. Discussed in N. A. Rayner et al, Global analyses of sea surface temperature, sea ice, and night marine air temperature since the late nineteenth century, J. Geophys. Res. 108 (2003), 4407.

However, they don’t actually provide the Niño 3.4 index.

You can get the Niño 3.4 index here:

You can also get it from here:

Monthly Niño 3.4 index, Climate Prediction Center, National Weather Service.

The actual temperatures in Celsius on the two websites are quite close — but the anomalies are rather different, because the second one ‘subtracts off the historical average’ in a way that takes global warming into account. For example, to compute the Niño 3.4 index in June 1952, instead of taking the average temperature that month and subtracting off the average temperature for all Junes on record, they subtract off the average for Junes in the period 1936-1965. Averages for different periods are shown here:

You can see how these curves move up over time: that’s global warming! It’s interesting that they go up fastest during the cold part of the year. It’s also interesting to see how gentle the seasons are in this part of the world. In the old days, the average monthly temperatures ranged from 26.2 °C in the winter to 27.5 °C in the summer — a mere 1.3 °C fluctuation.

Finally, to compute the ONI in a given month, we take the average of the Niño 3.4 index in that month, the month before, and the month after. This definition of running 3-month mean has a funny feature: we can’t know the ONI for this month until next month!
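A minimal sketch of the centered running mean (my own code, not from any official source) makes that funny feature explicit: the current month’s ONI cannot be filled in until next month’s Niño 3.4 value arrives.

```python
def oni_from_nino34(nino34):
    """Centered 3-month running mean of monthly Nino 3.4 values.

    The first and last months get None: the window needs the month
    before and the month after, so this month's ONI is only known
    once next month's Nino 3.4 value is in.
    """
    out = [None] * len(nino34)
    for t in range(1, len(nino34) - 1):
        out[t] = round((nino34[t - 1] + nino34[t] + nino34[t + 1]) / 3, 2)
    return out

print(oni_from_nino34([0.0, 0.3, 0.6, 0.9]))  # [None, 0.3, 0.6, None]
```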

You can get a table of the ONI here:

Cold and warm episodes by season, Climate Prediction Center, National Weather Service.

### Ludescher et al

Now let’s compare Ludescher et al. They say there’s an El Niño when the Niño 3.4 index is over 0.5°C for at least 5 months in a row. By not using the ONI — by using the Niño 3.4 index instead of its 3-month running mean — they could be counting some short ‘spikes’ in the Niño 3.4 index as El Niños that wouldn’t count as El Niños by the usual definition.

I haven’t carefully checked to see how much changing the definition would affect the success rate of their predictions. To be fair, we should also let them change the value of their parameter θ, which is tuned to be good for predicting El Niños in their setup. But we can see that there could be some ‘spike El Niños’ in this graph of theirs, that might go away with a different definition. These are places where the red line stays over the horizontal line for at least 5 months, but not much more:

Let’s look at the spike around 1975. See that green arrow at the beginning of 1975? That means Ludescher et al are claiming to successfully predict an El Niño sometime in the next calendar year. We can zoom in for a better look:

The tiny blue bumps are where the Niño 3.4 index exceeds 0.5.

Let’s compare the ONI as computed by the National Weather Service, month by month, with El Niños in red and La Niñas in blue:

1975: 0.5, -0.5, -0.6, -0.7, -0.8, -1.0, -1.1, -1.2, -1.4, -1.5, -1.6, -1.7

1976: -1.5, -1.1, -0.7, -0.5, -0.3, -0.1, 0.2, 0.4, 0.6, 0.7, 0.8, 0.8

1977: 0.6, 0.6, 0.3, 0.3, 0.3, 0.4, 0.4, 0.4, 0.5, 0.7, 0.8, 0.8

1978: 0.7, 0.5, 0.1, -0.2, -0.3, -0.3, -0.3, -0.4, -0.4, -0.3, -0.1, -0.1

So indeed an El Niño started in September 1976. The ONI only stayed above 0.5 for 6 months, but that’s enough. Ludescher and company luck out!
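We can check that run-length claim mechanically. Here’s a small Python sketch (my own helper, not from the post) applied to the ONI values listed above, copied verbatim; index 0 is January 1975, so index 20 is September 1976:

```python
oni = [
    # 1975
    0.5, -0.5, -0.6, -0.7, -0.8, -1.0, -1.1, -1.2, -1.4, -1.5, -1.6, -1.7,
    # 1976
    -1.5, -1.1, -0.7, -0.5, -0.3, -0.1, 0.2, 0.4, 0.6, 0.7, 0.8, 0.8,
    # 1977
    0.6, 0.6, 0.3, 0.3, 0.3, 0.4, 0.4, 0.4, 0.5, 0.7, 0.8, 0.8,
    # 1978
    0.7, 0.5, 0.1, -0.2, -0.3, -0.3, -0.3, -0.4, -0.4, -0.3, -0.1, -0.1,
]

def episodes(series, threshold=0.5, min_len=5):
    """Return (start_index, length) of runs where the series stays
    strictly above threshold for at least min_len months."""
    runs, start = [], None
    for t, x in enumerate(series + [float("-inf")]):  # sentinel closes last run
        if x > threshold:
            if start is None:
                start = t
        elif start is not None:
            if t - start >= min_len:
                runs.append((start, t - start))
            start = None
    return runs

print(episodes(oni))  # [(20, 6)]: starts September 1976, lasts 6 months
```

The late-1977 warm spell (0.7, 0.8, 0.8, 0.7) lasts only 4 months above the threshold, so it correctly fails the 5-month rule.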

Just for fun, let’s look at the National Weather service Niño 3.4 index to see what that’s like:

1975: -0.33, -0.48, -0.72, -0.54, -0.68, -1.17, -1.07, -1.19, -1.36, -1.69, -1.45, -1.76

1976: -1.78, -1.10, -0.55, -0.53, -0.33, -0.10, 0.20, 0.39, 0.49, 0.88, 0.85, 0.63

So, this exceeded 0.5 in October 1976. That’s when Ludescher et al would say the El Niño starts, if they used the National Weather Service data.

Let’s also compare the NCAR Niño 3.4 index:

1975: -0.698, -0.592, -0.579, -0.801, -1.025, -1.205, -1.435, -1.620, -1.699, -1.855, -2.041, -1.960

1976: -1.708, -1.407, -1.026, -0.477, -0.095, 0.167, 0.465, 0.805, 1.039, 1.137, 1.290, 1.253

It’s pretty different! But it also gives an El Niño in 1976 according to Ludescher et al’s definition: the Niño 3.4 index exceeds 0.5 starting in August 1976.

### For further study

This time we didn’t get into the interesting question of why one definition of El Niño is better than another. For that, try:

• Kevin E. Trenberth, The definition of El Niño, Bulletin of the American Meteorological Society 78 (1997), 2771–2777.

There could also be fundamentally different kinds of El Niño. For example, besides the usual sort where high sea surface temperatures are centered in the Niño 3.4 region, there could be another kind centered farther west near the International Date Line. This is called the dateline El Niño or El Niño Modoki. For more, try this:

• Nathaniel C. Johnson, How many ENSO flavors can we distinguish?, Journal of Climate 26 (2013), 4816-4827.

which has lots of references to earlier work. Here, to whet your appetite, is his picture showing the 9 most common patterns of sea surface temperature anomalies in the Pacific:

At the bottom of each is a percentage showing how frequently that pattern has occurred from 1950 to 2011. To get these pictures Johnson used something called a ‘self-organizing map analysis’ – a fairly new sort of cluster analysis done using neural networks. This is the sort of thing I hope we get into as our project progresses!

### The series so far

Just in case you want to get to old articles, here’s the story so far:

El Niño project (part 1): basic introduction to El Niño and our project here.

El Niño project (part 2): introduction to the physics of El Niño.

El Niño project (part 3): summary of the work of Ludescher et al.

El Niño project (part 4): how Graham Jones replicated the work by Ludescher et al, using software written in R.

El Niño project (part 5): how to download R and use it to get files of climate data.

El Niño project (part 6): Steve Wenner’s statistical analysis of the work of Ludescher et al.

El Niño project (part 7): the definition of El Niño.

## August 17, 2014

### Sean Carroll - Preposterous Universe

Single Superfield Inflation: The Trailer

This is amazing. (Via Bob McNees and Michael Nielsen on Twitter.)

Backstory for the puzzled: here is a nice paper that came out last month, on inflation in supergravity.

Inflation in Supergravity with a Single Chiral Superfield

We propose new supergravity models describing chaotic Linde- and Starobinsky-like inflation in terms of a single chiral superfield. The key ideas to obtain a positive vacuum energy during large field inflation are (i) stabilization of the real or imaginary partner of the inflaton by modifying a Kahler potential, and (ii) use of the crossing terms in the scalar potential originating from a polynomial superpotential. Our inflationary models are constructed by starting from the minimal Kahler potential with a shift symmetry, and are extended to the no-scale case. Our methods can be applied to more general inflationary models in supergravity with only one chiral superfield.

Supergravity is simply the supersymmetric version of Einstein’s general theory of relativity, but unlike GR (where you can consider just about any old collection of fields to be the “source” of gravity), the constraints of supersymmetry place quite specific requirements on what counts as the “stuff” that creates the gravity. In particular, the allowed stuff comes in the form of “superfields,” which are combinations of boson and fermion fields. So if you want to have inflation within supergravity (which is a very natural thing to want), you have to do a bit of exploring around within the allowed set of superfields to get everything to work. Renata Kallosh and Andrei Linde, for example, have been examining this problem for quite some time.

What Ketov and Terada have managed to do is boil the necessary ingredients down to a minimal amount: just a single superfield. Very nice, and worth celebrating. So why not make a movie-like trailer to help generate a bit of buzz?

Which is just what Takahiro Terada, a PhD student at the University of Tokyo, has done. The link to the YouTube video appeared in an unobtrusive comment in the arxiv page for the revised version of their paper. iMovie provides a template for making such trailers, so it can’t be all that hard to do — but (1) nobody else does it, so, genius, and (2) it’s a pretty awesome job, with just the right touch of humor.

I wouldn’t have paid nearly as much attention to the paper without the trailer, so: mission accomplished. Let’s see if we can’t make this a trend.

### Peter Coles - In the Dark

Newcastle Joins the Resurgence of UK Physics

I’ve posted a couple of times about how Physics seems to be undergoing a considerable resurgence in popularity at undergraduate level across the United Kingdom, with e.g. Lincoln University setting up a new programme. Now there’s further evidence: Newcastle University has decided to re-open its Physics course for 2015 entry.

The University of Newcastle had an undergraduate course in Physics until 2004 when it decided to close it down, apparently owing to lack of demand. They did carry on doing some physics research (in nanoscience, biophysics, optics and astronomy) but not within a standalone physics department. The mid-2000s were tough for UK physics,  and many departments were on the brink at that time. Reading, for example, closed its Physics department in 2006; there is talk that they might be starting again too.

The background to the Newcastle decision is that admissions to physics departments across the country are growing at a healthy rate, a fact that could not have been imagined just ten years ago. Times were tough here at Sussex until relatively recently, but now we’re expanding on the back of increased student numbers and research successes. Indeed, having just been through a very busy clearing and confirmation period at Sussex University, it is notable that it’s the science Schools that have generally done best. Sussex has traditionally been viewed as basically a Liberal Arts College with some science departments; over 70% of the students here at present are not studying science subjects. With Mathematics this year overtaking English as the most popular A-level choice, this may well change the complexion of Sussex University relatively rapidly.

I’ve always felt that it’s a scandal that there are only around 40 UK “universities” with physics departments. Call me old-fashioned, but I think a university without a physics department is not a university at all; it’s particularly strange that a Russell Group university such as Newcastle should not offer a physics degree. I believe in the value of physics for its own sake as well as for the numerous wider benefits it offers society in terms of new technologies and skills. Although the opening of a new physics department will create more competition for the rest of us, I think it’s a very good thing for the subject and for the Higher Education sector in general.

That said, it won’t be an easy task to restart an undergraduate physics programme in Newcastle, especially if it is intended to have as large an intake as most successful existing departments (i.e. well over 100 each year). Students will be applying in late 2014 or early 2015 for entry in September 2015. The problem is that the new course won’t figure in any of the league tables on which most potential students base their choice of university. They won’t have an NSS score either. Also, the course will probably need some time before it can be accredited by the Institute of Physics (as most UK physics courses are).

There’s a lot of ground to make up, and my guess is that it will take some years to build up a significant intake. The University bosses will therefore have to be patient and be prepared to invest heavily in this initiative until it can break even. The decision a decade ago to cut physics doesn’t exactly inspire confidence that they will be prepared to do this, but times have changed and so have the people at the helm, so maybe that’s an unfair comment.

There are also difficulties on the research side (which is vital for a proper undergraduate teaching programme). Grant funding is already spread very thin, and there is little sign of any improvement for the foreseeable future in the “flat cash” situation we’re currently in. There’s also the stifling effect of the Research Excellence Framework I’ve blogged about before. I don’t know whether Newcastle University intends to expand its staff numbers in Physics or just to rearrange existing staff into a new department, but if they do the former they will have to succeed against well-established competitors in an increasingly tight funding regime. A great deal of thought will have to go into deciding which areas of research to develop, especially as their main regional competitor, Durham University, is very strong in physics.

On the other hand, there are some positives, not least of which is that Newcastle is and has always been a very popular city for students (being of course the finest city in the whole world). These days funding follows students, so that could be a very powerful card if played wisely.

Anyway, these are all problems for other people to deal with. What I really wanted to do was to wish this new venture well and to congratulate Newcastle on rejoining the ranks of proper universities (i.e. ones with physics departments). Any others thinking of joining the club?

## August 16, 2014

### Peter Coles - In the Dark

Sussex and the World Premier League of Physics

In the office again busy finishing off a few things before flying off for another conference (of which more anon).

Anyway, I thought I’d take a short break for a cup of tea and a go on the blog.

Today is the first day of the new Premiership season and, coincidentally, last week saw some good news about the Department of Physics and Astronomy at the University of Sussex in a different kind of league table.

The latest (2014) Academic Rankings of World Universities (often called the “Shanghai Rankings”) are out so, as I suspect many of my colleagues also did, I drilled down to look at the rankings of Physics departments.

Not surprisingly the top six (Berkeley, Princeton, MIT, Harvard, Caltech, & Stanford) are all based in the USA. The top British university is, also not surprisingly, Cambridge in 9th place. That’s the only UK university in the top ten for Physics. The other leading UK physics departments are: Manchester (13th), Imperial (15th), Edinburgh (20th), Durham (28th), Oxford (39th) and UCL (47th). I don’t think there will be any surprise that these all made it into the top 50 departments worldwide.

Just outside the top 50 in joint 51st place in the world is the Department of Physics & Astronomy at the University of Sussex. For a relatively small department in a relatively small university this is a truly outstanding result. It puts the Department clear in 8th place in the UK, ahead of Birmingham, Bristol, Leicester, Queen Mary, Nottingham, Southampton, St Andrews, Lancaster, Glasgow, Sheffield and Warwick, all of whom made the top 200 in the world.

Incidentally, two of the other departments tied in 51st place are at Nagoya University in Japan (where I visited in January) and Copenhagen University in Denmark (where I’m going next week).

Although I have deep reservations about the usefulness of league tables, I’m not at all averse to using them as an excuse for a celebration and to help raise the profile of Physics and Astronomy at Sussex generally.  I’d therefore like to take the opportunity to offer hearty congratulations to the wonderful staff of the Department of Physics & Astronomy on their achievement.

With the recent investments we’ve had and further plans for growth I hope over the next few years we can move even further up the rankings. Unless of course the methodology changes or we’re subject to a “random” (ie downward) fluctuation…

## August 15, 2014

### Emily Lakdawalla - The Planetary Society Blog

Interstellar Dust Grains Found by Stardust@home
Seven possible interstellar dust grains have been found by Stardust@home, a citizen scientist project that The Planetary Society helped out early on. The dust grains would be the first ever examples of contemporary interstellar dust.

### Quantum Diaries

Coffee and code: Innovation at the CERN Webfest

The Particle Clicker team working late into the night.

This weekend CERN hosted its third Summer Student Webfest, a three-day caffeine-fuelled coding event at which participants worked in small teams to build innovative projects using open-source web technologies.

There were a host of projects to inspire the public to learn about CERN and particle physics, and others to encourage people to explore web-based solutions to humanitarian disasters with CERN’s partner UNOSAT.

The event opened with a session of three-minute pitches: participants with project ideas tried to recruit team members with particular skills, from software development and design expertise to acumen in physics. Projects crystallised, merged or floundered as 14 pitches resulted in the formation of eight teams. Coffee was brewed and the hacking commenced…

Members of the Run Broton Run team help each other out at the CERN Summer Student Webfest 2014 (Image: James Doherty)

The weekend was interspersed with mentor-led workshops introducing participants to web technologies. CERN’s James Devine detailed how Arduino products can be used to build cosmic-ray detectors or monitor LHC operation, while developers from PyBossa provided an introduction to building crowdsourced citizen science projects on crowdcrafting.org. (See a full list of workshops).

After three days of hard work and two largely sleepless nights, the eight teams were faced with the daunting task of presenting their projects to a panel of experts, with a trip to the Mozilla Festival in London up for grabs for one member of the overall winning team. The teams presented a remarkable range of applications built from scratch in under 48 hours.

Students had the opportunity to collaborate with Ben Segal (middle), inductee of the Internet Hall of Fame.

Prizes were awarded as follows:

Best Innovative Project: Terrain Elevation

A mobile phone application that accurately measures elevation. Designed as an economical method of choosing sites with a low risk of flooding for refugee camps.

Find out more.

Best Technology Project: Blindstore

A private query database with real potential for improving online privacy.

Find out more here.

Best Design Project: GeotagX and PyBossa

An easy-to-use crowdsourcing platform for NGOs to use in responding to humanitarian disasters.

Find out more here and here.

Best Educational Project: Run Broton Run

An educational 3D game that uses Kinect technology.

Find out more here.

Overall Winning Project: Particle Clicker

Particle Clicker is an elegantly designed detector-simulation game for web.

Play here.

“It’s been an amazing weekend where we’ve seen many impressive projects from different branches of technology,” says Kevin Dungs, captain of this year’s winning team. “I’m really looking forward to next year’s Webfest.”

Participants of the CERN Summer Student Webfest 2014 in the CERN Auditorium after three busy days’ coding.

The CERN Summer Student Webfest was organised by François Grey, Ben Segal and SP Mohanty, and sponsored by the Citizen Cyberlab, Citizen Cyberscience Centre, Mozilla Foundation and The Port. Event mentors were from CERN, PyBossa and UNITAR/UNOSAT. The judges were Antonella del Rosso (CERN Communications), Bilge Demirkoz (CERN Researcher) and Fons Rademakers (CTO of CERN Openlab).

### Clifford V. Johnson - Asymptotia

West Maroon Valley Wild Flowers
I promised two things in a previous post. One was the incomplete sketch I did of Crater Lake and West Maroon Valley (not far from Aspen) that I started before the downpour began, last weekend. It is on the left (click to enlarge). The other is a collection of the wild flowers and other pretty things that I picked for you (non-destructively) from my little hike in the West Maroon valley. There's Columbine, Indian Paintbrush, and so forth, along with [...] Click to continue reading this post

### Andrew Jaffe - Leaves on the Line

Loncon 3

Briefly (but not brief enough for a single tweet): I’ll be speaking at Loncon 3, the 72nd World Science Fiction Convention, this weekend (doesn’t that website have a 90s retro feel?).

At 1:30 on Saturday afternoon, I’ll be part of a panel trying to answer the question “What Is Science?” As Justice Potter Stewart once said in a somewhat more NSFW context, the best answer is probably “I know it when I see it” but we’ll see if we can do a little better than that tomorrow. My fellow panelists seem to be writers, curators, philosophers and theologians (one of whom purports to believe that the “the laws of thermodynamics prove the existence of God” — a claim about which I admit some skepticism…) so we’ll see what a proper physicist can add to the discussion.

At 8pm in the evening, for participants without anything better to do on a Saturday night, I’ll be alone on stage discussing “The Random Universe”, giving an overview of how we can somehow learn about the Universe despite incomplete information and inherently random physical processes.

There is plenty of other good stuff throughout the convention, which runs from 14 to 18 August. Imperial Astrophysics will be part of “The Great Cosmic Show”, with scientists talking about some of the exciting astrophysical research going on here in London. And Imperial’s own Dave Clements is running the whole (not fictional) science programme for the convention. If you’re around, come and say hi to any or all of us.

### arXiv blog

Pattern Recognition Algorithm Recognizes When Drivers Are on the Phone

Using a mobile phone while driving can significantly increase the chances of an accident. Now a dashboard cam can spot when drivers pick up the phone.

By some estimates, 85 percent of drivers in America use a mobile phone while at the wheel. The National Highway Traffic Safety Administration estimates that during daylight hours, 5 percent of cars are being driven by people making phone calls.

### ZapperZ - Physics and Physicists

Cuddly Plushes At Synchrotron Beamlines
I mentioned my final visit to the NSLS before its impending shutdown. I had to stop and chuckle at one of the UV beamlines during my casual tour around the place. There were cuddly plushes strategically placed along the beamlines, including Kermit and Miss Piggy in a rather amorous position (not that there's anything wrong with it).

And yes, I took a few photos.

I hope they remember to rescue these guys before the wrecking ball arrives.

Zz.

### The Great Beyond - Nature blog

Oceans need saving before science is nailed


Don’t just gather data, do something. Scientists need to stop using a lack of knowledge as an excuse for not doing more to protect threatened species, a major gathering of marine conservationists has been warned.

“Science matters deeply, but we can’t let ourselves be trapped by the need to gather more data,” Amanda Vincent, a marine researcher at the University of British Columbia, told delegates at the opening of the International Marine Conservation Congress, which kicked off on 14 August in Glasgow, UK.

Vincent’s work with seahorses has involved fighting for better control of the international trade in these animals, many of which are endangered. Trade in seahorses is now restricted under the Convention on International Trade in Endangered Species (CITES). If scientists had waited until they knew everything about every species – or even until they had enough data to propose detailed plans for managing catches in individual countries – this protection would never have arrived, she says.

Vincent told the meeting that every speaker who called for more data on a conservation issue should also be prepared to present a recommendation for something that could actually be done now.

Making an analogy with the medical profession, she told the meeting that doctors use all available evidence when deciding how to treat their patients, but when there is a lack of evidence for a particular condition they don’t generally stand by and do nothing. The oceans are under threat, says Vincent, and “you don’t do research while your patient is dying”.

She warned the gathering of conservation researchers that “we’re a bit weasely sometimes in hiding behind our lack of knowledge” and told them to “just get going”.

### astrobites - astro-ph reader's digest

A white dwarf eating a debris disk

Figure 1. Artist’s impression of a debris disk around a white dwarf. Credit: NASA, ESA, STScI, and G. Bacon (STScI)

White dwarfs are dense stellar remnants, roughly the mass of the sun and the radius of the earth. They’re the hot core of a star left over after nuclear fusion has stopped and the star has expelled its outer layers into a planetary nebula. They are also odd but interesting places to learn about exoplanets. In this case, the authors were looking not at a planet itself, but at a debris disk around the white dwarf J0959−0200, which was likely formed from tidally disrupted planetesimals.

The authors were comparing WISE and Spitzer IRAC infrared observations of white dwarfs, and found an infrared excess. This extra infrared emission is most likely caused by dust in a debris disk around the white dwarf absorbing the light from the white dwarf and re-emitting it at infrared wavelengths. These data were from 2010. Looking to learn more, the authors obtained updated Spitzer measurements in 2014. They found that the source had decreased in brightness by almost 35% in the two IRAC bands (3.6 and 4.5 microns). They also had J, H, and K band data from 2005, so they obtained updated measurements in 2014, and found that the source had decreased in brightness in the K band (~2.2 microns), while remaining unchanged in the J and H bands (~1.2 and 1.6 microns, respectively). This is consistent with the Spitzer results, and it means the cooler, dusty component of the system is decreasing in brightness, while the hotter white dwarf itself remains unchanged.

So if the disk has decreased in brightness, what does that mean? Well, the infrared excess is due to a disk of dusty material around the white dwarf. How much infrared excess you see depends on the width of the disk (the radii of its inner and outer edges) and its inclination. If the disk is edge-on to the line of sight, you don’t see much of the disk’s surface, and therefore not much infrared excess. But if the disk is face-on to the line of sight, you see the whole disk, and lots of infrared emission. And the more surface area the disk presents (the smaller the inner radius and the larger the outer radius), the more excess you see.
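The geometry argument can be sketched with a toy model: a flat, optically thick dust ring heated by the white dwarf, with the standard passive-disk temperature profile T(r) ∝ r^(−3/4). The white dwarf temperature, the outer-edge radius, and the overall normalisation below are illustrative assumptions, not the paper’s fitted values; only the inner radii (10.5 and 14 white-dwarf radii) come from the text.

```python
import numpy as np

# Toy model of the infrared excess from a flat, opaque dust ring around
# a white dwarf.  T_wd, R_wd, and the outer edge are illustrative
# numbers, NOT the fitted values from the paper.
h, c, k = 6.626e-34, 3.0e8, 1.381e-23   # SI constants

def planck(nu, T):
    """Blackbody specific intensity B_nu(T) in SI units."""
    return 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))

def ring_flux(nu, r_in, r_out, T_wd=1.0e4, R_wd=6.4e6, incl=0.0, n=500):
    """Relative disk flux: blackbody annuli with the flat-disk profile
    T(r) ~ T_wd * (R_wd / r)**0.75, scaled by cos(inclination).
    Arbitrary overall normalisation (we only compare ratios)."""
    r = np.linspace(r_in, r_out, n)
    T = T_wd * (R_wd / r) ** 0.75
    dr = (r_out - r_in) / (n - 1)
    return np.cos(incl) * np.sum(planck(nu, T) * 2 * np.pi * r) * dr

nu = c / 4.5e-6                                       # 4.5 micron IRAC band
f_close = ring_flux(nu, 10.5 * 6.4e6, 30 * 6.4e6)     # inner edge at 10.5 R_wd
f_far   = ring_flux(nu, 14.0 * 6.4e6, 30 * 6.4e6)     # inner edge at 14 R_wd
print(f"inner edge 10.5 -> 14 R_wd: flux drops by {100 * (1 - f_far / f_close):.0f}%")
```

With these made-up numbers the hot inner annuli dominate the 4.5-micron emission, so pushing the inner edge outward costs a few tens of percent of the flux, in the same ballpark as the observed drop; an edge-on disk (incl near 90 degrees) would show almost no excess at all.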

For the data taken in 2010 and earlier, the observers measure so much infrared excess that the disk must be face-on and have a very small inner radius (only 10.5 times the radius of the white dwarf, which itself has a radius only about that of the earth!), so that it is thoroughly heated by the white dwarf. The outer edge is less well constrained, because it is cooler and best constrained by 8 micron observations, which were not available for this project.

What, then, accounts for the drop in flux observed between 2010 and 2014? The model that best describes the new observations while remaining consistent with the 2010 data is one in which the inner edge of the disk moved out from 10.5 times the white dwarf’s radius in 2010 to 14 times the radius in 2014. Essentially, the disk was eroded from the white dwarf’s side. See Figure 2 for details. But what can cause this? The authors offer two suggestions.

Figure 2. SED fits to the observations. The blue line is fitted to the data prior to March 2010, and the red line to data taken after March 2010. The only difference between the two models is the inner disk radius, which increases from 10.5 to 14 times the radius of the white dwarf. Credit: Xu et al. 2014.

The first is that an asteroid could have impacted the disk and disrupted it. But it would have to be an anomalously large asteroid to cause the large increase in the inner disk radius observed. The other explanation relies on the fact that prior to 2010, a very hot inner disk temperature was required to explain the observations. Such a high temperature makes the inner disk unstable, and dusty/rocky material could interact with the viscous gas and cause sudden events of high-rate accretion onto the star: high enough to dissolve something like 3% of the disk’s mass within ~300 days, in a disk that would otherwise stick around for roughly a million years. If this second case is what happened, that level of accretion would be observable, similar to a nova event. The authors advise observing the star on shorter timescales in order to catch a future such event in action and answer definitively what’s going on around this dusty (now slightly less dusty) white dwarf.

### Symmetrybreaking - Fermilab/SLAC

LHC research, presented in tangible tidbits

Students working on their PhDs at the Large Hadron Collider explain their research with snacks, board games and Legos.

Concepts in particle physics can be hard to visualize. But a series of videos on the US LHC YouTube channel endeavors to make the abstract and complex concepts of particle physics easier to grasp.

In the videos, PhD students on experiments at the Large Hadron Collider each explain a basic concept from their research in three minutes or less using a household object. Check out the playlist to find out how LHC data is like trail mix, the Standard Model is like the Settlers of Catan board game, and the fundamental particles are like Lego blocks.

Like what you see? Sign up for a free subscription to symmetry!

### Tommaso Dorigo - Scientificblogging

The Periodic Diet
It is a well-known fact that given the availability of food, we eat far more than what would be healthy for our body. Obesity has become a plague in many countries, and the fact that it correlates very tightly with a decreased life expectancy is not a random chance but the demonstrated result of increased risk of life-threatening conditions connected with excess body fat.

Yet we eat, and drink, and eat. We look like self-pleasing monkeys trained to press a button to self-administer a drug. To make matters worse, many of the foods and drinks we consume contain substances purposely added to increase our addiction. So it takes a strong will to control our body weight.

### Lubos Motl - string vacua and pheno

If you want to assure yourself that you're capable of doing all the work that is done by people at CERN – from the Director General to the PhD advisers, detector technicians, statisticians etc., you may simply open this CERN game:
CERN particle clicker
You must click buttons to discover CP violation and do many other things.

Heuer and the folks at ATLAS, CMS, and other collaborations aren't doing too different things.

If you're going to click vigorously enough, you will be receiving packages with beer, coffee, graduate students, postdocs, research fellows, reputation, media hype, grants, and other commodities.

The first TRF reader who discovers some beyond-the-Standard-Model physics (or hires the board of Google as graduate students) and proves that he's better than all the CERN employees combined :-) should proudly boast about his achievements.

The reputation and funding etc. grew sort of exponentially, with the new units' having new names, and after 2 hours when it was running in a window, mostly without clicking ;-), I had 50 out of 105 achievements.

## August 14, 2014

### ZapperZ - Physics and Physicists

Saying Goodbye To NSLS
I had a chance recently to visit my old stomping ground, the National Synchrotron Light Source (NSLS) at Brookhaven Lab. I spent 3 years there doing my postdoc work, and the facility is about to be shut down at the end of Sept. 2014 as the new facility, the NSLS II just right next door, will take over. The old lady is still running, but you can tell that she's old, decrepit, with lots of aches and pains, and about ready to retire. One can tell that this place is about ready to be shut down when even the vendors no longer refill the vending machines!

I was there on the day that Long Island, NY received 13 inches of rain within a 12 hour period, and walking in the next day, I saw leaks and a few water issues. Oh yeah, the old lady is definitely ready to go. The NSLS was such a workhorse during her glory years. To say that she was over-subscribed is an understatement. The place was packed with users on top of each other. The presence of two separate rings, one for the x-ray and the other for the UV/IR/low energy photons, made it quite unique and useful for many applications and studies.

Across the street from her is the new lady on the block, the NSLS II. She's huge when compared to the old lady, she's shiny and new, more powerful and sleeker. I look forward to visiting her when she's in operation, but I'll never forget the one I spent a lot of days and nights with. She gave me good data. How many dates have you been on where you can say that?

So long, NSLS!

Zz.

### The Great Beyond - Nature blog

Indian universities ordered to cut length of science courses

Posted on behalf of T. V. Padma

Thousands of students and staff at some of India’s leading universities, including the prestigious Indian Institute of Science (IISc), Bangalore, have been left in turmoil after the institutions were ordered to cut the length of their undergraduate science courses to fall in line with national policy.

The IISc was last week told by Smriti Irani, the new minister for Human Resource Development, that it must immediately shorten all ongoing and planned four-year courses by a year. The decision came just weeks after Delhi University, one of India’s biggest, was told it must cancel its four-year programmes, which were only introduced last year. Several private universities have also been told to roll back their undergraduate course length.

The move has caused significant confusion and upheaval. The IISc attempted to agree a compromise deal with the education authorities that would see it give undergraduates the option of leaving its science courses with a non-honours degree after three years; and rename the four-year course. But this has now caused uproar among students, who accuse the IISc management of bowing to government pressure.

Delhi University is complying with the measure, leaving it struggling to reconfigure courses that have already started while rescheduling those due to begin this year. As many as 25,000 students and staff will be affected.

“Such moves could turn the brightest students of India away from a science career; and threaten innovation in higher education, which is in bad need of an overhaul,” says Vishwesa Guttal, assistant professor at IISc’s Centre for Ecological Sciences.

Traditionally, most Indian universities follow a three-year undergraduate programme for both science and arts, modelled on the UK system. But in 2008, three top science academies, the Indian Academy of Sciences in Bangalore, Indian National Science Academy in Delhi and National Academy of Sciences in Allahabad, prepared a position paper on higher education in science, in which they recommended a four-year programme. Their report highlighted some of the major drawbacks in undergraduate science education in India, including compartmentalised teaching of some sub-disciplines, inefficient admissions systems and repetition of topics at BSc and MSc levels. Other deficiencies included poor laboratory facilities, little exposure to research methodologies and limited options for movement between science and technology streams.

In recent years some institutions, including the Indian Institute of Technology, Kanpur (IIT) and several private universities, have introduced longer courses. The IISc began offering its own four-year undergraduate science programmes in 2011, with a focus on equipping students with research skills in the final year, and on building its brand. Some courses at the Indian Institutes of Science Education and Research (IISERs) also adopted four-year programmes.

And in 2013, Dinesh Singh, Delhi University’s vice-chancellor, pushed through a four-year undergraduate programme to replace the three-year one despite strong resistance from teachers and students who were unprepared for the change. It was designed to better prepare students for academic and job market requirements, as well as bring courses more in line with those offered in the United States.

But in June, following the swearing in of a new government under prime minister Narendra Modi, Irani rolled back Delhi University’s four-year programme. The University Grants Commission (UGC) said it “was not in consonance” with the national policy on education. This left thousands of students who enrolled in 2013 in the lurch, and delayed admissions for 2014 as the university struggled to accommodate the changes.

Then, on 6 August, Irani told the Indian Parliament that the government planned to ask the IISc and two private universities to discontinue their four-year undergraduate science programmes. The statement sent shockwaves through the IISc campus, and director Anurag Kumar constituted a committee to look into how the institution could “align our programme with the UGC guidelines”.

Kumar told Nature that the institute would like to retain the four-year programme with its unique strong research component. “The novelty of the IISc four-year programme is that it is creating a small number of researchers trained by some of the top scientists in the country,” he says.

It is understood that the institute has now agreed a compromise deal that would enable students to leave IISc courses after three years with a Bachelor of Science degree, or continue to study for another year and gain a new Bachelor of Science (research) degree. But students and a section of faculty are unhappy at what they see as the institute ‘caving in’ to the UGC’s demands.

There is a sharp divide at Delhi University over the benefits of a four-year programme. Shobhit Mahajan, professor at Delhi University’s faculty of physics, says “there were problems from the word go” with the manner of implementing the change to a four-year system. “It was hare-brained and not based on ground realities of students, teachers and infrastructure.” Besides, adds Mahajan, the main stakeholders – the teachers who were going to teach the new course – were left out of the discussions and decisions.

Tapasya Srivastava, assistant professor at the department of genetics at the University of Delhi, says that while the four-year undergraduate programme may be well established in the US and other countries, “its success in India would require structural changes in not only the Master’s [degrees] but also the preceding school programme. The rudimentary foundation courses seem to make a mockery of the intensive school coursework that a student is put through.”

But others say the current undergraduate system needed an overhaul. Deepak Pental, former vice-chancellor of Delhi University and one of India’s top genetically modified crop scientists, believes a “radical shift” is required. “We have had a one-track system for the past 30 years and were not creating inter-disciplinary studies,” he says. But with the “mucked up” implementation of a four-year programme in Delhi University, “we have lost an opportunity to improve our undergraduate programme”, he adds.

## August 13, 2014

### astrobites - astro-ph reader's digest

Migrating Super-Earths vs. Terrestrial Planets

Title: Terrestrial Planet Formation in the Presence of Migrating Super-Earths
Authors: André Izidoro, Alessandro Morbidelli, Sean N. Raymond
First Author’s Institution: University of Nice-Sophia Antipolis, CNRS, Observatoire de la Côte d’Azur, Laboratoire Lagrange
Paper Status: Accepted for publication in The Astrophysical Journal

Super-Earths: Not So Earth-Like

Of all the kinds of planets we’re finding around other stars—hot Jupiters and mini-Neptunes and those dubiously called “Earth-like”—super-Earths orbiting close to their stars are among the most abundant. About half of sun-like stars are thought to host planets with radii one to four times that of Earth, with orbital periods of less than 100 days. While planets so close to their stars are poor candidates for habitability, they are important to understanding the possibility of other habitable planets in these seemingly common systems.

There are two theories for the formation of close-in super-Earths: they either formed in-situ (where they are), or they formed farther out and subsequently migrated inward. This paper discusses several flaws with the in-situ formation model: it presupposes an extraordinarily massive and dense protoplanetary disk, and it assumes that orbital migration isn’t influential in planet formation. In fact, there is a very strong case for orbital migration being nearly inevitable. Thus, the authors work on the assumption that super-Earths form farther out in the disk and then migrate inward. (This means that super-Earth composition is likely to be higher in volatiles than terrestrial worlds are; in other words: not rocky.)

Super-Earths vs. Terrestrial Planet Formation

It is thought that, in our system, the big gaseous planets formed more quickly than Earth and its rocky compatriots; it’s reasonable to think, then, that super-Earths will also form more quickly than terrestrial planets in their systems. So a migrating super-Earth, forming out past the habitable zone (HZ) but migrating through the zone to its tight orbit, has the potential to wreak havoc on the formation of Earth-like planets in the HZ. The super-Earth will be fully formed and migrating in while the material that will come to form terrestrial planets—rocky worlds in the HZ—is still in debris, smaller planetesimals, and larger planetary embryos.

So we’ve got a super-Earth migrating in from beyond the HZ to a point closer in to its star, basically barreling through the band of material that could come to form Earth-like planets. Is that a problem?

A big variable is how quickly the super-Earths migrate. And this ends up being the deciding factor in the fate of rocky planets around super-Earths. When super-Earths migrate inward quickly, they do little to disturb the protoplanets and planetary embryos that go on to form terrestrial planets (see figure). However, slow-moving super-Earths push and pull much of that rocky planet fodder with them into their close-in orbits, depleting the areas where terrestrial planets could form.

Some Simulations

The authors of this paper came to this conclusion through a range of simulations, with varying migration speeds for the super-Earths and varying distributions of protoplanetary material in the terrestrial zone. They also tested systems with multiple super-Earths migrating inward in sequence, inspired by the Kepler-11 system, which is home to six super-Earth-type planets.

The simulations had two phases: The first phase began with a disk of planetesimals and planetary embryos orbiting within the habitable zone, and one or more migrating super-Earths starting farther out; the outcome showed the super-Earth’s effect on the protoplanetary material. Then, in the second phase, the researchers simulated the evolution of the remaining protoplanetary material to see if, after a few million (simulated) years, the habitable zone had enough material left to form any truly Earth-like worlds.

Snapshots of the dynamical evolution of protoplanetary bodies in the presence of a migrating super-Earth. Black dots and outlined circles represent protoplanetary bodies; the big gray circle is the super-Earth. The x-axis measures distance from the star and the y-axis indicates orbital eccentricity. As the super-Earth moves in, the protoplanets remain well-distributed in distance from the star, and only get shaken up into slight eccentricity. So in this case, a fast-migrating super-Earth does no major damage to the distribution of protoplanetary bodies.

They found that the mass of the migrating super-Earth made little to no difference to the outcome. What mattered was the speed. A super-Earth that took a mere hundred thousand years to migrate in from 5 AU to 0.1 AU scattered or accreted the planetesimals and embryos in its orbit, but the debris didn’t scatter far. Once the super-Earth had made its way through the HZ, 75% of the initial rocky matter had survived, and the subsequent simulation showed the familiar pattern of terrestrial planet formation from that material.

A slow super-Earth, on the other hand, does much more shepherding of planetesimals, dragging them with it inward toward the star. The slow migration allows for much of the rocky protoplanetary material to be captured in orbital resonance with the super-Earth, and in toward the star they spiral together. Any migration timescale over a million years leaves much less than one Earth mass of material in the neighborhood of the habitable zone, so much less that in some simulations the HZ was effectively cleaned out.
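One way to build intuition for why the migration timescale matters is to count how many orbits the super-Earth spends while its semi-major axis sweeps through the habitable zone: more orbits in the zone means more opportunity to capture planetesimals into resonance and shepherd them inward. This back-of-envelope sketch assumes a constant migration rate from 5 AU to 0.1 AU around a 1-solar-mass star; the habitable-zone limits (0.5–1.5 AU) are illustrative picks, not values from the paper.

```python
import numpy as np

# Back-of-envelope: orbits completed while a migrating super-Earth's
# semi-major axis crosses the habitable zone.  Constant da/dt is an
# assumption; HZ limits of 0.5-1.5 AU are illustrative.

def orbits_in_hz(t_migrate_yr, a_start=5.0, a_end=0.1, hz=(0.5, 1.5), n=10_000):
    """Integrate dt/P = da / (da/dt * P(a)) across the habitable zone."""
    dadt = (a_start - a_end) / t_migrate_yr   # migration rate, AU per year
    a = np.linspace(hz[1], hz[0], n)          # sweep inward through the HZ
    period = a ** 1.5                         # Kepler's third law (years, M = 1 M_sun)
    da = (hz[1] - hz[0]) / (n - 1)
    return float(np.sum(da / (dadt * period)))

fast = orbits_in_hz(1.0e5)   # ~10^5 yr migration: little shepherding in the paper
slow = orbits_in_hz(1.0e6)   # ~10^6 yr migration: strong shepherding in the paper
print(f"fast: ~{fast:.0f} orbits spent in the HZ; slow: ~{slow:.0f}")
```

Since the dwell time scales linearly with the migration timescale, the slow migrator spends ten times as many orbits crossing the zone, which is the regime where resonant capture and shepherding become efficient.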

If super-Earth migration is so common, why are all the big planets in our Solar System so far out? Super-Earths are sometimes called mini-Neptunes, too, after all, and our own Neptune is nowhere near a hundred-day orbit. Saturn and Jupiter may have served as buffers, impeding Uranus and Neptune from migrating. This paper suggests that our Solar System may be atypical. The chance for abundant terrestrial planets in other systems may largely depend on how quickly or slowly those not-so-Earth-like super-Earths migrate.

### Clifford V. Johnson - Asymptotia

Outstanding in Their Fields…
In case you missed it, Maryam Mirzakhani has been awarded the Fields Medal! This is regarded as the most prestigious prize in mathematics. Here's a Guardian article covering it at a general level, and here is the page on all the award winners, with more detail on each, at the International Mathematical Union website. The reason this is a big deal (and why it is newsworthy) is because it is the first time the prize has been awarded to a woman. In a world where, despite the number of excellent women mathematicians out there, there is still a perception problem in the general populace about who (or more to the point, what gender) is associated with achievement in mathematics, it is important to note and celebrate such landmarks. I also note that one of the other 2014 awardees, Artur Avila, is from Brazil! While not covered as much in the press as far as I can see, this is another big [...] Click to continue reading this post

### Axel Maas - Looking Inside the Standard Model

Triviality is not trivial
OK, starting with a pun is probably not the wisest course of action, but there is truth in it as well.

If you have followed the various public discussions of the Higgs, you will probably have noticed the following: though we have found it, most physicists are not really satisfied with it. Some are even repelled by it. In fact, most of us are convinced that the Higgs is only a first step towards something bigger. Why is this so? Well, there are a number of reasons, from purely aesthetic ones to deeply troubling ones. As the latter also affect my own research, I will write about a particularly annoying nuisance: the triviality referred to in the title.

To really understand this problem, I have to paint a somewhat bigger picture, before coming back to the Higgs. Let me start: As a theoretician, I can (artificially) distinguish between something I call classical physics, and something I call quantum physics.

Classical physics is any kind of physics which is fully predictive: If I know the start conditions with sufficient precision, I can predict the outcome as precisely as desired. Newton's law of gravity, and even the famous general theory of relativity belong to this class of classical physics.

Quantum physics is different. Quantum phenomena introduce a fundamental element of chance into physics. We do not know why this is so, but it is very well established experimentally. In fact, the computer you use to read this would not work without it. As a consequence, in quantum physics we cannot predict what will happen, even if we know the starting conditions as well as possible. The only thing we can do is make very reliable statements about how probable a certain outcome is.

All kinds of known particle physics are quantum physics, and have this element of chance. This is also experimentally very well established.

The connection between classical physics and quantum physics is the following: I can turn any kind of classical system into a quantum system by adding the element of chance, which we also call quantum fluctuations. This does not necessarily go the other way around. We know theories where quantum effects are so deeply ingrained that we cannot remove them without destroying the theory entirely.

Let me return to the Higgs. For the Higgs part in the standard model, we can write down a classical system. When we then want to analyze what happens at a particle physics experiment, we have to add the quantum fluctuations. And here enters the concept of triviality.

Adding quantum fluctuations is not necessarily a small effect. Indeed, quantum fluctuations can profoundly and completely alter the nature of a theory. One possible outcome of adding quantum fluctuations is that the theory becomes trivial. This technical term means the following: if I add quantum fluctuations to a theory, the resulting theory will describe particles which do not interact, no matter how complicated their interactions were in the classical version. Hence, a trivial quantum theory describes nothing interesting. What is really driving this phenomenon depends on the theory at hand. The important thing is that it can happen.

For the Higgs part of the standard model, there is the strong suspicion that it is trivial, though we do not have a full proof for (or against) it. Since we cannot solve the theory entirely, we cannot (yet) be sure. The only thing we can say is that if we add only a part of the quantum fluctuations, only a part of the so-called radiative corrections, the theory still makes sense. Hence it is not trivial to decide whether the theory is trivial, to reiterate the pun.
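The symptom can be illustrated with the textbook one-loop running of a scalar self-coupling. This is a schematic sketch, not a computation from the post; the beta-function coefficient used is the one-loop value for a single real scalar field.

```python
import math

# One-loop running of a phi^4-type self-coupling,
#   d(lambda)/d(ln mu) = b * lambda^2,  with b > 0
# (b = 3/(16 pi^2) for a single real scalar field).
# Fixing the coupling at a cutoff scale and running down shows the
# triviality symptom: the larger the cutoff, the weaker the interaction
# that survives at low energies.
b = 3.0 / (16.0 * math.pi**2)
lam_at_cutoff = 1.0                      # illustrative value at the cutoff

lows = []
for log_ratio in (5, 20, 80, 320):       # ln(cutoff / low-energy scale)
    # exact solution of the one-loop equation, run down from the cutoff
    lam_low = lam_at_cutoff / (1.0 + b * lam_at_cutoff * log_ratio)
    lows.append(lam_low)
    print(f"ln(Lambda/mu) = {log_ratio:3d}:  lambda(mu) = {lam_low:.4f}")
```

As the cutoff is pushed to infinity, the low-energy coupling is driven to zero: the quantum theory becomes non-interacting, i.e. trivial.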

Assuming that the theory is trivial, can we escape it? Yes, this is possible: adding something to a trivial theory can make it non-trivial. So, if we knew for sure that the Higgs theory is trivial, we would know for sure that there is something else. On the other hand, trivial theories are annoying for a theoretician, because you either have nothing or have to artificially remove part of the quantum fluctuations. This is what annoys me right now with the Higgs, especially as I have to deal with it in my own research.

Thus, this is one of the many reasons people would prefer to soon discover more than 'just' the Higgs.

### arXiv blog

How People Consume Conspiracy Theories on Facebook

… in much the same way as mainstream readers consume ordinary news, say computer scientists.

Do you believe that the contrails left by high-flying aircraft contain sildenafil citratum, the active ingredient in Viagra? Or that light bulbs made from uranium and plutonium are more energy-efficient and environmentally friendly? Or that lemons have anti-hypnotic benefits?

### Quantum Diaries

The World’s Largest Detector?

This morning, the @CERN_JOBS twitter feed tells us that the ATLAS experiment is the world’s largest detector:

Weighing over 7,000 tons, 46 meters long, and 25 meters high, ATLAS is without a doubt the particle detector with the greatest volume ever built at a collider. I should point out, though, that my experiment, the Compact Muon Solenoid, is almost twice as heavy at over 12,000 tons:

CMS is smaller but heavier — which may be why we call it “compact.” What’s the difference? Well, it’s tough to tell from the pictures, in which CMS is open for tours and ATLAS is under construction, but the big difference is in the muon systems. CMS has short gaps between muon-detecting chambers, while ATLAS has a lot of space in order to allow muons to travel further and get a better measurement. That means that a lot of the volume of ATLAS is actually empty air! ATLAS folks often say that if you could somehow make it watertight, it would float; as a CMS member, I heartily recommend attempting to do this and seeing if it works.
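A rough check of the floating claim, treating ATLAS as a solid cylinder with the dimensions quoted above (a crude assumption, since the detector is not exactly cylindrical):

```python
import math

# Figures from the text: ATLAS weighs about 7,000 tonnes and is
# roughly 46 m long and 25 m across.
mass_kg = 7.0e6
length_m = 46.0
radius_m = 25.0 / 2

volume_m3 = math.pi * radius_m**2 * length_m  # cylinder volume, ~22,600 m^3
density = mass_kg / volume_m3                 # average density in kg/m^3

water_density = 1000.0  # kg/m^3
print(f"average density = {density:.0f} kg/m^3")
print("floats" if density < water_density else "sinks")
```

The average density comes out around a third of that of water, so a watertight ATLAS would indeed float.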

But the truth is that all this cross-LHC rivalry is small potatoes compared to another sort of detector: the ones that search for neutrinos require absolutely enormous volumes of material to get those ghostlike particles to interact even occasionally! For example, here’s IceCube:

Most of its detecting volume is actually antarctic ice! Does that count? If it does, there may be a far bigger detector still. To follow that story, check out this 2012 post by Michael Duvernois: The Largest Neutrino Detector.

### CERN Bulletin

Maintenance of the CERN telephone exchanges
Maintenance work will be carried out on the CERN telephone exchanges between 8 p.m. and 2 a.m. on 26 August.   Fixed-line telephone and audio-conference services may be disrupted during this intervention. Nevertheless, the CCC and the Fire Brigade will be reachable at any time. Mobile telephony services (GSM) will not be affected by the maintenance work.

## August 12, 2014

### Jester - Resonaances

X-ray bananas
This year's discoveries follow the well-known 5-stage Kübler-Ross pattern: 1) announcement, 2) excitement, 3) debunking, 4) confusion, 5) depression.  While BICEP is approaching the end of the cycle, the sterile neutrino dark matter signal reported earlier this year is now entering stage 3. This is thanks to yesterday's paper entitled Dark matter searches going bananas by Tesla Jeltema and Stefano Profumo (to my surprise, this is not the first banana in a physics paper's title).

In the previous episode, two independent analyses  using public data from XMM and Chandra satellites concluded the presence of an  anomalous 3.55 keV monochromatic emission from galactic clusters and Andromeda. One possible interpretation is a 7.1 keV sterile neutrino dark matter decaying to a photon and a standard neutrino. If the signal could be confirmed and conventional explanations (via known atomic emission lines) could be excluded, it would mean we are close to solving the dark matter puzzle.

It seems this is not gonna happen. The new paper makes two claims:

1. Limits from x-ray observations of the Milky Way center exclude the sterile neutrino interpretation of the reported signal from galactic clusters.
2. In any case, there's no significant anomalous emission line from galactic clusters near 3.55 keV.

Let's begin with the first claim. The authors analyze several days of XMM observations of the Milky Way center. They find that the observed spectrum can be very well fit by known plasma emission lines. In particular, all spectral features near 3.5 keV are accounted for if Potassium XVIII lines at 3.48 and 3.52 keV are included in the fit. Based on that agreement, they can derive strong bounds on the parameters of the sterile neutrino dark matter model: the mixing angle between the sterile and the standard neutrino should satisfy sin^2(2θ) ≤ 2×10^-11. This excludes the parameter space favored by the previous detection of the 3.55 keV line in galactic clusters. The conclusions are similar to, and even somewhat stronger than, those of the earlier analysis using Chandra data.
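To get a feel for the scales involved, here is a rough lifetime estimate using the standard radiative-decay rate for sterile neutrinos (the Pal-Wolfenstein formula); the numerical prefactor is quoted from the literature and should be treated as approximate:

```python
# Radiative decay rate of a sterile neutrino to photon + neutrino:
#   Gamma ~ 1.38e-29 s^-1 * (sin^2(2θ) / 1e-7) * (m_s / 1 keV)^5
# Numbers below are the bound and the mass quoted in the text.
sin2_2theta = 2e-11   # upper limit from the Milky Way center analysis
m_s_keV = 7.1         # sterile neutrino mass for a 3.55 keV line

gamma = 1.38e-29 * (sin2_2theta / 1e-7) * m_s_keV**5  # decay rate in 1/s
tau = 1.0 / gamma                                     # lifetime in seconds

age_universe = 4.35e17  # seconds
print(f"lifetime = {tau:.1e} s, about {tau/age_universe:.1e} x the age of the universe")
```

Even at the excluded boundary the lifetime vastly exceeds the age of the universe, which is why an enormous amount of dark matter along the line of sight is needed to see any photons at all.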

This is disappointing but not a disaster yet, as there are alternative dark matter models (e.g. axions converting to photons in the magnetic field of a galaxy) that do not predict observable emission lines from our galaxy. But there's one important corollary of the new analysis. It seems that the inferred strength of the Potassium XVIII lines compared to the strength of other atomic lines does not agree well with theoretical models of plasma emission. Such models were an important ingredient in the previous analyses that found the signal. In particular, the original 3.55 keV detection paper assumed upper limits on the strength of the Potassium XVIII line derived from the observed strength of the Sulfur XVI line. But the new findings suggest that systematic errors may have been underestimated. Allowing for a higher flux of Potassium XVIII, and also including the 3.51 keV Chlorine XVII line (that was missed in the previous analyses), one can obtain a good fit to the observed x-ray spectrum from galactic clusters, without introducing a dark matter emission line. Right... we suspected something was smelling bad here, and now we know it was chlorine... Finally, the new paper reanalyses the x-ray spectrum from Andromeda, but it disagrees with the previous findings: there's a hint of the 3.53 keV anomalous emission line from Andromeda, but its significance is merely 1 sigma.

So, the putative dark matter signals are dropping like flies these days. We urgently need new ones to replenish my graph ;)

Note added: While finalizing this post I became aware of today's paper that, using the same data, DOES find a 3.55 keV line from the Milky Way center.  So we're already at stage 4... seems that the devil is in the details how you model the potassium lines (which, frankly speaking, is not reassuring).

### The Great Beyond - Nature blog

Student may be jailed for posting scientist’s thesis on web

Posted on behalf of Michele Catanzaro

A Colombian biology student is facing up to 8 years in jail and a fine for sharing a thesis by another scientist on a social network.

Diego Gómez Hoyos posted the 2006 work, about amphibian taxonomy, on Scribd in 2011. An undergraduate at the time, he had hoped that it would help fellow students with their fieldwork. But two years later, in 2013, he was notified that the author of the thesis was suing him for violating copyright laws. His case has now been taken up by the Karisma Foundation, a human rights organization in Bogotá, which has launched a campaign called “Sharing is not a crime”.

“It is a really awful, disturbing case, for the complete lack of proportionality of the trial,” says Michael Carroll, director of the Program on Information Justice and Intellectual Property at the American University and member of the board of directors of the Public Library of Science. “In copyright systems all over the world we see authors of extreme claims but most other countries would filter out this case,” he adds.

Gómez graduated in biology at the University of Quindío, in Armenia, Colombia, in 2010. His thesis was a study on population ecology of the local Cauca poison frog. “I shared the thesis because it was useful to identify amphibians in the fieldwork I did with my group at the university,” says Gómez.

But according to prosecutors, the move was criminal. Colombian copyright law was reformed in 2006 to meet the stringent copyright-protection requirements of a free trade agreement that the country signed with the United States. Yet while US law tempers those protections with broad fair-use exceptions, Colombian law allows only a few narrow ones.

“Lawmakers in developing countries, in their commitments to these kind of agreements, often don’t strike a balance,” says Carolina Botero, a lawyer at Karisma Foundation. “Reproducing a work without permission is not enough to face a criminal trial: it should have been done for profit, which is not the case,” she says.

Gómez says that he deleted the thesis from the social network as soon as he was notified of the legal proceedings. But the case against him is rolling on, with the most recent hearing taking place in Bogotá in May. He faces between 4 and 8 years in jail if found guilty. The next hearing will be in September.

The student, who is currently studying for a master’s degree in conservation of protected areas at the National University of Costa Rica in Heredia, refuses to reveal who is suing him. He says he does not want to “put pressure on this person”. “My lawyer has tried unsuccessfully to establish contacts with the complainant: I am open to negotiate and get to an agreement to move this issue out of the criminal trial,” he told Nature.

The case has left Gómez feeling disappointed. “I thought people did biology for passion, not for making money,” he says. “Now other scientists are much more circumspect [about sharing publications].”

### Symmetrybreaking - Fermilab/SLAC

Rare isotopes facility underway at Michigan State

In July 140 truckloads of concrete arrived at Michigan State University to begin construction of the Facility for Rare Isotope Beams.

Michigan State University’s campus will soon feature a powerful accelerator capable of producing particles rarely observed in nature.

The under-construction Facility for Rare Isotope Beams at MSU will eventually generate atomic nuclei to be used in nuclear, biomedical, material and soil sciences, among other fields of research. FRIB (pronounced ef-rib) could even help scientists investigate a mystery of particle physics.

FRIB will produce beams of rare isotopes, highly unstable atomic nuclei that decay within fractions of a second after forming.

Nature produces bounteous amounts of rare isotopes in supernovae through a series of nuclear processes that physicists have yet to fully understand. But supernovae explode many light-years away, so to study rare isotopes, scientists must produce them in the laboratory.

On July 23, construction trucks poured enough concrete to fill four Olympic-sized swimming pools into a massive rectangular hole in the ground at MSU. It was the first of four installments for the floor of the 1500-by-70-foot tunnel that will house FRIB’s linear accelerator.

FRIB, which is funded by the Department of Energy's Office of Science, Michigan State University and the State of Michigan, will support the mission of DOE's Office of Nuclear Physics and will be available for use by researchers from around the world. It is scheduled for completion in 2022.

FRIB will produce the highest-intensity beam of uranium ions of any rare isotope facility in the world. When scientists accelerate uranium ions to about half the speed of light and then smash them into a target such as a disc of graphite, they create a slew of particles—including some rare isotopes.
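"Half the speed of light" corresponds to a kinetic energy of roughly 140 MeV per nucleon, a quick special-relativity exercise:

```python
import math

# Kinetic energy per nucleon for an ion moving at half the speed of light.
# 931.5 MeV is the energy equivalent of one atomic mass unit.
beta = 0.5
gamma = 1.0 / math.sqrt(1.0 - beta**2)   # Lorentz factor
ke_per_nucleon = (gamma - 1.0) * 931.5   # MeV per nucleon

print(f"gamma = {gamma:.4f}, KE = {ke_per_nucleon:.0f} MeV per nucleon")
```

For a uranium-238 ion, that is on the order of 34 GeV of kinetic energy per ion delivered onto the graphite target.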

The more intense the beam, the heavier the rare isotopes and the larger the variety that scientists can produce, says FRIB Project Manager Thomas Glasmacher: “The more incoming beam of particles you have, the better.”

FRIB should be able to produce a variety of different rare isotopes, says Walter Henning, former director for the GSI Laboratory in Germany that performs similar research.

“With FRIB, and other major facilities, one hopes to get further out on the periodic table and be more complete,” he says.

Nearly two dozen facilities across the globe produce rare isotopes. Facilities such as the ATLAS accelerator facility at Argonne National Laboratory and the Radioactive Ion Beam Factory at the RIKEN Institute in Japan focus their efforts on creating rare isotopes so that scientists can study their nuclear properties and behavior. Other facilities, such as the Heavy Ion Research Facility in Lanzhou, China, and TRIUMF Laboratory in Canada, offer research in additional applications such as cancer treatment. FRIB will offer researchers the chance to do a little bit of both and more.

“There are four pillars of the FRIB science program,” says MSU professor Bradley Sherrill, chief scientist of FRIB: Understanding the stability of atomic nuclei; discovering their origin and history in the universe; testing the fundamental laws of symmetries of nature; and identifying industrial applications of rare isotopes.

The properties and behaviors of rare isotopes and how they decay could hold clues to why matter is far more abundant than antimatter in the universe—a mystery that concerns particle physicists.

The big bang should have created equal amounts of matter and antimatter particles. If particles and antiparticles behave differently, that could be the cause of the imbalance that allows us to exist. The decay behavior of rare isotopes could divulge never-before-seen particles or interactions that would offer further insight into this mystery.


### Sean Carroll - Preposterous Universe

Quantum Foundations of a Classical Universe

Greetings from sunny (for the moment) Yorktown Heights, NY, home of IBM’s Watson Research Center. I’m behind on respectable blogging (although it’s been nice to see some substantive conversation on the last couple of comment threads), and I’m at a conference all week here, so that situation is unlikely to change dramatically in the next few days.

But the conference should be great — a small workshop, Quantum Foundations of a Classical Universe. We’re going to be arguing about how we’re supposed to connect wave functions and quantum observables to the everyday world of space and stuff. I will mostly be paying attention to the proceedings, but I might occasionally interject a tweet if something interesting/amusing happens. I’m told that some sort of proceedings will eventually be put online.

Update: Trying something new here. I’ve been tweeting about the workshop under the hashtag #quantumfoundations. So here I am using Storify to collect those tweets, making a quasi-live-blog on the cheap. Let’s see if it works.

[View the story “Quantum Foundations Workshop 2014” on Storify: http://storify.com/seanmcarroll/quantum-foundations-workshop-day-1]

### CERN Bulletin

Concert | The CERN Choir hits the high notes | Victoria Hall | 30 September
60 – 40 – 25: a series of numbers that have inspired an exceptional concert. They refer to the 60th anniversary of CERN, the 40th anniversary of the CERN Choir and the 25th anniversary of its direction by Gonzalo Martinez. On the occasion of this collision of anniversaries, the Committee of this CERN club decided to organise an appropriately significant event to celebrate the important worldwide role that CERN has played for 60 years, the fact that the CERN Choir has brought together amateur singers for 40 years, and finally the decisive role in the Choir’s history of its director, Gonzalo Martinez.

The work chosen for this concert also had to be something exceptional. A work which, through its monumental status, its brilliance, its innovation, its originality and its energy, symbolises CERN’s scientific discoveries, reflects the genius of its creator and represents the highest creative ambitions: Beethoven’s Missa Solemnis. Performances of this work are rare enough to make each occasion a major event, and performances by amateur ensembles are even rarer. The performance on 30 September 2014 will benefit from the support of a professional choir, the Zürcher Sing-Akademie, along with the Geneva Chamber Orchestra and four exceptional soloists.

The CERN Choir is international by its very nature, but thanks to the concerts it has given every year for 40 years it has become part of the local cultural fabric. Since 1974, the composition of the CERN Choir has been enriched with distinguished amateurs and seasoned choral singers: most of its members sing in other respected choirs in Geneva and the surrounding area. The organisation of such a concert represents a huge challenge, not least in financial terms.

By giving the CERN community an opportunity to hear a masterpiece such as the Missa Solemnis, the CERN Choir hopes to attract as many people as possible to celebrate 60 years of CERN and, as at last year's Bosons&More concert, to experience an extraordinary musical event.

Missa Solemnis, 30 September 2014, Victoria Hall, Geneva
CERN Choir / Zürcher Sing-Akademie / Geneva Chamber Orchestra

Tickets are on sale at various outlets in Geneva:

• Espace Ville de Genève, Pont de la Machine: Mon 12 noon - 5.30 p.m.; Tues-Fri 9 a.m. - 5.30 p.m.; Sat 10 a.m. - 4.30 p.m.
• Maison des arts du Grütli, 16 rue du Général-Dufour: Mon-Fri 10 a.m. - 6 p.m.; Sat 10 a.m. - 5 p.m.
• Genève-Tourisme, 18 rue du Mont-Blanc: Mon 10 a.m. - 6 p.m.; Tues-Sat 9 a.m. - 6 p.m.; Sun 10 a.m. - 4.30 p.m.
• Cité-Séniors, 28 rue Amat: Tues-Fri 9 a.m. - 12.15 p.m.
• At the venue, one hour before the concert

Online: http://billetterie-culture.ville-ge.ch
Tel.: 0800 418 418 (free from Switzerland), +41 22 418 36 18 (from outside Switzerland, charges apply)

Prices:

| | Category 1 | Category 2 | Category 3 | Category 4 | Category 5 |
|---|---|---|---|---|---|
| Full price | 60 | 50 | 35 | 20 | 13 |
| Discounted (AVS, students, unemployed, children over 12) | 54 | 44 | 30 | 15 | 9 |
| Seats per category | 295 | 316 | 324 | 304 | 260 |

(1,499 seats in total.)

### Jester - Resonaances

Weekend Plot: BaBar vs Dark Force
BaBar was an experiment studying 10 GeV electron-positron collisions. The collider is long gone, but interesting results keep appearing from time to time.  Obviously, this is not a place to discover new heavy particles. However, due to the large luminosity and clean experimental environment,  BaBar is well equipped to look for light and very weakly coupled particles that can easily escape detection in bigger but dirtier machines like the LHC. Today's weekend plot is the new BaBar limits on dark photons:

The dark photon is a hypothetical spin-1 boson that couples to other particles with a strength proportional to their electric charges. Compared to the ordinary photon, the dark one is assumed to have a non-zero mass mA' and a coupling strength suppressed by the factor ε. If ε is small enough, the dark photon can escape detection even if mA' is very small, in the MeV or GeV range. The model was conceived long ago, but in the previous decade it gained wider popularity as the leading explanation of the PAMELA anomaly. Now, as PAMELA is getting older, she is no longer considered convincing evidence of new physics. But the dark photon model remains an important benchmark - a sort of spherical-cow model for light hidden sectors. Indeed, in the simplest realization, the model is fully described by just two parameters: mA' and ε, which makes it easy to present and compare the results of different searches.

In electron-positron collisions one can produce a dark photon in association with an ordinary photon, in analogy to the familiar process of e+e- annihilation into 2 photons. The dark photon then decays to a pair of electrons or muons (or heavier charged particles, if they are kinematically available). Thus, the signature is a spike in the e+e- or μ+μ- invariant mass spectrum of γl+l- events. BaBar performed this search to obtain world's best limits on dark photons in the mass range 30 MeV - 10 GeV, with the upper limit on ε in the 0.001 ballpark. This does not have direct consequences for the explanation of the  PAMELA anomaly, as the model works with a smaller ε too. On the other hand, the new results close in on the parameter space where the minimal dark photon model  can explain the muon magnetic moment anomaly (although one should be aware that one can reduce the tension with a trivial modification of the model, by allowing the dark photon to decay into the hidden sector).
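The search signature described here amounts to computing the invariant mass of each lepton pair and looking for a narrow spike in its distribution. A minimal sketch with toy numbers (not BaBar's actual analysis code):

```python
import math

def invariant_mass(p1, p2):
    """Invariant mass of a two-particle system, four-momenta as (E, px, py, pz) in GeV."""
    E = p1[0] + p2[0]
    px = p1[1] + p2[1]
    py = p1[2] + p2[2]
    pz = p1[3] + p2[3]
    return math.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0))

# Toy example: a muon pair from the decay at rest of a 0.5 GeV resonance.
mu_mass = 0.1057                          # muon mass in GeV
p = math.sqrt(0.25**2 - mu_mass**2)       # momentum of each back-to-back muon
mu1 = (0.25,  p, 0.0, 0.0)
mu2 = (0.25, -p, 0.0, 0.0)
print(f"m(mu+mu-) = {invariant_mass(mu1, mu2):.3f} GeV")
```

In a real bump hunt one histograms this quantity over all selected events; a dark photon of mass mA' would appear as an excess of pairs clustered at m(l+l-) ≈ mA' above the smooth QED background.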

So, no luck so far; we need to search further. What one should retain is that finding new heavy particles and finding new light weakly interacting particles seem equally probable at this point :)

### Jester - Resonaances

Weekend plot: BICEP limits on tensor modes
The insurgency gathers pace. This weekend we contemplate a plot from the recent paper of Michael Mortonson and Uroš Seljak:

It shows the parameter space of inflationary models in the plane of the spectral index ns vs. the tensor-to-scalar ratio r. The yellow region is derived from Planck CMB temperature and WMAP polarization data, while the purple regions combine those with the BICEP2 data. Including BICEP gives a stronger constraint on the tensor modes, rather than a detection of r≠0.

The limits on r from Planck temperature data are dominated by large angular scale (low l) data which themselves display an anomaly, so they should be taken with a grain of salt. The interesting claim here is that BICEP alone does not hint at r≠0, after using the most up-to-date information on galactic foregrounds and marginalizing over current uncertainties. In this respect, the paper by Michael and Uroš reaches similar conclusions as the analysis of Raphael Flauger and collaborators. The BICEP collaboration originally found that the galactic dust foreground can account for at most 25% of their signal. However, judging from scavenged Planck polarization data, it appears that BICEP underestimated the dust polarization fraction by roughly a factor of 2. As this enters squared in the B-mode correlation spectrum, dust can easily account for all the signal observed in BICEP2. The new paper adds a few interesting details to the story. One is that not only the normalization but also the shape of the BICEP spectrum can be reasonably explained by dust if it scales as l^-2.3, as suggested by Planck data. Another is the importance of gravitational lensing effects (neglected by BICEP) in extracting the signal of tensor modes. Although lensing dominates at high l, it also helps to fit the low l BICEP2 data with r=0. Finally, the paper suggests that it is not at all certain that the forthcoming Planck data will clean up the situation. If the uncertainty on the dust foreground in the BICEP patch is of order 20%, which looks like a reasonable figure, the improvement over the current sensitivity to tensor modes may be marginal. So, BICEP may remain a Schrödinger cat for a little while longer.
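Two of the numerical points above can be put in numbers in a few lines (illustrative arithmetic only):

```python
# (1) The dust polarization fraction enters the B-mode power spectrum
#     squared, so underestimating the fraction by a factor of 2 means
#     underestimating the dust B-mode power by a factor of 4.
fraction_underestimate = 2.0
power_underestimate = fraction_underestimate**2

# (2) A dust spectrum scaling as l^-2.3 falls steeply with multipole l:
#     here, the drop in dust power from l = 50 to l = 100.
l1, l2 = 50, 100
shape_ratio = (l2 / l1)**-2.3

print(f"dust power underestimated by a factor {power_underestimate:.0f}")
print(f"dust power at l=100 relative to l=50: {shape_ratio:.3f}")
```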

### Tommaso Dorigo - Scientificblogging

Summer Flukes Inspire Creative Theorists
Today the Cornell arxiv features a paper by J. Aguilar Saavedra and F. Jouaquim, titled "A closer look at the possible CMS signal of a new gauge boson". As I read the title I initially felt somewhat lost, as being a CMS member I usually know about the possible new physics signals that my experiment produces, and the fact that we had a possible signal of a new gauge boson had entirely escaped my attention. Hence I downloaded the paper and started reading it, hoping to discover I had discovered something new.

### The n-Category Cafe

The Ten-Fold Way (Part 3)

My last article on the ten-fold way was a piece of research in progress — it only reached a nice final form in the comments. Since that made it rather hard to follow, let me try to present a more detailed and self-contained treatment here!

But if you’re in a hurry, you can click on this:

and get my poster for next week’s scientific advisory board meeting at the Centre for Quantum Technologies, in Singapore. That’s where I work in the summer, and this poster is supposed to be a terse introduction to the ten-fold way.

First we’ll introduce the ‘Brauer monoid’ of a field. This is a way of assembling all simple algebras over that field into a monoid: a set with an associative product and unit. One reason for doing this is that in quantum physics, physical systems are described by vector spaces that are representations of certain ‘algebras of observables’, which are sometimes simple (in the technical sense). Combining physical systems involves taking the tensor product of their vector spaces and also these simple algebras. This gives the multiplication in the Brauer monoid.

We then turn to a larger structure called the ‘super Brauer monoid’ or ‘Brauer–Wall monoid’. This is the ‘super’ or $\mathbb{Z}_2$-graded version of the same idea, which shows up naturally in physical systems containing both bosons and fermions. For the field $\mathbb{R}$, the super Brauer monoid has 10 elements. This gives a nice encapsulation of the ‘ten-fold way’ introduced in work on condensed matter physics. At the end I’ll talk about this particular example in more detail.

Actually elements of the Brauer monoid of a field are equivalence classes of simple algebras over this field. Thus, I’ll start by reminding you about simple algebras and the notion of equivalence we need, called ‘Morita equivalence’. Briefly, two algebras are Morita equivalent if they have the same category of representations. Since in quantum physics it’s the representations of an algebra that matter, this is sometimes the right concept of equivalence, even though it’s coarser than isomorphism.

## Review of algebra

We begin with some material that algebraists consider well-known.

### Simple algebras and division algebras

Let $k$ be a field.

By an algebra over $k$ we will always mean a finite-dimensional associative unital $k$-algebra: that is, a finite-dimensional vector space $A$ over $k$ with an associative bilinear multiplication and a multiplicative unit $1 \in A$.

An algebra $A$ over $k$ is simple if its only two-sided ideals are $\{0\}$ and $A$ itself.

A division algebra over $k$ is an algebra $A$ such that if $a \ne 0$ there exists $b \in A$ such that $a b = b a = 1$. Using finite-dimensionality the following condition is equivalent: if $a, b \in A$ and $a b = 0$ then either $a = 0$ or $b = 0$.

A division algebra is automatically simple. More interestingly, by a theorem of Wedderburn, every simple algebra $A$ over $k$ is an algebra of $n \times n$ matrices with entries in some division algebra $D$ over $k$. We write this as

$$A \cong D[n]$$

where $D[n]$ is our shorthand for the algebra of $n \times n$ matrices with entries in $D$.

The center of an algebra over $k$ always includes a copy of $k$, the scalar multiples of $1 \in A$. If $D$ is a division algebra, its center $Z(D)$ is a commutative algebra that’s a division algebra in its own right. So $Z(D)$ is a field, and it’s a finite extension of $k$, meaning it contains $k$ as a subfield and is a finite-dimensional algebra over $k$.

If $A$ is a simple algebra over $k$, its center is isomorphic to the center of some $D[n]$, which is just the center of $D$. So, the center of $A$ is a field that’s a finite extension of $k$. We’ll need this fact when defining the multiplication in the Brauer monoid.

Example. I’m mainly interested in the case $k = \mathbb{R}$. A theorem of Frobenius says the only division algebras over $\mathbb{R}$ are $\mathbb{R}$ itself, the complex numbers $\mathbb{C}$ and the quaternions $\mathbb{H}$. Of these, the first two are fields, while the third is noncommutative. So, the simple algebras over $\mathbb{R}$ are the matrix algebras $\mathbb{R}[n]$, $\mathbb{C}[n]$ and $\mathbb{H}[n]$. The center of $\mathbb{R}[n]$ and $\mathbb{H}[n]$ is $\mathbb{R}$, while the center of $\mathbb{C}[n]$ is $\mathbb{C}$, the only nontrivial finite extension of $\mathbb{R}$.
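The quaternion facts used here can be checked numerically via the standard representation of $\mathbb{H}$ by real $4 \times 4$ matrices (an illustrative sketch, not part of the post):

```python
import numpy as np

def quat(a, b, c, d):
    """Real 4x4 matrix representing the quaternion a + bi + cj + dk
    (acting by left multiplication on coordinates (a, b, c, d))."""
    return np.array([
        [a, -b, -c, -d],
        [b,  a, -d,  c],
        [c,  d,  a, -b],
        [d, -c,  b,  a],
    ], dtype=float)

i, j = quat(0, 1, 0, 0), quat(0, 0, 1, 0)
k = i @ j                                  # ij = k
assert np.allclose(i @ i, -np.eye(4))      # i^2 = -1
assert np.allclose(j @ j, -np.eye(4))      # j^2 = -1
assert np.allclose(j @ i, -k)              # noncommutativity: ji = -k

# Division algebra property: every nonzero quaternion is invertible.
# In this representation det(q) = (a^2 + b^2 + c^2 + d^2)^2 > 0 for q != 0.
q = quat(1, 2, 3, 4)
print(np.linalg.det(q))   # (1 + 4 + 9 + 16)^2 = 900, up to rounding
```

The positive determinant is what guarantees an inverse matrix, mirroring the fact that $\mathbb{H}$ has no zero divisors.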

Example. The case $k = \mathbb{C}$ is more boring, because $\mathbb{C}$ is algebraically closed. Any division algebra $D$ over an algebraically closed field $k$ must be $k$ itself. (To see this, consider $x \in D$ and look at the smallest subring of $D$ containing $k$ and $x$ and closed under taking inverses. This is a finite, hence algebraic, extension of $k$, so it must be $k$.) So if $k$ is algebraically closed, the only simple algebras over $k$ are the matrix algebras $k[n]$.

Example. The case where $k$ is a finite field has a very different flavor. A theorem of Wedderburn and Dickson implies that any division algebra over a finite field $k$ is a field, indeed a finite extension of $k$. So, the only simple algebras over $k$ are the matrix algebras $F[n]$ where $F$ is a finite extension of $k$. Moreover, we can completely understand these finite extensions, since the finite fields are all of the form $\mathbb{F}_{p^n}$ where $p$ is a prime and $n = 1, 2, 3, \dots$, and the only finite extensions of $\mathbb{F}_{p^n}$ are the fields $\mathbb{F}_{p^m}$ where $n$ divides $m$.
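The divisibility criterion is easy to explore numerically. The sketch below (illustrative, not from the post) checks it against the equivalent condition on multiplicative group orders: $\mathbb{F}_{p^n}^\times$ embeds in $\mathbb{F}_{p^m}^\times$ exactly when $p^n - 1$ divides $p^m - 1$, which holds precisely when $n \mid m$.

```python
def divides(a, b):
    """True if a divides b."""
    return b % a == 0

p, n = 2, 2   # ask which F_{2^m} contain F_4
embeds = []
for m in range(1, 11):
    # F_{p^n} embeds in F_{p^m} iff n | m; equivalently, the multiplicative
    # group orders satisfy (p^n - 1) | (p^m - 1).  Check that the two agree.
    by_degree = divides(n, m)
    by_group = divides(p**n - 1, p**m - 1)
    assert by_degree == by_group
    if by_degree:
        embeds.append(m)

print(f"F_{p**n} sits inside F_{{2^m}} for m =", embeds)
```

So $\mathbb{F}_4$ sits inside $\mathbb{F}_{16}, \mathbb{F}_{64}, \dots$ but not inside $\mathbb{F}_8$ or $\mathbb{F}_{32}$.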

### Morita equivalence and the Brauer group

Given an algebra $A$ over $k$ we define $\mathrm{Rep}(A)$ to be the category of left $A$-modules. We say two algebras $A, B$ over $k$ are Morita equivalent if $\mathrm{Rep}(A) \simeq \mathrm{Rep}(B)$. In this situation we write $A \simeq B$.

Isomorphic algebras are Morita equivalent, but Morita equivalence is more general: for example, we always have $A[n] \simeq A$, where $A[n]$ is the algebra of $n \times n$ matrices with entries in $A$.

We’ve seen that if $A$ is simple, then $A \cong D[n]$ for some division algebra $D$, and this implies $A \simeq D[n]$. On the other hand, $D[n] \simeq D$. So, every simple algebra over $k$ is Morita equivalent to a division algebra over $k$.

As a set, the Brauer monoid of $k$ will simply be the set of Morita equivalence classes of simple algebras over $k$. By what I just said, this is also the set of Morita equivalence classes of division algebras over $k$. The trick will be defining multiplication in the Brauer monoid. For this we need to think about tensor products of algebras.

The tensor product of two algebras $A, B$ over $k$ is another algebra over $k$, which we’ll write as $A \otimes_k B$. This gets along with Morita equivalence:

$$A \simeq A' \;\text{and}\; B \simeq B' \quad \implies \quad A \otimes_k B \simeq A' \otimes_k B'$$

However, the tensor product of simple algebras need not be simple! And the tensor product of division algebras need not be a division algebra, or even simple. So, we have to be a bit careful if we want a workable multiplication in the Brauer monoid.

For example, take $k = \mathbb{R}$. The division algebras over $\mathbb{R}$ are $\mathbb{R}$, $\mathbb{C}$ and the quaternions $\mathbb{H}$. We have

$$\mathbb{H} \otimes_{\mathbb{R}} \mathbb{H} \cong \mathbb{R}[4] \simeq \mathbb{R}$$

so this particular tensor product of division algebras over $\mathbb{R}$ is simple, and thus Morita equivalent to another division algebra over $\mathbb{R}$. On the other hand,

$$\mathbb{C} \otimes_{\mathbb{R}} \mathbb{C} \cong \mathbb{C} \oplus \mathbb{C}$$

and this is not a division algebra, nor even simple, nor even Morita equivalent to a simple algebra.
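Both facts can be checked numerically. Below is a sketch in Python with NumPy (the encoding and names like `L`, `R`, `conj` are mine, not from the post): the classical isomorphism $\mathbb{H} \otimes_{\mathbb{R}} \mathbb{H} \to \mathbb{R}[4]$, $a \otimes b \mapsto (x \mapsto a x \bar{b})$, is verified to be surjective by a rank computation, and a nontrivial central idempotent $(1 \otimes 1 + i \otimes i)/2$ is exhibited in $\mathbb{C} \otimes_{\mathbb{R}} \mathbb{C}$, showing it is not simple.

```python
import numpy as np

# Quaternion basis 1, i, j, k acting on H = R^4 by left multiplication
# (the standard left-regular representation).
one = np.eye(4)
iq = np.array([[0,-1,0,0],[1,0,0,0],[0,0,0,-1],[0,0,1,0]], float)
jq = np.array([[0,0,-1,0],[0,0,0,1],[1,0,0,0],[0,-1,0,0]], float)
kq = np.array([[0,0,0,-1],[0,0,-1,0],[0,1,0,0],[1,0,0,0]], float)

def L(x):  # left multiplication by the quaternion with coordinates x
    return x[0]*one + x[1]*iq + x[2]*jq + x[3]*kq

def R(y):  # right multiplication by y: column m is e_m * y
    return np.column_stack([L(e) @ y for e in np.eye(4)])

conj = np.diag([1.0, -1, -1, -1])  # quaternion conjugation

# H (x)_R H -> R[4], a (x) b |-> (x |-> a x conj(b)), is an algebra map.
# Its image is spanned by the 16 matrices below; rank 16 means the image
# is all of the 16-dimensional algebra R[4], so H (x) H = R[4].
basis = list(np.eye(4))
image = [L(a) @ R(conj @ b) for a in basis for b in basis]
rank = np.linalg.matrix_rank(np.array([m.ravel() for m in image]))
print(rank)  # 16

# C (x)_R C, with C realized as 2x2 real matrices generated by J:
J = np.array([[0.0, -1], [1, 0]])
E = (np.eye(4) + np.kron(J, J)) / 2  # the element (1(x)1 + i(x)i)/2
assert np.allclose(E @ E, E)  # idempotent, since (i(x)i)^2 = 1(x)1
assert np.allclose(E @ np.kron(J, np.eye(2)), np.kron(J, np.eye(2)) @ E)
assert np.allclose(E @ np.kron(np.eye(2), J), np.kron(np.eye(2), J) @ E)
# E is central, E != 0, E != 1, so C (x)_R C is not simple.
```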

What’s the problem with the latter example? The problem turns out to be that the division algebra $\mathbb{C}$ does not have $\mathbb{R}$ as its center: it has a larger field, namely $\mathbb{C}$ itself, as its center.

It turns out that if you tensor two simple algebras over a field $k$ and they both have just $k$ as their center, the result is again simple. So, in Brauer theory, people restrict attention to simple algebras over $k$ having just $k$ as their center. These are called central simple algebras over $k$. The set of Morita equivalence classes of these is closed under tensor product, so it becomes a monoid. And this monoid happens to be an abelian group: the Brauer group of $k$, denoted $\mathrm{Br}(k)$. I want to work with all simple algebras over $k$, so I will need to change this recipe a bit. But it will still be good to compute a few Brauer groups.

To do this, it pays to note that every element of $\mathrm{Br}(k)$ has a representative that is a division algebra over $k$ whose center is $k$. Why? Every simple algebra over $k$ is $D[n]$ for some division algebra $D$ over $k$; $D[n]$ is central simple over $k$ iff the center of $D$ is $k$, and $D[n]$ is Morita equivalent to $D$. Using this, we easily see:

Example. The Brauer group $\mathrm{Br}(\mathbb{R})$ is $\mathbb{Z}_2$, the 2-element group consisting of $[\mathbb{R}]$ and $[\mathbb{H}]$. We have

$$[\mathbb{R}] \cdot [\mathbb{R}] = [\mathbb{R} \otimes_{\mathbb{R}} \mathbb{R}] = [\mathbb{R}]$$

$$[\mathbb{R}] \cdot [\mathbb{H}] = [\mathbb{R} \otimes_{\mathbb{R}} \mathbb{H}] = [\mathbb{H}]$$

$$[\mathbb{H}] \cdot [\mathbb{H}] = [\mathbb{H} \otimes_{\mathbb{R}} \mathbb{H}] = [\mathbb{R}]$$

Example. The Brauer group of any algebraically closed field $k$ is trivial, since the only division algebra over $k$ is $k$ itself. Thus $\mathrm{Br}(\mathbb{C}) = 1$.

Example. The Brauer group of any finite field $k$ is trivial, since the only division algebras over $k$ are fields that are finite extensions of $k$, and of these only $k$ itself has $k$ as its center.

Example. Just so you don’t get the impression that Brauer groups tend to be boring, consider the Brauer group of the rational numbers:

$$\mathrm{Br}(\mathbb{Q}) = \left\{ (a, x) :\; a \in \{0, \tfrac{1}{2}\}, \quad x \in \bigoplus_p \mathbb{Q}/\mathbb{Z}, \quad a + \sum_p x_p = 0 \right\}$$

where the sum is over all primes. This is a consequence of the Albert–Brauer–Hasse–Noether theorem. The funny-looking $\{0, \tfrac{1}{2}\}$ is just a way to think of the group $\mathbb{Z}_2$ as a subgroup of $\mathbb{Q}/\mathbb{Z}$; its elements correspond to $\mathbb{Q}$ itself and a rational version of the quaternions. The other stuff comes from studying the situation ‘locally’, one prime at a time. However, the two aspects interact.
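The group law here is easy to play with concretely. Below is a Python sketch using my own encoding (not standard notation): an element is stored as its finitely many nonzero local invariants, a map from primes to $\mathbb{Q}/\mathbb{Z}$, and the invariant $a$ at the real place is forced by the constraint $a + \sum_p x_p = 0$; such an element is legal only if the forced $a$ lands in $\{0, \tfrac{1}{2}\}$. The claim that the rational quaternions have invariant $\tfrac{1}{2}$ at the prime 2 is a standard fact I am taking on faith here.

```python
from fractions import Fraction

def real_invariant(finite):
    """Invariant at the real place forced by a + sum_p x_p = 0 in Q/Z.
    Returns None if it is not in {0, 1/2}, i.e. no such class exists."""
    a = -sum(finite.values(), Fraction(0)) % 1
    return a if a in (Fraction(0), Fraction(1, 2)) else None

def add(x, y):
    """Add two classes given by their finite local invariants (mod 1),
    dropping invariants that become zero."""
    out = {}
    for p in set(x) | set(y):
        v = (x.get(p, Fraction(0)) + y.get(p, Fraction(0))) % 1
        if v:
            out[p] = v
    return out

# The class of the rational quaternions: invariant 1/2 at the prime 2,
# hence (by the constraint) invariant 1/2 at the real place too.
H = {2: Fraction(1, 2)}
assert real_invariant(H) == Fraction(1, 2)
assert add(H, H) == {}          # [H]·[H] = [Q]: this class has order 2
assert real_invariant({}) == 0  # the identity class [Q]
```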

## The Brauer monoid of a field

Let $k$ be a field and $\overline{k}$ its algebraic closure. Let $L$ be the set of intermediate fields

$$k \subseteq F \subseteq \overline{k}$$

where $F$ is a finite extension of $k$. This set $L$ is partially ordered by inclusion, and in fact it is a semilattice: any finite subset of $L$ has a least upper bound. We write $F \vee F'$ for the least upper bound of $F, F' \in L$. This is just the smallest subfield of $\overline{k}$ containing both $F$ and $F'$.

We define the Brauer monoid of $k$ to be the disjoint union

$$\mathrm{BR}(k) = \coprod_{F \in L} \mathrm{Br}(F)$$

So, every simple algebra over $k$ shows up in here: if $A$ is a simple algebra over $k$ with center $F$, the Morita equivalence class $[A]$ will appear as an element of $\mathrm{Br}(F)$. However, isomorphic copies of the same simple algebra will show up repeatedly in the Brauer monoid, since we may have $F \ne F'$ but still $F \cong F'$.

How do we define multiplication in the Brauer monoid? The key is that the Brauer group is functorial. Suppose we have an inclusion of fields $F \subseteq F'$ in the semilattice $L$. Then we get a homomorphism

$$\mathrm{Br}_{F',F} \colon \mathrm{Br}(F) \to \mathrm{Br}(F')$$

as follows. Any element $[A] \in \mathrm{Br}(F)$ comes from a central simple algebra $A$ over $F$; the algebra $F' \otimes_F A$ will be central simple over $F'$, and we define

$$\mathrm{Br}_{F',F}[A] = [F' \otimes_F A]$$

Of course we need to check that this is well-defined, but this is well-known. People call $\mathrm{Br}_{F',F}$ restriction, since larger fields have smaller Brauer groups, but I’d prefer to call it ‘extension’, since we’re extending an algebra to be defined over a larger field.

It’s easy to see that if $F \subseteq F' \subseteq F''$ then

$$\mathrm{Br}_{F'',F} = \mathrm{Br}_{F'',F'} \, \mathrm{Br}_{F',F}$$

and this together with

$$\mathrm{Br}_{F,F} = 1_{\mathrm{Br}(F)}$$

implies that we have a functor

$$\mathrm{Br} \colon L \to \mathrm{AbGp}$$

So now suppose we have two elements of $\mathrm{BR}(k)$ and we want to multiply them. To do this, we simply write them as $[A] \in \mathrm{Br}(F)$ and $[A'] \in \mathrm{Br}(F')$, map them both into $\mathrm{Br}(F \vee F')$, and then multiply them there:

$$[A] \cdot [A'] \;:=\; \mathrm{Br}_{F \vee F', F}[A] \,\cdot\, \mathrm{Br}_{F \vee F', F'}[A']$$

This can also be expressed with less jargon as follows:

$$[A] \cdot [A'] = [A \otimes_F (F \vee F') \otimes_{F'} A']$$

However, the functorial approach gives a nice outlook on this basic result:

Proposition. With the above multiplication, $\mathrm{BR}(k)$ is a commutative monoid.

Proof. The multiplicative identity is $[k] \in \mathrm{Br}(k)$, and commutativity is obvious, so the only thing to check is associativity. This is easy enough to do directly, but it’s a bit enlightening to notice that it’s a special case of an idea that goes back to A. H. Clifford.

In modern language: suppose we have any semilattice $L$ and any functor $B \colon L \to \mathrm{AbGp}$. This gives an abelian group $B(x)$ for any $x \in L$, and a homomorphism

$$B_{x',x} \colon B(x) \to B(x')$$

whenever $x \le x'$. Then the disjoint union

$$\coprod_{x \in L} B(x)$$

becomes a commutative monoid if we define the product of $a \in B(x)$ and $a' \in B(x')$ by

$$a \cdot a' = B_{x \vee x', x}(a) \;\cdot\; B_{x \vee x', x'}(a')$$

Checking associativity is an easy, fun calculation, so I won’t deprive you of the pleasure. Moreover, there’s nothing special about abelian groups here: a functor $B$ from $L$ to commutative monoids would work just as well. ∎
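Clifford’s construction is easy to implement. Here is a minimal Python sketch (the names and the demo functor are my choices, not from the post): elements of the big monoid are pairs $(x, g)$ with $g \in B(x)$; to multiply, push both group elements up to the join and combine them there. As a demo I take $L$ to be the positive integers ordered by divisibility, with join $\mathrm{lcm}$, $B(n) = \mathbb{Z}_n$, and for $n \mid m$ the injection $\mathbb{Z}_n \to \mathbb{Z}_m$, $g \mapsto g \cdot (m/n)$, which one checks is functorial.

```python
from math import lcm

def product(join, transfer, elem1, elem2):
    """Product in the commutative monoid  ⊔_x B(x)  built from a
    semilattice (join) and a functor B (transfer maps), a la Clifford.
    Elements are pairs (x, g) with g in B(x).  For this demo the group
    operation in B(x) = Z_x is addition mod x."""
    (x1, g1), (x2, g2) = elem1, elem2
    x = join(x1, x2)
    return (x, (transfer(x1, x, g1) + transfer(x2, x, g2)) % x)

def transfer(n, m, g):
    """The injection Z_n -> Z_m for n | m, namely g |-> g*(m//n)."""
    assert m % n == 0
    return (g * (m // n)) % m

mul = lambda a, b: product(lcm, transfer, a, b)

a, b, c = (4, 3), (6, 1), (10, 7)
assert mul(a, b) == (12, 11)                   # join 12; 3*3 + 1*2 = 11
assert mul(mul(a, b), c) == mul(a, mul(b, c))  # associativity
assert mul(a, b) == mul(b, a)                  # commutativity
```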

Let’s see a couple of examples:

Example. The Brauer monoid of the real numbers is the disjoint union

$$\mathrm{BR}(\mathbb{R}) = \mathrm{Br}(\mathbb{R}) \sqcup \mathrm{Br}(\mathbb{C})$$

This has three elements: $[\mathbb{R}]$, $[\mathbb{C}]$ and $[\mathbb{H}]$. Leaving out the brackets, the multiplication table is

$$\begin{array}{c|ccc} \cdot & \mathbb{R} & \mathbb{C} & \mathbb{H} \\ \hline \mathbb{R} & \mathbb{R} & \mathbb{C} & \mathbb{H} \\ \mathbb{C} & \mathbb{C} & \mathbb{C} & \mathbb{C} \\ \mathbb{H} & \mathbb{H} & \mathbb{C} & \mathbb{R} \end{array}$$

So, this monoid is isomorphic to the multiplicative monoid $\mathbb{3} = \{1, 0, -1\}$. This formalizes the multiplicative aspect of Dyson’s ‘threefold way’, which I started grappling with in my paper Division algebras and quantum theory. If you read that paper you can see why I care: Hilbert spaces over the real numbers, complex numbers and quaternions are all important in quantum theory, so they must fit into a single structure. The Brauer monoid is a nice way to describe this structure.
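The claimed isomorphism can be verified by brute force: send $\mathbb{R} \mapsto 1$, $\mathbb{C} \mapsto 0$, $\mathbb{H} \mapsto -1$ and check every entry of the table. A quick Python sketch:

```python
# The multiplication table of BR(R) from above.
table = {
    ('R', 'R'): 'R', ('R', 'C'): 'C', ('R', 'H'): 'H',
    ('C', 'R'): 'C', ('C', 'C'): 'C', ('C', 'H'): 'C',
    ('H', 'R'): 'H', ('H', 'C'): 'C', ('H', 'H'): 'R',
}

# Proposed isomorphism onto the multiplicative monoid {1, 0, -1}.
phi = {'R': 1, 'C': 0, 'H': -1}

for (x, y), z in table.items():
    assert phi[x] * phi[y] == phi[z]   # phi is a monoid homomorphism
# phi is also a bijection, so BR(R) = {1, 0, -1} as monoids.
```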

Example. The Brauer monoid of a finite field $k$ is the disjoint union

$$\mathrm{BR}(k) = \coprod_{F \in L} \mathrm{Br}(F)$$

where $L$ is the lattice of subfields of the algebraic closure $\overline{k}$ that are finite extensions of $k$. However, we’ve seen that $\mathrm{Br}(F)$ is always the trivial group. Thus

$$\mathrm{BR}(k) \cong L$$

with the monoid structure being the operation $\vee$ in the lattice $L$.
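Concretely, identifying $\mathbb{F}_{p^n}$ with the integer $n$, divisibility is the ordering and the join is the least common multiple, so $\mathrm{BR}(\mathbb{F}_p)$ is just the positive integers under $\mathrm{lcm}$. A tiny sketch:

```python
from math import lcm

# BR(F_p): identify F_{p^n} with n; then F_{p^n} ∨ F_{p^m} = F_{p^lcm(n,m)}.
def brauer_product(n, m):
    return lcm(n, m)

assert brauer_product(1, 5) == 5    # [F_p] is the identity
assert brauer_product(4, 6) == 12   # F_{p^4} ∨ F_{p^6} = F_{p^12}
assert brauer_product(6, 6) == 6    # every element is idempotent
```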

Example. The Brauer monoid of $\mathbb{Q}$ seems quite complicated to me, since it’s the disjoint union of $\mathrm{Br}(F)$ for all $F \subset \overline{\mathbb{Q}}$ that are finite extensions of $\mathbb{Q}$. Such fields $F$ are called algebraic number fields, and their Brauer groups can, I believe, be computed using the Albert–Brauer–Hasse–Noether theorem. However, here we are doing this for all algebraic number fields, and also keeping track of how they ‘fit together’ using the so-called restriction maps $\mathrm{Br}_{F',F} \colon \mathrm{Br}(F) \to \mathrm{Br}(F')$ whenever $F \subseteq F'$. The absolute Galois group of a field always acts on its Brauer monoid, so the rather fearsome absolute Galois group of $\mathbb{Q}$ acts on $\mathrm{BR}(\mathbb{Q})$, for whatever that’s worth.

Fleeing the siren song of number theory, let us move on to my main topic of interest, which is the ‘super’ or $\mathbb{Z}_2$-graded version of this whole story.

## Review of superalgebra

We now want to repeat everything we just did, systematically replacing the category of vector spaces over $k$ by the category of super vector spaces over $k$, which are $\mathbb{Z}_2$-graded vector spaces:

$$V = V_0 \oplus V_1$$

We call the elements of $V_0$ even and the elements of $V_1$ odd. Elements of either $V_0$ or $V_1$ are called homogeneous, and we say an element $a \in V_i$ has degree $i$. A morphism in the category of super vector spaces is a linear map that preserves the degree of homogeneous elements.

The category of super vector spaces is symmetric monoidal in a way where we introduce a minus sign when we switch two odd elements.

### Simple superalgebras and division superalgebras

A superalgebra is a monoid in the category of super vector spaces. In other words, it is a super vector space $A = A_0 \oplus A_1$ where the vector space $A$ is an algebra in the usual sense and

$$a \in A_i, \; b \in A_j \quad \implies \quad a \cdot b \in A_{i+j}$$

where we do our addition mod 2. There is a tensor product of superalgebras, where

$$(A \otimes B)_i = \bigoplus_{i = j + k} A_j \otimes B_k$$

and multiplication is defined on homogeneous elements by:

$$(a \otimes b)(a' \otimes b') = (-1)^{i j} \; a a' \otimes b b'$$

where $b \in B_i$ and $a' \in A_j$ are the elements getting switched.
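The sign rule can be encoded directly. Below is a minimal Python model (my own encoding, not from the post) of homogeneous elements of $A \otimes B$ where $A = B = k[\theta]$ is the exterior algebra on one odd generator $\theta$ with $\theta^2 = 0$; multiplication carries the Koszul sign $(-1)^{ij}$ from moving $b$ (degree $i$) past $a'$ (degree $j$), and one sees that two odd elements anticommute.

```python
# A homogeneous element of A ⊗ B, with A = B = the exterior algebra
# k[θ], θ odd, θ² = 0.  Encoded as (coefficient, degree in A, degree in B).
def tensor_mul(u, v):
    c1, a1, b1 = u
    c2, a2, b2 = v
    if a1 + a2 > 1 or b1 + b2 > 1:      # θ² = 0 in either factor
        return (0, 0, 0)
    sign = (-1) ** (b1 * a2)            # move b (deg b1) past a' (deg a2)
    return (sign * c1 * c2, a1 + a2, b1 + b2)

x = (1, 0, 1)   # 1 ⊗ θ  (odd)
y = (1, 1, 0)   # θ ⊗ 1  (odd)
assert tensor_mul(x, y) == (-1, 1, 1)   # (1⊗θ)(θ⊗1) = -θ⊗θ
assert tensor_mul(y, x) == (1, 1, 1)    # (θ⊗1)(1⊗θ) = +θ⊗θ
```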

An ideal $I$ of a superalgebra is homogeneous if it is of the form

$$I = I_0 \oplus I_1$$

where $I_i \subseteq A_i$. We can take the quotient of a superalgebra by a homogeneous two-sided ideal and get another superalgebra. So, we say a superalgebra $A$ over $k$ is simple if its only two-sided homogeneous ideals are $\{0\}$ and $A$ itself.

A division superalgebra over $k$ is a superalgebra $A$ such that for any homogeneous $a \ne 0$ there exists $b \in A$ with $a b = b a = 1$.

At this point it is clear what we aim to do: generalize Brauer groups to this ‘super’ context by replacing division algebras with division superalgebras. Luckily this was already done a long time ago, by Wall:

• C. T. C. Wall, Graded Brauer groups, Journal für die reine und angewandte Mathematik 213 (1963–1964), 187–199.

He showed there are 10 division superalgebras over $\mathbb{R}$, and showed how 8 of these become elements of a kind of super Brauer group for $\mathbb{R}$, now called the ‘Brauer–Wall’ group. The other 2 become elements of the Brauer–Wall group of $\mathbb{C}$. A more up-to-date treatment of some of this material can be found here:

• Pierre Deligne, Notes on spinors, in Quantum Fields and Strings: a Course for Mathematicians, vol. 1, AMS, Providence, RI, 1999, pp. 99–135.

Nontrivial results that I state without proof will come from these sources.

Every division superalgebra is simple. Conversely, we want a super-Wedderburn theorem describing simple superalgebras in terms of division superalgebras. However, this must be more complicated than the ordinary Wedderburn theorem saying every simple algebra is a matrix algebra $D[n]$ with $D$ a division algebra.

After all, besides matrix algebras, we have ‘matrix superalgebras’ to contend with. For any $p, q \ge 0$ let $k^{p|q}$ be the super vector space with even part $k^p$ and odd part $k^q$. Then its endomorphism algebra

$$k[p|q] = \mathrm{End}(k^{p|q})$$

becomes a superalgebra in a standard way, called a matrix superalgebra. Matrix superalgebras are always simple.

Deligne gives a classification of ‘central simple’ superalgebras, and from this we can derive a super-Wedderburn theorem. But what does ‘central simple’ mean in this context?

The supercommutator of two homogeneous elements $a \in A_i$, $b \in A_j$ of a superalgebra $A$ is

$$[a, b] = a b - (-1)^{i j} b a$$

We can extend this by bilinearity to all elements of $A$. We say $a, b \in A$ supercommute if $[a, b] = 0$. The supercenter of $A$ is the set of elements of $A$ that supercommute with every element of $A$. If all elements of $A$ supercommute, or equivalently if the supercenter of $A$ is all of $A$, we say $A$ is supercommutative.

I believe a superalgebra $A$ over $k$ is central simple if $A$ is simple and its supercenter is just $k \subseteq A_0$, the scalar multiples of the identity. Deligne gives a more complicated definition of ‘central simple’, but then in Remark 3.5 proves it is equivalent to being semisimple with supercenter just $k$. I believe this is equivalent to the more reasonable-sounding condition I just gave, but have not carefully checked.

In Remark 3.5, Deligne says that by copying an argument in Chapter 8 of Bourbaki’s Algebra one can show:

Proposition. Any central simple superalgebra over $k$ is of the form $D[p|q]$ for some division superalgebra $D$ whose supercenter is $k$. Conversely, any superalgebra of this form is central simple.

Starting from this, Guo Chuan Thiang showed me how to prove the following:

Super-Wedderburn Theorem. Suppose $A$ is a simple superalgebra over $k$, where $k$ is a field not of characteristic 2. Then the supercenter $Z(A)$ of $A$ is purely even, and is a field extending $k$. Moreover, $A$ is isomorphic to $D[p|q]$, where $D$ is some division superalgebra over $Z(A)$.

It follows that any simple superalgebra over $k$ is of the form $D[p|q]$ where $D$ is a division superalgebra over $k$. Conversely, if $D$ is any division superalgebra over $k$, then $D[p|q]$ is a simple superalgebra over $k$.

Proof. Suppose $A$ is a simple superalgebra over $k$, and let $Z(A)$ be its supercenter. Suppose $a$ is a nonzero homogeneous element of $Z(A)$. Then $a A$ is a homogeneous two-sided ideal of $A$. Since this ideal contains $a$ it is nonzero. Thus, it must be $A$ itself. So, there exists $b \in A$ such that $a b = 1$.

If $a$ is even, $b$ must be as well, and we obtain $b a = a b = 1$, so $a$ has an inverse. Thus, the even part of $Z(A)$ is a field.

If $a$ is odd, it supercommutes with itself, so $a^2 = -a^2$. Multiplying on the left by $b$, it follows that $a = -a$, so $a = 0$, since $k$ is not of characteristic 2.

In short, nonzero homogeneous elements of $Z(A)$ must be even and invertible. It follows that $Z(A)$ is purely even, and is a field extending $k$. $A$ is central over this field $Z(A)$, so by the previous proposition we see $A \cong D[p|q]$ for some division superalgebra $D$ over $Z(A)$. $D$ will automatically be a division superalgebra over the smaller field $k$ as well.

Conversely, suppose $D$ is a division superalgebra over $k$. Since $D$ is simple, its supercenter is a field $F$ extending $k$. By the previous proposition $D[p|q]$ is a central simple superalgebra over $F$. It follows that $D[p|q]$ is simple as a superalgebra over $k$. ∎

Here is an all-important example:

Example. Let $k[\sqrt{-1}]$ be the free superalgebra over $k$ on an odd generator whose square is $-1$. This superalgebra has a 1-dimensional even part and a 1-dimensional odd part. It is a division superalgebra. It is not supercommutative, since $\sqrt{-1}$ does not supercommute with itself. It is central simple: its supercenter is just $k$. Over an algebraically closed field $k$ of characteristic other than 2, the only division superalgebras are $k$ itself and $k[\sqrt{-1}]$.

I don’t understand what happens in characteristic 2.

### Morita equivalence and the Brauer–Wall group

The Brauer–Wall group consists of Morita equivalence classes of central simple superalgebras, or equivalently, Morita equivalence classes of division superalgebras. For this to make sense, first we need to define Morita equivalence.

Given a superalgebra $A$ over $k$, we define a left module to be a super vector space $V$ over $k$ equipped with a morphism (that is, a grade-preserving linear map)

$$A \otimes V \to V$$

obeying the usual axioms of a left module. We define a morphism of left $A$-modules in the obvious way, and let $\mathrm{Rep}(A)$ be the category of left $A$-modules.

We say two superalgebras $A, B$ over $k$ are Morita equivalent if $\mathrm{Rep}(A) \simeq \mathrm{Rep}(B)$. In this situation we write $A \simeq B$.

Example. Every matrix superalgebra $k[p|q]$ is Morita equivalent to $k$.

Example. If $A \simeq A'$ and $B \simeq B'$ then $A \otimes_k B \simeq A' \otimes_k B'$.

Example. Since every central simple superalgebra over $k$ is of the form $D[p|q] = D \otimes k[p|q]$ for some division superalgebra $D$ whose supercenter is just $k$, the previous two examples imply that every central simple superalgebra over $k$ is Morita equivalent to a division superalgebra whose supercenter is just $k$.

We define the Brauer–Wall group $\mathrm{Bw}(k)$ of the field $k$ to be the set of Morita equivalence classes of central simple superalgebras over $k$, given the following multiplication:

$$[A] \otimes [B] \;:=\; [A \otimes B]$$

This is well-defined because the tensor product of central simple superalgebras is again central simple. Given that, $\mathrm{Bw}(k)$ is clearly a commutative monoid. But in fact it’s an abelian group.

Since every central simple superalgebra over $k$ is Morita equivalent to a division superalgebra whose supercenter is just $k$, we can compute Brauer–Wall groups by focusing on these division superalgebras.

Example. For any algebraically closed field $k$, the Brauer–Wall group $\mathrm{Bw}(k)$ is $\mathbb{Z}_2$, where the two elements are $[k]$ and $[k[\sqrt{-1}]]$. In particular, $\mathrm{Bw}(\mathbb{C})$ is $\mathbb{Z}_2$. Wall showed that this $\mathbb{Z}_2$ is related to the period-2 phenomenon in complex K-theory and the theory of complex Clifford algebras. The point is that

$$\mathbb{C}[\sqrt{-1}]^{\otimes n} \cong \mathbb{C}\mathrm{liff}_n$$

where $\mathbb{C}\mathrm{liff}_n$ is the complex Clifford algebra on $n$ square roots of $-1$, made into a superalgebra in the usual way. It is well-known that

$$\mathbb{C}\mathrm{liff}_2 \simeq \mathbb{C}\mathrm{liff}_0$$

and this gives the period-2 phenomenon.
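The $n = 2$ case is easy to see with matrices. Take $e_1 = i\sigma_x$ and $e_2 = i\sigma_y$ (my choice of generators, built from the Pauli matrices): these are anticommuting square roots of $-1$, and their products span all of $M_2(\mathbb{C})$, which as a matrix superalgebra is Morita equivalent to $\mathbb{C} = \mathbb{C}\mathrm{liff}_0$. A NumPy sketch:

```python
import numpy as np

# Two anticommuting square roots of -1 in the 2x2 complex matrices,
# built from the Pauli matrices sigma_x and sigma_y.
e1 = 1j * np.array([[0, 1], [1, 0]])      # i * sigma_x
e2 = 1j * np.array([[0, -1j], [1j, 0]])   # i * sigma_y

assert np.allclose(e1 @ e1, -np.eye(2))   # e1^2 = -1
assert np.allclose(e2 @ e2, -np.eye(2))   # e2^2 = -1
assert np.allclose(e1 @ e2, -(e2 @ e1))   # e1, e2 anticommute

# The products 1, e1, e2, e1 e2 span a 4-dimensional complex space,
# i.e. all of M_2(C): Cliff_2(C) is a matrix (super)algebra.
span = np.array([m.ravel() for m in (np.eye(2), e1, e2, e1 @ e2)])
assert np.linalg.matrix_rank(span) == 4
```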

Example. $\mathrm{Bw}(\mathbb{R})$ is much more interesting: by a theorem of Wall, it is $\mathbb{Z}_8$, generated by $[\mathbb{R}[\sqrt{-1}]]$. Wall showed that this $\mathbb{Z}_8$ is related to the period-8 phenomenon in real K-theory and the theory of real Clifford algebras. The point is that

$$\mathbb{R}[\sqrt{-1}]^{\otimes n} \cong \mathrm{Cliff}_n$$

where $\mathrm{Cliff}_n$ is the real Clifford algebra on $n$ square roots of $-1$, made into a superalgebra in the usual way. It is well-known that

$$\mathrm{Cliff}_8 \simeq \mathrm{Cliff}_0$$

and this gives the period-8 phenomenon.

Example. More generally, Wall showed that as long as $k$ doesn’t have characteristic 2, $\mathrm{Bw}(k)$ is an iterated extension of $\mathbb{Z}_2$ by $k^*/(k^*)^2$ by $\mathrm{Br}(k)$. For a quick modern proof, see Lemma 3.7 in Deligne’s paper. In the case $k = \mathbb{R}$ all three of these groups are $\mathbb{Z}_2$, and the iterated extension gives $\mathbb{Z}_8$.

## The Brauer–Wall monoid

And now the rest practically writes itself. Let $k$ be a field and $\overline{k}$ its algebraic closure. As before, let $L$ be the semilattice of intermediate fields

$$k \subseteq F \subseteq \overline{k}$$

where $F$ is a finite extension of $k$.

We define the underlying set of the Brauer–Wall monoid of $k$ to be the disjoint union

$$\mathrm{BW}(k) = \coprod_{F \in L} \mathrm{Bw}(F)$$

To make this into a commutative monoid, we use the functoriality of the Brauer–Wall group. Suppose we have an inclusion of fields $F \subseteq F'$ in the semilattice $L$. Then we get a homomorphism

$$\mathrm{Bw}_{F',F} \colon \mathrm{Bw}(F) \to \mathrm{Bw}(F')$$

as follows:

$$\mathrm{Bw}_{F',F}[A] = [F' \otimes_F A]$$

and this gives a functor

$\mathrm{Bw}:L\to \mathrm{AbGp} Bw: L \to AbGp $

Using this, we multiply two elements in the Brauer–Wall monoid as follows. Given $\left[A\right]\in \mathrm{Bw}\left(F\right)\left[A\right] \in Bw\left(F\right)$ and $\left[A\prime \right]\in \mathrm{Bw}\left(F\prime \right)\left[A\text{'}\right] \in Bw\left(F\text{'}\right)$, their product is

$\left[A\right]\cdot \left[A\prime \right]\phantom{\rule{thickmathspace}{0ex}}:=\phantom{\rule{thickmathspace}{0ex}}{\mathrm{Bw}}_{F\vee F\prime ,F}\left[A\right]\phantom{\rule{thickmathspace}{0ex}}\cdot {\mathrm{Bw}}_{F\vee F\prime ,F\prime }\left[A\prime \right] \left[A\right] \cdot \left[A\text{'}\right] \; := \; Bw_\left\{F \vee F\text{'}, F\right\} \left[A\right] \; \cdot Bw_\left\{F \vee F\text{'}, F\text{'}\right\} \left[A\text{'}\right] $

or in other words

$\left[A\right]\cdot \left[A\prime \right]=\left[A{\otimes }_{F}\left(F\vee F\prime \right){\otimes }_{F\prime }A\prime \right] \left[A\right] \cdot \left[A\text{'}\right] = \left[A \otimes_F \left(F \vee F\text{'}\right) \otimes_\left\{F\text{'}\right\} A\text{'}\right] $

Proposition. With the above multiplication, $\mathrm{BW}(k)$ is a commutative monoid.

Proof. The same argument that let us show associativity for multiplication in the Brauer monoid works again here. ∎

Example. As a set, the Brauer–Wall monoid of the real numbers is the disjoint union

$\mathrm{BW}(\mathbb{R}) = \mathrm{Bw}(\mathbb{R}) \sqcup \mathrm{Bw}(\mathbb{C}) \cong \mathbb{Z}_8 \sqcup \mathbb{Z}_2$

The monoid operation — let’s call it addition now — is the usual addition on $\mathbb{Z}_8$ when applied to two elements of that group, and the usual addition on $\mathbb{Z}_2$ when applied to two elements of that group. The only interesting part is when we add an element $a \in \mathbb{Z}_8$ and an element $b \in \mathbb{Z}_2$. For this we need to convert $a$ into an element of $\mathbb{Z}_2$. For that we use the homomorphism

$\mathrm{Bw}_{\mathbb{C},\mathbb{R}} \colon \mathrm{Bw}(\mathbb{R}) \to \mathrm{Bw}(\mathbb{C})$

which sends $[\mathbb{R}[\sqrt{-1}]]$ to $[\mathbb{C}[\sqrt{-1}]]$. More concretely,

$\mathrm{Bw}_{\mathbb{C},\mathbb{R}} \colon \mathbb{Z}_8 \to \mathbb{Z}_2$

takes an integer mod 8 and gives the corresponding integer mod 2.

So, very concretely,

$\mathrm{BW}(\mathbb{R}) \cong \mathbb{10} = \{0,1,2,3,4,5,6,7,\mathbf{0},\mathbf{1}\}$

where the monoid operation in $\mathbb{10}$ is addition mod 8 for two lightface numbers, but addition mod 2 for two boldface numbers or for a boldface and a lightface one.
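For readers who like to compute, the rule just described can be modeled in a few lines of Python. This is only a sketch of the arithmetic; the class name `BW` and the boolean flag marking the boldface ($\mathbb{Z}_2$) part are my own encoding, not notation from the post:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BW:
    """An element of the 10-element monoid: lightface 0..7 live in
    Z_8 = Bw(R); 'boldface' 0, 1 (complex_=True) live in Z_2 = Bw(C)."""
    value: int
    complex_: bool

    def __add__(self, other):
        if self.complex_ or other.complex_:
            # If either summand is boldface, push both into Bw(C)
            # by reducing mod 2, then add there.
            return BW((self.value + other.value) % 2, True)
        # Otherwise both live in Bw(R) = Z_8.
        return BW((self.value + other.value) % 8, False)

assert BW(5, False) + BW(6, False) == BW(3, False)  # 5 + 6 = 3 mod 8
assert BW(1, True) + BW(1, True) == BW(0, True)     # boldface: mod 2
assert BW(7, False) + BW(1, True) == BW(0, True)    # 7 maps to 1 mod 2
```

The `frozen=True` dataclass makes elements hashable and comparable by value, which is all the structure the check needs.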

## Conclusion

I had meant to include a section explaining in detail how the 10 elements of this monoid $\mathbb{10}$ correspond to 10 kinds of matter, but this post is getting too long. So for now, at least, you can click on this picture to get an explanation of that!

### References

Besides what I’ve already mentioned about the classification of simple superalgebras, here are some other links. Wall proved a kind of super-Wedderburn theorem starting in the section of his paper called Elementary properties. Natalia Zhukavets has an Introduction to superalgebras which in Theorem 1.5 proves that over an algebraically closed field of characteristic different from 2, any simple superalgebra is of the form $k[p|q]$ or $D[n]$ where $D = k[u]$, $u$ being an odd square root of 1. Over an algebraically closed field, this superdivision algebra $D$ is isomorphic to the division algebra that I called $k[\sqrt{-1}]$. Over a field that is not algebraically closed, they can be different, and there can be many nonisomorphic division algebras obtained by adjoining to $k$ an odd square root of $a \in k$ where $a \ne 0$.

Jinkui Wan and Weiqiang Wang have a paper with a Digression on superalgebras which summarizes Wall’s results in more modern language. Benjamin Gammage has an expository paper with a Classification of finite-dimensional simple superalgebras. This only classifies the ‘central’ ones — but as we’ve seen, that’s the key case.

### The n-Category Cafe

The Ten-Fold Way (Part 4)

Back in 2005, Todd Trimble came out with a short paper on the super Brauer group and super division algebras, which I’d like to TeXify and reprint here.

In it, he gives extremely efficient proofs of several facts I alluded to last time. Namely:

• There are exactly 10 real division superalgebras.

• 8 of them have center $\mathbb{R}$, and these are Morita equivalent to the real Clifford algebras $\mathrm{Cliff}_0, \dots, \mathrm{Cliff}_7$.

• 2 of them have center $\mathbb{C}$, and these are Morita equivalent to the complex Clifford algebras $\mathbb{C}\mathrm{liff}_0$ and $\mathbb{C}\mathrm{liff}_1$.

• The real Clifford algebras obey

$\mathrm{Cliff}_i \otimes_{\mathbb{R}} \mathrm{Cliff}_j \simeq \mathrm{Cliff}_{i+j \bmod 8}$

where $\simeq$ means they’re Morita equivalent as superalgebras.

It easily follows from his calculations that also:

• The complex Clifford algebras obey

$\mathbb{C}\mathrm{liff}_i \otimes_{\mathbb{C}} \mathbb{C}\mathrm{liff}_j \simeq \mathbb{C}\mathrm{liff}_{i+j \bmod 2}$

These facts lie at the heart of the ten-fold way. So, let’s see why they’re true!

Before we start, two comments are in order. First, Todd uses Deligne’s term ‘super Brauer group’ where I decided to use ‘Brauer–Wall group’. Second, and more importantly, there’s something about Morita equivalence everyone should know.

In my last post I said that two algebras are Morita equivalent if they have equivalent categories of representations. Todd uses another definition which I actually like much better. It’s equivalent, and while it takes longer to explain, it reveals more about what’s really going on. For any field $k$, there is a bicategory with

• algebras over $k$ as objects,

• $A$-$B$ bimodules as 1-morphisms from the algebra $A$ to the algebra $B$, and

• bimodule homomorphisms as 2-morphisms.

Two algebras $A$ and $B$ over $k$ are Morita equivalent if they are equivalent in this bicategory; that is, if there’s an $A$-$B$ bimodule $M$ and a $B$-$A$ bimodule $N$ such that

$M \otimes_B N \cong A$

as an $A$-$A$ bimodule and

$N \otimes_A M \cong B$

as a $B$-$B$ bimodule. The same kind of definition works for Morita equivalence of superalgebras, and Todd uses that here.

So, with no further ado, here is Todd’s note.

## The super Brauer group and division superalgebras

### The super Brauer group

Let $\mathrm{SuperVect}$ be the symmetric monoidal category of finite-dimensional super vector spaces over $\mathbb{R}$. By super algebra I mean a monoid in this category. There’s a bicategory whose objects are super algebras $A$, whose 1-morphisms $M \colon A \to B$ are left $A$-, right $B$-modules in $\mathrm{SuperVect}$, and whose 2-morphisms are homomorphisms between modules. This is a symmetric monoidal bicategory under the usual tensor product on $\mathrm{SuperVect}$.

$A$ and $B$ are Morita equivalent if they are equivalent objects in this bicategory. Equivalence classes $[A]$ form an abelian monoid whose multiplication is given by the monoidal product. The super Brauer group of $\mathbb{R}$ is the group of invertible elements of this monoid.

If $[B]$ is inverse to $[A]$ in this monoid, then in particular $A \otimes (-)$ can be considered left biadjoint to $B \otimes (-)$. On the other hand, in the bicategory above we always have a biadjunction

$\begin{array}{c} A \otimes C \to D \\ \hline C \to A^* \otimes D \end{array}$

essentially because left $A$-modules are the same as right $A^*$-modules, where $A^*$ denotes the super algebra opposite to $A$. Since right biadjoints are unique up to equivalence, we see that if an inverse to $[A]$ exists, it must be $[A^*]$.

This can be sharpened: an inverse to $[A]$ exists iff the unit and counit

$1 \to A^* \otimes A \qquad A \otimes A^* \to 1$

are equivalences in the bicategory. Actually, one is an equivalence iff the other is, because both of these canonical 1-morphisms are given by the same $A$-bimodule, namely the one given by $A$ acting on both sides of the underlying superspace of $A$ (call it $S$) by multiplication. Either is an equivalence if the bimodule structure map

$A^* \otimes A \to \mathrm{Hom}(S, S),$

which is a map of superalgebras, is an isomorphism.

### $\mathrm{Cliff}_1$

As an example, let $A = \mathrm{Cliff}_1$ be the Clifford algebra generated by the 1-dimensional space $\mathbb{R}$ with the usual quadratic form $Q(x) = x^2$, and $\mathbb{Z}_2$-graded in the usual way. Thus, the homogeneous parts of $A$ are 1-dimensional and there is an odd generator $i$ satisfying $i^2 = -1$. The opposite $A^*$ is similar except that there is an odd generator $e$ satisfying $e^2 = 1$. Writing $S$ as a sum of even and odd parts $\mathbb{R} + \mathbb{R}i$, the map

$A^* \otimes A \to \mathrm{Hom}(S, S)$

has a matrix representation

$e \otimes i \mapsto \left(\begin{array}{cc} -1 & 0 \\ 0 & 1 \end{array}\right)$

$1 \otimes i \mapsto \left(\begin{array}{cc} 0 & -1 \\ 1 & 0 \end{array}\right)$

$e \otimes 1 \mapsto \left(\begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array}\right)$

which makes it clear that this map is surjective and thus an isomorphism. Hence $[\mathrm{Cliff}_1]$ is invertible.
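Surjectivity here is just a rank count: the three matrices above, together with the image of $1 \otimes 1$ (the identity), span the 4-dimensional space $\mathrm{Hom}(S,S)$. A quick NumPy sanity check of that claim (the code is mine, not from Todd’s note):

```python
import numpy as np

# Images of 1⊗1, e⊗i, 1⊗i, e⊗1 in Hom(S, S), with S = R + R·i.
mats = [
    np.eye(2),
    np.array([[-1, 0], [0, 1]]),
    np.array([[0, -1], [1, 0]]),
    np.array([[0, 1], [1, 0]]),
]

# Flatten each 2x2 matrix to a vector in R^4; rank 4 means the four
# images are linearly independent, so the map A* ⊗ A → Hom(S, S)
# between 4-dimensional spaces is surjective, hence an isomorphism.
M = np.array([m.flatten() for m in mats])
assert np.linalg.matrix_rank(M) == 4
```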

One manifestation of Bott periodicity is that $[\mathrm{Cliff}_1]$ has order 8. We will soon see a very easy proof of this fact. A theorem of C. T. C. Wall is that $[\mathrm{Cliff}_1]$ in fact generates the super Brauer group; I believe this can be shown by classifying super division algebras, as discussed below.

### Bott periodicity

That $[\mathrm{Cliff}_1]$ has order 8 is an easy calculation. Let $\mathrm{Cliff}_r$ denote the $r$-fold tensor power of $\mathrm{Cliff}_1$. $\mathrm{Cliff}_2$, for instance, has two supercommuting odd elements $i, j$ satisfying $i^2 = j^2 = -1$; it follows that $k := ij$ satisfies $k^2 = -1$, and we get the usual quaternions, graded so that the even part is the span $\langle 1, k \rangle$ and the odd part is $\langle i, j \rangle$.
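The quaternion relations can be checked concretely with $2 \times 2$ complex matrices. The particular choice $i\sigma_x$, $i\sigma_y$ below is one convenient realization of two anticommuting square roots of $-1$ (my choice, not forced by the text):

```python
import numpy as np

# Two anticommuting generators with square -1, realized as
# i*sigma_x and i*sigma_y.
gi = 1j * np.array([[0, 1], [1, 0]])     # plays the role of i
gj = 1j * np.array([[0, -1j], [1j, 0]])  # plays the role of j
I = np.eye(2)

assert np.allclose(gi @ gi, -I) and np.allclose(gj @ gj, -I)
assert np.allclose(gi @ gj, -(gj @ gi))  # odd elements supercommute: ij = -ji
gk = gi @ gj                             # k := i j
assert np.allclose(gk @ gk, -I)          # k^2 = -1: the quaternion relations
```

The last assertion is exactly the computation in the text: $k^2 = ijij = -i^2 j^2 = -1$, using $ji = -ij$.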

$\mathrm{Cliff}_3$ has three supercommuting odd elements $i, j, l$, all of which are square roots of $-1$. It follows that $e = ijl$ is an odd central involution (here ‘central’ is taken in the ungraded sense), and also that $i' = jl$, $j' = li$, $k' = ij$ satisfy the Hamiltonian equations

$(i')^2 = (j')^2 = (k')^2 = i'j'k' = -1,$

so we have $\mathrm{Cliff}_3 = \mathbb{H}[e]/\langle e^2 - 1\rangle$. Note this is the same as

$\mathbb{H} \otimes \mathrm{Cliff}_1^*$

where the $\mathbb{H}$ here is the quaternions viewed as a super algebra concentrated in degree 0 (i.e. purely bosonic).

Then we see immediately that $\mathrm{Cliff}_4 = \mathrm{Cliff}_3 \otimes \mathrm{Cliff}_1$ is equivalent to purely bosonic $\mathbb{H}$ (since the $\mathrm{Cliff}_1$ cancels $\mathrm{Cliff}_1^*$ in the super Brauer group).

At this point we are done: we know that conjugation on (purely bosonic) $\mathbb{H}$ gives an isomorphism

$\mathbb{H}^* \cong \mathbb{H}$

hence $[\mathbb{H}]^{-1} = [\mathbb{H}^*] = [\mathbb{H}]$, i.e. $[\mathbb{H}] = [\mathrm{Cliff}_4]$ has order 2! Hence $[\mathrm{Cliff}_1]$ has order 8.

### The super Brauer clock

All this generalizes to arbitrary Clifford algebras: if a real quadratic vector space $(V, Q)$ has signature $(r, s)$, then the superalgebra $\mathrm{Cliff}(V, Q)$ is isomorphic to $A^{\otimes r} \otimes (A^*)^{\otimes s}$, where $A^{\otimes r}$ denotes the $r$-fold tensor product of $A = \mathrm{Cliff}_1$. By the above calculation we see that $\mathrm{Cliff}(V, Q)$ is equivalent to $\mathrm{Cliff}_{r-s}$ where $r - s$ is taken modulo 8.
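The defining relations of $\mathrm{Cliff}_n$ (that is, $n$ anticommuting generators squaring to $-1$) are easy to realize and verify numerically. Here is a sketch using the standard Pauli-matrix construction; the construction is mine, assuming only NumPy:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_all(mats):
    return reduce(np.kron, mats)

def clifford_generators(n):
    """n anticommuting matrices e_a with e_a^2 = -1, via the usual
    Pauli-matrix tensor-product trick (matrices of size 2^ceil(n/2))."""
    m = (n + 1) // 2  # number of tensor factors
    gens = []
    for k in range(1, m + 1):
        pre, post = [sz] * (k - 1), [I2] * (m - k)
        gens.append(1j * kron_all(pre + [sx] + post))
        gens.append(1j * kron_all(pre + [sy] + post))
    return gens[:n]

# Verify e_a e_b + e_b e_a = -2 δ_ab for n = 5 generators.
gens = clifford_generators(5)
dim = gens[0].shape[0]
for a, ea in enumerate(gens):
    for b, eb in enumerate(gens):
        target = -2 * np.eye(dim) if a == b else np.zeros((dim, dim))
        assert np.allclose(ea @ eb + eb @ ea, target)
```

Mixed signatures $(r, s)$ work the same way: drop the factor of $1j$ on $s$ of the generators to make them square to $+1$ instead.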

For the record, then, here are the hours of the super Brauer clock, where $e$ denotes an odd element and $\simeq$ denotes Morita equivalence:

$\begin{array}{ccl} \mathrm{Cliff}_0 & \simeq & \mathbb{R} \\ \mathrm{Cliff}_1 & \simeq & \mathbb{R} + \mathbb{R}e, \quad e^2 = -1 \\ \mathrm{Cliff}_2 & \simeq & \mathbb{C} + \mathbb{C}e, \quad e^2 = -1,\; ei = -ie \\ \mathrm{Cliff}_3 & \simeq & \mathbb{H} + \mathbb{H}e, \quad e^2 = 1,\; ei = ie,\; ej = je,\; ek = ke \\ \mathrm{Cliff}_4 & \simeq & \mathbb{H} \\ \mathrm{Cliff}_5 & \simeq & \mathbb{H} + \mathbb{H}e, \quad e^2 = -1,\; ei = ie,\; ej = je,\; ek = ke \\ \mathrm{Cliff}_6 & \simeq & \mathbb{C} + \mathbb{C}e, \quad e^2 = 1,\; ei = -ie \\ \mathrm{Cliff}_7 & \simeq & \mathbb{R} + \mathbb{R}e, \quad e^2 = 1 \end{array}$

All the superalgebras on the right are in fact division superalgebras, i.e. superalgebras in which every nonzero homogeneous element is invertible.

To prove Wall’s result that $[\mathrm{Cliff}_1]$ generates the super Brauer group, we need a lemma: any element in the super Brauer group is the class of a central division superalgebra, that is, one with $\mathbb{R}$ as its center.

Then, if we classify the division superalgebras over $\mathbb{R}$ and show the central ones are Morita equivalent to $\mathrm{Cliff}_0, \dots, \mathrm{Cliff}_7$, we’ll be done.

### Classifying real division superalgebras

I’ll take as known that the only associative division algebras over $\mathbb{R}$ are $\mathbb{R}$, $\mathbb{C}$, and $\mathbb{H}$ — so the even part $A$ of an associative division superalgebra must be one of these. We can express the associativity of a superalgebra (with even part $A$) by saying that the odd part $M$ is an $A$-bimodule equipped with an $A$-bimodule pairing $\langle -, - \rangle \colon M \otimes_A M \to A$ such that:

$a\langle b, c\rangle = \langle a, b\rangle c \quad \text{for all}\ a, b, c \in M \qquad (\star)$

If the superalgebra is a division superalgebra which is not wholly concentrated in even degree, then multiplication by a nonzero odd element induces an isomorphism

$A \to M$

and so $M$ is 1-dimensional over $A$; choose a basis element $e$ for $M$.

The key observation is that for any $a \in A$, there exists a unique $a' \in A$ such that

$ae = ea'$

and that the $A$-bimodule structure forces $(ab)' = a'b'$. Hence we have an automorphism (fixing the real field)

$(-)' \colon A \to A$

and we can easily enumerate (up to isomorphism) the possibilities for associative division superalgebras over $\mathbb{R}$:

1. $A = \mathbb{R}$. Here we can adjust $e$ so that $e^2 := \langle e, e\rangle$ is either $-1$ or $1$. The corresponding division superalgebras occur at 1 o’clock and 7 o’clock on the super Brauer clock.

2. $A = \mathbb{C}$. There are two $\mathbb{R}$-automorphisms $\mathbb{C} \to \mathbb{C}$. In the case where the automorphism is conjugation, condition $(\star)$ for super associativity gives $\langle e, e\rangle e = e\langle e, e\rangle$, so that $\langle e, e\rangle$ must be real. Again $e$ can be adjusted so that $\langle e, e\rangle$ equals $-1$ or $1$. These possibilities occur at 2 o’clock and 6 o’clock on the super Brauer clock.

For the identity automorphism, we can adjust $e$ so that $\langle e, e\rangle$ is 1. This gives the super algebra $\mathbb{C}[e]/\langle e^2 - 1\rangle$ (where $e$ commutes with elements of $\mathbb{C}$). This does not occur on the super Brauer clock over $\mathbb{R}$. However, it does generate the super Brauer group over $\mathbb{C}$ (which is of order two).

3. $A = \mathbb{H}$. Here $\mathbb{R}$-automorphisms $\mathbb{H} \to \mathbb{H}$ are given by $h \mapsto xhx^{-1}$ for $x \in \mathbb{H}$. In other words

$he = exhx^{-1}$

whence $ex$ commutes with all elements of $\mathbb{H}$ (i.e. we can assume wlog that the automorphism is the identity). The properties of the pairing guarantee that $h\langle e, e\rangle = \langle e, e\rangle h$ for all $h \in \mathbb{H}$, so $\langle e, e\rangle$ is real and again we can adjust $e$ so that $\langle e, e\rangle$ equals $1$ or $-1$. These cases occur at 3 o’clock and 5 o’clock on the super Brauer clock.

This appears to be a complete (even if a bit pedestrian) analysis.
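Tallying the cases above together with the purely even division algebras $\mathbb{R}$, $\mathbb{C}$, $\mathbb{H}$ recovers the count promised at the start: 10 real division superalgebras, 8 with center $\mathbb{R}$ and 2 with center $\mathbb{C}$. A throwaway Python bookkeeping check (the tuple encoding is mine, not Todd’s):

```python
# Each entry: (even part, odd-part description or None, ungraded center).
# Note the center of H + He and of C + Ce (conjugation case) is R,
# and the center of purely even H is R as well.
superdivision = [
    ("R", None, "R"),                        # purely even R
    ("C", None, "C"),                        # purely even C
    ("H", None, "R"),                        # purely even H
    ("R", "e^2 = +1", "R"),                  # 7 o'clock
    ("R", "e^2 = -1", "R"),                  # 1 o'clock
    ("C", "conjugation, e^2 = +1", "R"),     # 6 o'clock
    ("C", "conjugation, e^2 = -1", "R"),     # 2 o'clock
    ("C", "identity, e^2 = +1", "C"),        # generator over C
    ("H", "e^2 = +1", "R"),                  # 3 o'clock
    ("H", "e^2 = -1", "R"),                  # 5 o'clock
]
assert len(superdivision) == 10
assert sum(1 for _, _, c in superdivision if c == "R") == 8
assert sum(1 for _, _, c in superdivision if c == "C") == 2
```

The two counts match the bullet points at the top of the post: 8 classes Morita equivalent to $\mathrm{Cliff}_0, \dots, \mathrm{Cliff}_7$, and 2 to the complex Clifford algebras.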

## August 11, 2014

### Clifford V. Johnson - Asymptotia

Mountain Sketch
I went for a little hike on Sunday. Usually when I'm here visiting at the Aspen Center for Physics I go on several hikes, but this year it looks like I will only do one, and a moderate one at that. I had a bit of a foot injury several weeks ago, so don't want to put too much stress on it for a while. If you've looked at the Aspen Center film (now viewable on YouTube!) you'll know from some of the interviews that this is a big component of many physicists' lives while at the Center. I find that it is nice to get my work to a point where I can step back from a calculation and think a bit more broadly about the physics for a while. A hike is great for that, and in all likelihood one comes back from the hike with new ideas and insights (as happened for me on this hike - more later)... maybe even an idea for a new calculation. So I took the bus up to the Maroon Bells and hiked up to Crater Lake and a bit beyond into the West Maroon Valley, hunting a few wildflowers. I will share some pictures of those later. (I've heard that they are great up at Buckskin pass, and I was tempted to push on up to there, but I resisted the temptation.) I brought along several pens, watercolour pencils, and a water brush (for the watercolour pencils) because I'd decided that I would do some sketches at various points... you know, really sit with the landscape and drink it in - in that [...] Click to continue reading this post

### astrobites - astro-ph reader's digest

Peeling apart a neutron star

Title: Properties of High-Density Matter in Neutron Stars
Authors: F. Weber, G. A. Contrera, M. G. Orsaria, W. Spinella, O. Zubairi
First Author’s institution: Department of Physics, San Diego State University

Neutron stars are the remnant cores of massive progenitor stars, and contain the most extreme states of matter detectable in the Universe. While much effort has been expended on examining matter at extreme densities and temperatures in terrestrial environments (e.g. experiments such as the Relativistic Heavy Ion Collider, LHC, etc.), neutron stars offer us a rare glimpse into how these states of matter occur in nature.

At the extreme densities found inside neutron stars, atoms are so densely packed together that new states of matter can exist. While neutron stars are extreme environments in themselves, it is possible for matter to transform into something even more exotic through an increase in density. The main example of this is quark deconfinement, in which composite particles (e.g. neutrons) break down into their constituent quarks. Quarks normally don’t exist as free particles, but this can happen under the extreme temperatures and densities at which quark deconfinement occurs. In the strange quark matter hypothesis, a quark star could result if quark matter is more stable than ordinary matter (Fig. 1).

Fig. 1: The predicted structures of a quark star and a neutron star.

Neutron stars usually spin at high rotational frequencies (which we observe as pulsars), and this rotation can also induce interesting changes in its structure. A large rotational velocity can alter a star’s core density through centrifugal forces (i.e. a faster spin leads to a decrease in density). This change in density can lead to a phase transition between baryons and their constituent quarks. The resulting transformation in the state of matter will change the star’s moment of inertia. A different moment of inertia will subsequently affect a neutron star’s spin rate, causing a spin up. Normally, neutron stars spin down over time, and thus their central densities increase slightly owing to a lack of centrifugal forces. If this scenario occurs, we should be able to observe a sudden increase in a neutron star’s spin rate, an effect known as “backbending”. By examining the braking behavior of pulsars over time, it might be possible to detect signs of quark deconfinement occurring within the core.

The structure and composition of neutron stars can also be affected by their magnetic fields. Neutron stars are likely deformed into oblate spheroids due to the extremely strong magnetic fields they produce. This resulting oblateness can increase the maximum mass of the neutron star that can be supported by neutron degeneracy pressure.

While lots of theoretical work has been done in modeling the structure of neutron stars and quark stars, much of this is yet to be observationally verified. Future data from telescopes such as the Chandra X-ray Observatory and ground-based observatories such as the Square Kilometer Array (SKA) will provide insights into the physics of these extreme objects. If the strange matter hypothesis holds true, the transitional state between a neutron star and quark star (known as a quark nova) could explain the origins of gamma-ray bursts, production of certain heavy elements, and anomalously luminous supernovae.

### Quantum Diaries

Latest video in Huffington Post’s Talk Nerdy to Me video series

Watch Fermilab Deputy Director Joe Lykken in the latest entry in Huffington Post’s “Talk Nerdy To Me” video series.

What’s the smallest thing in the universe? Check out the latest entry in Huffington Post‘s Talk Nerdy to Me video series. Host Jacqueline Howard takes the viewer inside Fermilab and explains how scientists look for the smallest components that make up our world. Fermilab Deputy Director Joe Lykken talks about the new discoveries we hope to make in exploring the subatomic realm.

View the 3-minute video at Huffington Post.

### Symmetrybreaking - Fermilab/SLAC

New game trades clicks for physics discoveries

A group of students at CERN have created a computer game that makes particle physics research as addictive as Candy Crush Saga.

If you’re hearing an incessant clicking sound right now, someone around you has probably just discovered Particle Clicker.

Particle Clicker is an addictive computer game a group of students created over the course of a 48-hour hackathon at CERN. Modeled after another compulsive clicking game, Cookie Clicker, it allows the player to work through a full career (or several, really) in particle physics over the course of about a day.

“We thought it would be good to have an addictive game that sneaks in some physics content,” says Igor Babuschkin, the Technical University of Dortmund student who originally proposed the idea.

In the beginning, the player must click on an image that looks suspiciously like the CMS detector at the Large Hadron Collider. Each click creates a particle collision, which produces data. Produce enough data, and you can conduct research, which earns you reputation points, which wins you grant money, which allows you to hire other people to do the clicking for you.

For the rest of the game, data and money accumulate on their own at faster and faster rates the more research you do, the more upgrades you purchase and the more staff you hire. Eventually you’re running a lab with PhD students, postdocs, research fellows, tenured professors, Nobel Prize winners and—most productive of all, according to the game—summer students. The research options follow a rough history of discoveries in particle physics.

The game—created by Babuschkin, Kevin Dungs, Gabor Biro, Tadej Novak and Jiannan Zhang (pictured above)—was the winning entry for CERN’s annual hackathon, called Webfest. Other teams created a mobile app for measuring elevation, a private query database, a crowdsourcing platform for non-governmental organizations responding to humanitarian disasters, and a 3-D game that uses Kinect technology.

Incremental games like this one, Cookie Clicker and another classic, Candy Box, “sound so simple and stupid,” Babuschkin says. “If you tell somebody about it, they say this could never work. The genre is addictive, but you can’t explain why.

“Even while making it, sometimes you caught yourself and said, ‘Oh no, I must stop playing.’ We’re not immune to the game.”

“Nobody’s immune,” Dungs says.

In the week since the game’s release, the Particle Clicker page has had more than 50,000 unique visitors. “It’s completely bananas,” Dungs says.

The game’s code is open-source and available on code-sharing site GitHub. Its creators are still working with volunteers at CERN to make the game more educational—and with volunteers through GitHub and Reddit to make it even more addictive. Click at your own risk.

Like what you see? Sign up for a free subscription to symmetry!

### Clifford V. Johnson - Asymptotia

Perseids, Meet Supermoon!
So tonight (meaning the wee hours of Monday morning and the next few mornings, for optimum viewing - more civilised hours might work too, of course) the Perseid meteor shower will be on view! Have a look at this site (picked at random; there are many more) for more about how to view the meteors, in case you're not sure. Well, here's an interesting thing. The moon will be at its brightest as well, so that'll mean that the viewing conditions for meteors will not be ideal, unfortunately. And it really will be extra bright (well, slightly, to be honest) because tonight's full moon is during the moon's closest approach to [...] Click to continue reading this post

## August 10, 2014

### Andrew Jaffe - Leaves on the Line

Around Asia in search of a meal

I’m recently back from my mammoth trip through Asia (though in fact I’m up in Edinburgh as I write this, visiting as a fellow of the Higgs Centre For Theoretical Physics).

I’ve already written a little about the middle week of my voyage, observing at the James Clerk Maxwell Telescope, and I hope to get back to that soon — at least to post some pictures of and from Mauna Kea. But even more than telescopes, or mountains, or spectacular vistas, I seem to have spent much of the trip thinking about and eating food. (Even at the telescope, food was important — and the chefs at Halu Pohaku do some amazing things for us sleep-deprived astronomers, though I was too tired to record it except as a vague memory.) But down at sea level, I ate some amazing meals.

When I first arrived in Taipei, my old colleague Proty Wu picked me up at the airport, and took me to meet my fellow speakers and other Taiwanese astronomers at the amazing Din Tai Fung, a world-famous chain of dumpling restaurants. (There are branches in North America but alas none in the UK.) As a scientist, I particularly appreciated the clean room they use to prepare the dumplings to their exacting standards:

Later in the week, a few of us went to a branch of another famous Taipei-based chain, Shin Yeh, for a somewhat traditional Taiwanese restaurant meal. It was amazing, and I wish I could remember some of the specifics. Alas, I’ve only recorded the aftermath:

From Taipei, I was off to Hawaii. Before and after my observing trip, I spent a few days in Honolulu, where I managed to find a nice plate of sushi at Doraku — good, but not too much better than I’ve had in London or New York, despite the proximity to Japan.

From Hawaii, I had to fly back for a transfer in Taipei, where I was happy to find plenty more dumplings (as well as pleasantly sweet Taiwanese pineapple cake). Certainly some of the best airport food I’ve had (for the record, my other favourites are sausages in Munich, and sushi at the Ebisu counter at San Francisco):

From there, my last stop was 40 hours in Beijing. Much more to say about that visit, but the culinary part of the trip had a couple of highlights. After a morning spent wandering around the Forbidden City (aka the Palace Museum), I was getting tired and hungry. I tried to find Tian Di Yi Jia, supposedly “An Incredible Imperial-Style Restaurant”. Alas, some combination of not having a website, not having Roman-lettered signs, and the likelihood that it had closed down meant that an hour wandering Beijing’s streets was in vain. Instead, I ended up at this hole in the wall:

And was very happy indeed, in particular with the amazing slithery, tangy eggplant:

That night, I ended up at The Grandma’s, an outpost of yet another chain, seemingly a different chain than Grandma’s Kitchen, which apparently serves American food. Definitely not American food. Note especially the “thousand-year egg” at left (I was happy to see from Wikipedia that the idea they’re cured in horse urine is only a myth!):

It was a very tasty trip. I think there was science, too.

### Lubos Motl - string vacua and pheno

Kaggle Higgs: approaching 3.85
If you follow the preliminary leaderboard of the Higgs ATLAS Kaggle contest, where 1,288 teams from various places of planet Earth are competing, you may have noticed that I have invited Christian Veelken of CERN to join my team. He kindly agreed. I believe he is one of the best programmers reconstructing Higgs properties from the tau-tau decays on the CMS team, the other big collaboration at CERN aside from ATLAS (whose folks organize the competition).

The current decision is that, so far, the viable scores were obtained predominantly by me, so I own 90% of the team, which is enough not to ask the minority shareholders whether they like the name of the team. ;-) Of course, that may change in the future. My belief is that the relative importance of members of such a team has to be based on the preliminary scores and the contributions to the high ones. It's not a perfect way to rate things, but it's better than all others, for reasons I could explain. This question is analogous to the question of whether managers' incomes in companies should depend on the profits, revenues, and the stock price. Even though there are risks and things can go wrong, I would answer yes, because this arrangement, rooted in imperfect yet measurable data, at least guarantees some correlation between the salary and the future of the company, and some motivation for the manager to fundamentally improve things.

For the first time in human history, Christian has applied the CMS methods for evaluating these tau-tau decays (SVFIT) to the ATLAS data, the data of his intra-CERN competitors. It works. So far, it doesn't produce detectable improvements in the AMS score by itself (or in combination with the ATLAS methods): SVFIT, although more sophisticated, behaves almost identically to ATLAS' MMC. Christian has some really professional ideas about what to do, and I also believe that if they fail to produce high scores, he will help me professionalize the codes that I used to get where I/we seem to be because, as you can imagine, the codes have become messy.
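For readers unfamiliar with the metric: the AMS ("approximate median significance") that ranks the leaderboard is, if I recall the challenge documentation correctly, a simple function of the weighted signal and background counts surviving one's selection, with a regularization term of 10. A minimal sketch (the example counts below are made up):

```python
import math

def ams(s, b, b_reg=10.0):
    """Approximate Median Significance, the HiggsML contest metric.

    s: weighted sum of selected true signal events
    b: weighted sum of selected background events
    b_reg: regularization term (10 in the contest rules)
    """
    return math.sqrt(2.0 * ((s + b + b_reg) * math.log(1.0 + s / (b + b_reg)) - s))

# AMS for a hypothetical selection keeping s = 500 signal, b = 15000 background:
print(round(ams(500.0, 15000.0), 3))
```

In the relevant regime (s much smaller than b), this is close to the familiar $$s/\sqrt{b}$$, which is why shaving off a little background can move the score noticeably.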

Meanwhile, however, I kept on improving the score. Our best one currently stands at 3.83674, just 0.014 below the current leader, Gábor Melis. That's exactly equal to my last improvement, and I got two of them in the last 24 hours, so feel free to estimate how much time it should take to take over. ;-)

There have been moments when my mood was one of resignation. It seemed impossible to reach the heights of the leaders and the progress was so slow (my jumping up by 1 place ahead of the marijuana guy is the only change in the top 16 during the last week). One simply couldn't have thought about beating Melis, Salimans, or even the marijuana guy – bright kids and men with years of experience in manipulating similar data and doing machine learning.

Without much kidding, my life's only experience with manipulating "big data" was the conversion of 80,000 Echo comments on this blog to the DISQUS platform when Echo went out of business three years ago or so.

But the mood is very different now. It seems that I can add 0.01 to the score more easily than I can prepare coffee. It's almost as easy as writing +1-1 at the end of a command y=f(x). ;-) Well, not quite, but it is almost mechanically straightforward and it has repeatedly (though not quite always) worked.

One of the proprietary ideas that I've been fond of from the beginning, and that I had made more viable by refining certain functions, became even more effective when I realized which other conditions of the evaluation are probably needed for the proprietary idea to become truly efficient, to show its muscles.

Because this explanation seems to be justified by some abstract theoretical thinking as well as real-world empirical data, I will probably automate the system and try to prepare a submission, without self-evident fine-tuning, that could produce a very high score immediately.

Now it even seems plausible to me that even the final scores – which will be computed from 2 submissions per team compared against 450k test events not included in the 100k test events that are the basis of the preliminary leaderboard – could exceed 3.8, so that I will lose a \$100 bet. But it's too early to tell. The bet is as open as it can get. Note that the "best score per team" is almost certainly an overestimate of the final score because the preliminary AMS scores contain some noise with a standard deviation of 0.08. So with 300+ submissions, like mine, the best preliminary score could actually be up to a 3-sigma, i.e. 0.24, overestimate of the genuine score. There are some reasons to think that the overestimates aren't this brutal, but I don't want to go into technicalities that are partly speculative, anyway.
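The size of that selection effect is easy to check with a quick Monte Carlo. The sketch below assumes, purely for illustration, that every submission has the same true AMS and that each public score is the true score plus Gaussian noise with the 0.08 standard deviation quoted above; real submissions differ in quality, so this is an upper bound on the effect:

```python
import random

random.seed(0)
TRUE_SCORE = 3.70      # hypothetical true AMS of every submission
SIGMA = 0.08           # noise quoted for the preliminary 100k-event set
N_SUBMISSIONS = 300
N_TRIALS = 2000

# In each trial, take the best of 300 noisy scores and record how far
# it overshoots the true score.
overshoots = []
for _ in range(N_TRIALS):
    best = max(random.gauss(TRUE_SCORE, SIGMA) for _ in range(N_SUBMISSIONS))
    overshoots.append(best - TRUE_SCORE)

mean_overshoot = sum(overshoots) / N_TRIALS
print(round(mean_overshoot, 3))
```

The typical overshoot of the best-of-300 score comes out in the neighborhood of 2.5–3 standard deviations, consistent with the rough estimate in the post.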

## August 09, 2014

### ZapperZ - Physics and Physicists

Data Analysis App
A while back, I asked if anyone had a suggestion for the best physics apps that are available for mobile devices. I've been mostly using my iPad when I am away from home, ditching my travel laptop. It has worked rather well for me. The only thing that I miss is that I don't have my usual data analysis/graphing software that I often use. I use Origin on my laptop/desktop to analyze, plot, and produce publication-quality graphs. I don't intend to do such extensive work on my iPad, but I do need a quick and dirty way to enter or import data, plot it, and do some rudimentary analysis on it. At the very least, it must be able to do some simple data-fitting and produce a decent-enough graph that I can e-mail to my collaborators.

After looking around for a bit, and after trying this one out for the past month, I think I found a very nice app that does just the thing that I was looking for. The app is called "DataAnalysis". You can find it in the Apple App Store, and I don't know if it has a version on Android. I don't work for the company and get nothing for recommending this app (darn it!), so this is an unsolicited recommendation.

The app is easy enough to use, though it has links to a couple of YouTube tutorials if you need them. You can either import ASCII text data or create your own data in an empty data sheet. The data are in a simple two-column format, space-separated (don't use commas or it'll complain!). Once you have your data, you can easily plot it.

You then have the option of doing some simple data analysis. It has a number of built-in mathematical expressions that you can fit your data to. For an undergraduate student in science and engineering, this feature should be sufficient for most cases.
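As a hypothetical illustration of that workflow: the snippet below writes a file in the two-column, space-separated layout described above (no commas), then does a straight-line least-squares fit of the kind the app's built-in expressions perform. The file name and the noisy test data are my own inventions, not anything shipped with the app:

```python
import random

# Generate data with a known trend: y = 2.5*x + 1 plus a little noise.
random.seed(1)
xs = [0.5 * i for i in range(20)]
ys = [2.5 * x + 1.0 + random.gauss(0, 0.2) for x in xs]

# Two columns, space-separated, one point per line -- the import format
# the app expects.
with open("data.txt", "w") as f:
    for x, y in zip(xs, ys):
        f.write(f"{x} {y}\n")

# Ordinary least-squares fit of a straight line to the data.
n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n
print(round(slope, 2), round(intercept, 2))
```

The recovered slope and intercept land close to the true 2.5 and 1.0, which is about all one needs from a quick-and-dirty fit to e-mail to collaborators.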

It has a limited number of customization options for your graphs. I don't expect to produce a publication-quality graph using this app, but it is good enough for me to send a graph to my collaborators. Having the ability to save and/or send graphics/PDF of the data easily is an important feature that I require, and this app does that.

The one major drawback that I see with this app is the inability (at least, I couldn't find how to do it, if the capability exists) to plot more than one set of data on the same graph. Right now, all I can do is give a set of x and y values; I can't give a set of x and then sets of y1, y2, etc. values. It would be a nice feature to be able to plot more than one data set on a single graph. It can't be that difficult a feature to add.

Otherwise, this is a very useful app on the go and it does what I need it to do.

Zz.

## August 08, 2014

### The Great Beyond - Nature blog

Geneticists say popular book misrepresents research on human evolution

Courtesy of Penguin Press

More than 130 leading population geneticists have condemned a book arguing that genetic variation between human populations could underlie global economic, political and social differences.

A Troublesome Inheritance, by science journalist Nicholas Wade, was published in June by Penguin Press in New York. The 278-page work garnered widespread criticism, much of it from scientists, for suggesting that genetic differences (rather than culture) explain, for instance, why Western governments are more stable than those in African countries. Wade is a former staff reporter and editor at the New York Times, Science and Nature.

But the letter — signed by a who’s-who of researchers in population genetics and human evolution, and published in the 10 August issue of the New York Times — represents a rare unified statement from scientists in the field and includes many whose work was cited by Wade. “It’s just a measure of how unified people are in their disdain for what was done with the field,” says Michael Eisen, a geneticist at the University of California, Berkeley, who helped to draft the letter.

“Wade juxtaposes an incomplete and inaccurate explanation of our research on human genetic differences with speculation that recent natural selection has led to worldwide differences in IQ test results, political institutions and economic development. We reject Wade’s implication that our findings substantiate his guesswork. They do not,” write the authors of the letter, which is a response to a critical review of the book published in the New York Times.

“This letter is driven by politics, not science,” Wade said in a statement. “I am confident that most of the signatories have not read my book and are responding to a slanted summary devised by the organizers.”

Wade added that he had asked the letter’s authors — Eisen and Graham Coop, a population geneticist at the University of California, Davis — for a list of errors so that he could correct future editions of the book. According to Wade, Coop did not reply and Eisen promised a response but has yet to deliver one.

The book, Wade said, “argues that opposition to racism should be based on principle, not on the anti-evolutionary myth that there is no biological basis to race”.

Coop says the idea for the letter emerged over discussions at conferences. “There was a strong feeling that we as a community needed to respond,” he says. Like many of the signers, Coop is not pleased about how his research was explained by Wade.

The first portion of the book summarizes recent research in human population genetics, to support the author’s argument that geographically defined ‘races’ are supported by patterns of genetic variation, and that the different environments encountered by these groups led to genetic adaptations after humans left Africa more than 50,000 years ago — such as lighter skin or the ability to digest milk sugar (lactose) into adulthood.

For instance, in making the argument that populations outside of Africa experienced more evolutionary adaptations known as ‘selective sweeps’ than Africans did, Wade quotes a 2002 paper by Coop, in which his team wrote: “A plausible explanation is that humans experienced many novel selective pressures as they spread out of Africa into new habitats and cooler climates … Hence there may have been more sustained selective pressure on non-Africans for novel phenotypes.”

But Coop notes that Wade omitted key caveats, including the statement that African populations may have actually experienced more selective sweeps than non-Africans, but which the researchers missed for technical reasons. “While Wade is obviously welcome to choose his quotes and observations, he consistently seems to ignore the caveats and cautions people lay out in their papers when they do not suit his ends,” Coop says.

Sarah Tishkoff, a population geneticist at the University of Pennsylvania in Philadelphia who studies human variation in Africa, finds the book’s efforts to explain race in genetic terms to be problematic.

Wade cites a study from another team that analysed genome data from 1,056 humans from around the world using a computer program that divides people into clusters on the basis of their genetic similarity. If the researchers instructed the program to put people into five clusters, the assignments corresponded to continental groups — Africa, East Asia, Europe and the Middle East, the Americas and the Pacific Islands. Wade cited that study and others as evidence for the existence of five human races.

But Tishkoff says that the five clusters are somewhat arbitrary. In a 2009 study that included numerous African populations, her team found that 14 clusters (most of them composed of Africans) were a better explanation for global genetic diversity. “You may see that individuals cluster by major geographic regions. The problem is, there are no firm boundaries,” she says.

Tishkoff also acknowledges that natural selection has created biological differences that vary with geography. For example, her team discovered mutations that allow some African populations to digest lactose. But she scoffs at the idea, proposed by Wade, that natural selection has shaped cognitive and behavioural differences between populations around the world. “We don’t have any strong candidates for playing a role in behaviour,” she says.

But she and the other letter signers are most riled by what they feel is Wade’s contention that his book is an objective account of their research. “He’s claiming to be a spokesperson for the science and, no, he’s not,” she says.

### Symmetrybreaking - Fermilab/SLAC

A team of scientists generated a giant cosmic simulation—and now they're giving it away.

A small team of astrophysicists and computer scientists have created some of the highest-resolution snapshots yet of a cyber version of our own cosmos. Called the Dark Sky Simulations, they’re among a handful of recent simulations that use more than 1 trillion virtual particles as stand-ins for all the dark matter that scientists think our universe contains.

They’re also the first trillion-particle simulations to be made publicly available, not only to other astrophysicists and cosmologists to use for their own research, but to everyone. The Dark Sky Simulations can now be accessed through a visualization program in coLaboratory, a newly announced tool created by Google and Project Jupyter that allows multiple people to analyze data at the same time.

To make such a giant simulation, the collaboration needed time on a supercomputer. Despite fierce competition, the group won 80 million computing hours on Oak Ridge National Laboratory’s Titan through the Department of Energy’s 2014 INCITE program.

In mid-April, the group turned Titan loose. For more than 33 hours, they used two-thirds of one of the world’s largest and fastest supercomputers to direct a trillion virtual particles to follow the laws of gravity as translated to computer code, set in a universe that expanded the way cosmologists believe ours has for the past 13.7 billion years.
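The "laws of gravity as translated to computer code" reduce, at their most naive, to pairwise Newtonian attraction integrated in small time steps. Here is a toy direct-summation sketch of one such step; production codes like Warren's use tree algorithms and vastly more particles, and every number below (particle count, softening length, time step, units) is illustrative only:

```python
import random

G = 1.0           # gravitational constant in code units
SOFTENING = 0.05  # avoids singular forces at zero separation
DT = 0.01         # time step in code units

random.seed(0)
N = 50
pos = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(N)]
vel = [[0.0, 0.0, 0.0] for _ in range(N)]
mass = [1.0 / N] * N

def accelerations(pos):
    """Direct O(N^2) sum of softened Newtonian accelerations."""
    acc = [[0.0, 0.0, 0.0] for _ in range(N)]
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            dx = [pos[j][k] - pos[i][k] for k in range(3)]
            r2 = sum(d * d for d in dx) + SOFTENING ** 2
            inv_r3 = r2 ** -1.5
            for k in range(3):
                acc[i][k] += G * mass[j] * dx[k] * inv_r3
    return acc

# One kick-drift-kick (leapfrog) step, the standard N-body integrator.
acc = accelerations(pos)
for i in range(N):
    for k in range(3):
        vel[i][k] += 0.5 * DT * acc[i][k]   # half kick
        pos[i][k] += DT * vel[i][k]         # full drift
acc = accelerations(pos)
for i in range(N):
    for k in range(3):
        vel[i][k] += 0.5 * DT * acc[i][k]   # closing half kick
```

The direct sum costs O(N²) per step, which is why a trillion-particle run needs tree or mesh methods that bring the cost down to roughly O(N log N), plus a supercomputer like Titan.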

“This simulation ran continuously for almost two days, and then it was done,” says Michael Warren, a scientist in the Theoretical Astrophysics Group at Los Alamos National Laboratory. Warren has been working on the code underlying the simulations for two decades. “I haven’t worked that hard since I was a grad student.”

Back in his grad school days, Warren says, simulations with millions of particles were considered cutting-edge. But as computing power increased, so did particle counts. “They were doubling every 18 months. We essentially kept pace with Moore’s Law.”

When planning such a simulation, scientists make two primary choices: the volume of space to simulate and the number of particles to use. The more particles added to a given volume, the smaller the objects that can be simulated—but the more processing power needed to do it.

Current galaxy surveys such as the Dark Energy Survey are mapping out large volumes of space but also discovering small objects. The under-construction Large Synoptic Survey Telescope “will map half the sky and can detect a galaxy like our own up to 7 billion years in the past,” says Risa Wechsler of the Kavli Institute for Particle Astrophysics and Cosmology (KIPAC), who also worked on the simulation. “We wanted to create a simulation that a survey like LSST would be able to compare their observations against.”

The time the group was awarded on Titan made it possible for them to run something of a Goldilocks simulation, says Sam Skillman, a postdoctoral researcher with the Kavli Institute for Particle Astrophysics and Cosmology, a joint institute of Stanford and SLAC National Accelerator Laboratory. “We could model a very large volume of the universe, but still have enough resolution to follow the growth of clusters of galaxies.”

The end result of the mid-April run was 500 trillion bytes of simulation data. Then it was time for the team to fulfill the second half of their proposal: They had to give it away.

They started with 55 trillion bytes of that data: Skillman, Warren and Matt Turk of the National Center for Supercomputing Applications spent the next 10 weeks building a way for researchers to identify just the interesting bits — no pun intended — and use them for further study, all through the Web.

“The main goal was to create a cutting-edge data set that’s easily accessed by observers and theorists,” says Daniel Holz from the University of Chicago. He and Paul Sutter of the Paris Institute of Astrophysics, helped to ensure the simulation was based on the latest astrophysical data. “We wanted to make sure anyone can access this data—data from one of the largest and most sophisticated cosmological simulations ever run—via their laptop.”

Like what you see? Sign up for a free subscription to symmetry!

### The Great Beyond - Nature blog

Ban all ivory sales for 10 years, says conservationist

The international community should ban all sales of ivory — including seized tusks and antique pieces that were created when trade was legal — for at least 10 years, argues a peer-reviewed essay published today in Conservation Biology. Without such measures, epidemic corruption and high demand will ruin attempts to save African elephants, the author says.

The article comes from Elizabeth Bennett, who is vice president for species conservation at the Wildlife Conservation Society (WCS), a non-profit organization based in New York. The WCS has previously voiced opposition to some legal ivory markets, but Bennett told Nature, “This is not a fundamentalist stand that we believe ivory should never be sold”.

She added, “Under current conditions and lack of controls, closing all markets for at least 10 years and after that until poaching no longer threatens wild populations is the only way to get the situation under control and give a break to the elephants.”

Ivory seized in the United States and destroyed in 2013.
Kate Miyamoto / USFWS.

Conservationists have long complained that legal markets, which exist across the globe and can include sales of antique ivory pieces or new carvings of ivory sold legally from stockpiles, are used as a cover for ivory poached from Africa’s elephant herds. Concern has increased as poaching has recently surged in Africa. If a vendor is allowed to trade ivory, it can be difficult to determine whether a given product is actually from a legal source or has been poached and then integrated into the legal market.

But legal markets in other countries have also come under increased scrutiny lately, with New Jersey state banning all trade in elephant ivory and rhino horn this month.

Some countries, including the United States and China, periodically destroy stockpiles of seized ivory to avoid fuelling the growing demand. However, some African states are known to be keen to keep limited legal sales, especially of the large amounts of illegal ivory they have seized. Supporters of such ‘one-off sales’ say they can reduce pressure on wild elephants by flooding the market.

In her article, Bennett says legal markets cannot be tolerated because of the level of corruption among the government officials in charge of them. She points out that six of the eight countries identified as the world’s leading offenders in global ivory trafficking are in the bottom half of the corruption league table drawn up by Transparency International, and that six of the 12 countries in Africa that have elephant populations are also in that half.

“If we are to conserve remaining wild populations, we must close all markets because, under current levels of corruption, they cannot be controlled in a way that does not provide opportunities for illegal ivory being laundered into legal markets,” she writes.

### Tommaso Dorigo - Scientificblogging

Status Of The Higgs Challenge
As I reported a couple of times in the course of the last three months, the ATLAS experiment (one of the two all-purpose experiments at the CERN Large Hadron Collider) has launched a challenge to data analyzers around the world. The task is to correctly classify as many Higgs boson decays to tau lepton pairs as possible, separating them from all competing backgrounds. Those of you who are not familiar with the search for the Higgs boson may wonder what the above means, so here is a crash course on that topic.

Crash course on the Higgs and its decays to tau leptons

### Clifford V. Johnson - Asymptotia

Aspen Art Museum Opening
I just got back from the Aspen Art Museum's new building. They've been having a members-only series of nights before the big opening to the public in a few days, and an invitation was sent along to Aspen Center for Physics people to come along, and so (of course) I did. It was a nice thing to do at the end of a day of working on revising drafts of two papers, before settling down to a nice dinner of squash, green beans, tomatoes, and lemon-pepper pasta that I made, all from the Saturday Farmers' Market. But I digress. Let me say right at the outset that the building is fantastic. There will no doubt be arguments back and forth about the suitability of the building for the town, and so forth (and there have been), but as a space for both art and community (and to my mind, those should go together in a city's main art space) it is simply [...] Click to continue reading this post

### Lubos Motl - string vacua and pheno

Sum rule constraint on BSM models
Guest blog by Paul Frampton, paper by PF and Thomas Kephart

It is good to be back after my unexpected sabbatical of 2 years and 4 months in South America. During that time the BEH scalar boson (called $$H$$) was discovered, on July 4th, 2012, at the LHC by both the CMS and ATLAS groups. The subsequent experimental study of the production and decay of $$H$$ provides particle phenomenology with the first really new data in decades. Physicists who are less than 50 years old cannot remember the excitement in particle phenomenology of the 1970s. In the 1980s, 1990s and 2000s, which included the important discoveries of the $$W^\pm$$ and $$Z^0$$ and of the top quark, the interplay between theory and experiment was nevertheless less exciting than in the 1970s. Now the study of $$H$$ again is.

In this paper, two assumptions are made:
1. The masses of the fermions arise entirely from their Yukawa couplings.
2. The mass of $$W^\pm$$ arises entirely from the BEH mechanism.

Both of these assumptions are implicit in the standard model, so if either is violated, there is already new physics to understand.

With these two assumptions we (my coauthor is Tom Kephart) derive a sum rule which must be satisfied by the Yukawa coupling constants. It states that the squares of the ratios of the standard-model Yukawa couplings to their measured values must sum to one. This sum rule has several immediate consequences.
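A sketch of where a rule of this form comes from, in my own notation (a reconstruction from the two assumptions above, not necessarily the paper's derivation): if several doublets have vevs $$v_i$$, assumption 2 fixes $$M_W^2 = \frac{g^2}{4}\sum_i v_i^2 \equiv \frac{g^2 v^2}{4}$$, so that $$\sum_i (v_i/v)^2 = 1$$. If fermion $$f$$ gets its mass from doublet $$i(f)$$ alone (assumption 1), then $$m_f = y_f\, v_{i(f)}/\sqrt{2}$$, while the standard-model value is $$y_f^{SM} = \sqrt{2}\, m_f/v$$; hence $$y_f^{SM}/y_f = v_{i(f)}/v$$, and the vev constraint becomes $$\sum \left(y_f^{SM}/y_f\right)^2 = 1$$, with one term per doublet. Since each term is at most one, every measured Yukawa must satisfy $$y_f \geq y_f^{SM}$$.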

The partial decay rates for the decays $$H\to b+b$$ and $$H \to \tau + \tau$$ (one of the decay products has a bar over it) cannot be less than the corresponding rates in the standard model. The reason is simple to explain. If a Yukawa coupling were smaller, the corresponding vacuum expectation value would have to be bigger, but that gives too large a $$W$$ mass by the BEH mechanism and hence is disallowed, because the $$W$$ mass is known to an accuracy better than 0.01%.

In beyond-the-standard-model (BSM) theories with two distinct scalar doublets coupling respectively to the top, and to the bottom and tau, such as the MSSM, the sum rule constrains $$\tan\beta$$ to be less than one, quite different from what is often assumed in fits. Although the MSSM was already on life support before this work, I would dare to say that the plug is now pulled half-way out of the socket. BSM theories like Peccei-Quinn and the 2HDMs are likewise constrained by the sum rule.

There are BSM models where three distinct scalar doublets couple to the top, bottom and tau. These include theories with global flavor symmetries, among them several of my group's old models. Here the sum rule is even more exacting and almost no model of this type can survive at 3 sigma.

Regarding the MSSM, the supersymmetry community is very clever and no doubt a generalization of MSSM will be constructed which can satisfy the new sum rule even with the higher accuracy data expected from the LHC. But it will be challenging.

More generally, the sum rule means that constructing any viable theory beyond the standard model becomes more difficult and that is obviously a good thing.

So that's my guest blog, Lubos.

With my best regards as always,
Paul

## August 07, 2014

### Symmetrybreaking - Fermilab/SLAC

Science on demand

Brian Greene welcomes the Internet to physics class with World Science U.

Professor, author and string-theorist-about-town Brian Greene wants to expand the ways we learn about science. Greene, the author of popular physics books such as The Elegant Universe and the host of several science specials on PBS, recently led the creation of a free online learning hub called World Science U.

The site offers video courses on topics such as special relativity and quantum mechanics. It is a spin-off of the World Science Festival, an annual event run by a nonprofit organization Greene founded in New York.

Greene says he’s been thinking about digital education since the 1990s, when he moved from Cornell University to Columbia University but wanted to continue teaching his former students.

“I was doing video-conferenced courses way back then, when the technology really couldn’t support all that we needed,” Greene says. “And now there’s a huge opportunity to leverage the technical prowess we have to create a new type of educational experience.”

The challenge has been finding the resources to make the most use of the available tools, he says.

“It’s expensive to make animations,” Greene says. “It’s hugely time-consuming to make computer-based interactive demonstrations. But if you’re not just reaching 30 kids in your own classroom, when you’re creating things that can reach hundreds of thousands of people, then the investment becomes worth it.”

The World Science U site is currently populated with WSU’s pilot offerings, including a short course and a longer, university-level course, plus more than 500 “Science Unplugged” clips, basic explainers with Greene candidly answering frequently asked scientific questions in about a minute or less each.

Some of the classes require no math knowledge at all; others require high-school-level calculus and physics.

The WSU website went live earlier this year. Since March, about 130,000 people have signed up to access the videos and courses, according to WSU’s tally. And the total number of views for the “Science Unplugged” clips is just shy of 1 million.

WSU has worked with 10 institutions, both domestic and international, to offer college credit when an onsite instructor teaches a class using WSU material. These include Greene’s own Columbia, as well as Duke University, the University of California, Santa Barbara, the University of Cape Town and others, says Kadi Hughes, special projects manager at World Science Festival.

Classes are taught in a “flipped” format, with the students watching videos of lectures as homework and then working through problem sets with the professor during class time.

Ronen Plesser, Duke University physics and mathematics professor, taught a for-credit parallel section of “Special Relativity” to his students in fall 2013.

“We added this to the catalog a week before the semester started,” he says. “I was gratified that the class was full to its capacity of 15 students.”

Plesser had his own experiences with digital learning; Duke University produced an online astronomy course he developed for another free online learning site, Coursera.

“The students liked the flipped format and enjoyed the recorded materials,” Plesser says. “The availability of animations and interactive demonstrations is very helpful. Students are more likely to watch the video than to read a chapter assigned in a book.”

Professor David Kagan at the University of Massachusetts Dartmouth taught another on-campus section of the special relativity course. One of his students, Alex Grube, a junior computer engineering major, says taking the WSU special relativity course influenced his decision to add a physics minor to his studies.

“It feels much more immersive to see Brian Greene explaining advanced concepts in a video than simply reading assigned text,” Grube says. “And the weekly discussion periods with Professor Kagan made the course go from good to fantastic.”

This fall, WSU will expand its teaching staff to include other well-known scientists. MIT theoretical physicist and cosmologist Alan Guth will explain cosmic inflation, a theory he developed that made headlines earlier this year with preliminary support from observations made by the BICEP2 experiment.

WSU will eventually offer course content in other scientific disciplines beyond physics, Greene says. The plan is to offer future classes in biology, chemistry, astronomy and mathematics, taught by luminary instructors from these fields.

Like what you see? Sign up for a free subscription to symmetry!

## August 06, 2014

### Quantum Diaries

Hidden gender bias still influences physics field

Yale University astrophysicist Meg Urry spoke about gender bias in science at the July 30 Fermilab Colloquium. Photo: Lauren Biron

Both men and women need to improve how they evaluate women in the sciences to help eliminate bias, says Meg Urry, who spoke at last week’s Fermilab Colloquium. People of either gender fall victim to unconscious prejudices that affect who succeeds, particularly in physics.

“Less than 20 percent of the Ph.D.s in physics go to women,” Urry noted, a figure that has barely crept up even while fields such as medicine have approached parity.

Urry, a professor at Yale University and president of the American Astronomical Society, unleashed a torrent of studies demonstrating bias during her talk, “Women in Physics: Why So Few? And How to Move Toward Normal.”

In one example, letters of recommendation for men were more likely to include powerful adjectives and contain specifics, while those for women were often shorter, included hints of doubt or made explicit mention of gender.

Another study found that for jobs perceived as masculine, both men and women tended to award the position to the male candidate even when the female candidate was the more qualified one.

Other data showed that women are less likely to be perceived as the leader in mixed-gender scenarios, Urry said. When small numbers of women are present, they can become an “other” that stands in for the whole gender, magnifying perceived mistakes and potentially confirming a bias that women are less proficient in physics.

“You need a large enough group that people stop thinking of them as the woman and start thinking of them as the scientist,” Urry said.

Urry advised the many young women in the audience to own their ambition, prep their elevator speeches, get male allies who will stand up if female voices are ignored, practice confidence and network. Above all, she said, work hard, do interesting work, and don’t be discouraged if things get rough.

Meanwhile, Urry said, leaders need to learn about bias, actively seek out diverse candidates rather than wait for applications, mentor women, and prevalidate them, for example when introducing a speaker.

Urry worked hard to debunk the myth that hiring more women means lowering the bar for diversity’s sake.

“When you hire a diverse group of scientists, you are improving your quality, not lowering your standards,” Urry said, echoing sentiments from her lunchtime talk with 40 women. “We should be aspiring to diversity of thought to enrich science.”

Lauren Biron

### CERN Bulletin

CERN Bulletin Issue No. 32-34/2014
Link to e-Bulletin Issue No. 32-34/2014. Link to all articles in this issue.