# Particle Physics Planet

## August 17, 2017

### Emily Lakdawalla - The Planetary Society Blog

Celebrating the 40th anniversaries of the Voyager launches
Sunday, August 20 marks the 40th anniversary of the launch of Voyager 2. Tuesday, September 5, will be the 40th anniversary for Voyager 1. Throughout the next three weeks, we'll be posting new and classic material in honor of the Voyagers. Here's a preview.

### Christian P. Robert - xi'an's og

Das Kapital [not a book review]

A rather bland article by Gareth Stedman Jones in Nature reminded me that the first volume of Karl Marx’ Das Kapital is 150 years old this year. Which makes it appear quite close in historical terms [just before the Franco-German war of 1870] and rather remote in scientific terms. I remember going painstakingly through the books in 1982 and 1983, mostly during weekly train trips between Paris and Caen, and not getting much out of it! Even with the help of a cartoon introduction I had received as a 1982 Xmas gift! I had no difficulty in reading the text per se, as opposed to my attempt at Kant’s Critique of Pure Reason the previous summer [along with the other attempt to windsurf!], as the discourse was definitely grounded in economics and not in philosophy. But the heavy prose did not deliver a convincing theory of the evolution of capitalism [and of its ineluctable demise]. While the fundamental argument of workers’ labour being an essential balance to investors’ capital for profitable production was clearly if extensively stated, the extrapolations on diminishing profits associated with decreasing labour input [and the resulting collapse] were murkier and sounded more ideological than scientific. Not that I claim any competence in the matter: my attempts at getting the concepts behind Marxist economics stopped at this point and I have not been seriously thinking about it since! But it still seems to me that the theory did not age very well, missing the increasing power of financial agents in running companies. And of course [unsurprisingly] the numerical revolution and its impact on the (des)organisation of work and the disintegration of the proletariat as Marx envisioned it. For instance, turning former workers into forced and poor entrepreneurs (Uber, anyone?!).
Not that the working conditions are particularly rosy for many, from a scarcity of low-skill jobs, to a nurtured competition between workers for existing jobs (leading to extremes like the scandalous zero hour contracts!), to minimum wages turned useless by the fragmentation of the working space and the explosion of housing costs in major cities, to the hopelessness of social democracies to get back some leverage on international companies…

Filed under: Statistics Tagged: book reviews, comics, Das Kapital, economics, Immanuel Kant, Karl Marx, London, Marxism, Nature, Paris, philosophy, political economics

### Peter Coles - In the Dark

Clearing Advice for Physics and Astronomy Applicants!

Today’s the day! This year’s A-level results are out today, Thursday 17th August, with the consequent scramble as students across the country try to confirm places at university. Good luck to all students everywhere waiting for your results. I hope they are what you expected!

For those of you who didn’t get the grades you needed, I have one piece of very clear advice:

The clearing system is very efficient and effective, as well as being quite straightforward to use, and there’s still every chance that you will find a place somewhere good. So keep a cool head and follow the instructions. You won’t have to make a decision straight away, and there’s plenty of time to explore all the options.

As a matter of fact there are a few places still left for various courses in the School of Physics & Astronomy at Cardiff University. Why should you choose Cardiff? Well, obviously I have a vested interest since I work here, but here’s a video of some students talking about the School.

For further information check here!

### John Baez - Azimuth

Complex Adaptive System Design (Part 3)

It’s been a long time since I’ve blogged about the Complex Adaptive System Composition and Design Environment or CASCADE project run by John Paschkewitz. For a reminder, read these:

Complex adaptive system design (part 1), Azimuth, 2 October 2016.

Complex adaptive system design (part 2), Azimuth, 18 October 2016.

A lot has happened since then, and I want to explain it.

I’m working with Metron Scientific Solutions to develop new techniques for designing complex networks.

The particular problem we began cutting our teeth on is a search and rescue mission where a bunch of boats, planes and drones have to locate and save people who fall overboard during a boat race in the Caribbean Sea. Subsequently the Metron team expanded the scope to other search and rescue tasks. But the real goal is to develop very generally applicable new ideas on designing and ‘tasking’ networks of mobile agents—that is, designing these networks and telling the agents what to do.

We’re using the mathematics of ‘operads’, in part because Spivak’s work on operads has drawn a lot of attention and raised a lot of hopes:

An operad is a bunch of operations for sticking together smaller things to create bigger ones—I’ll explain this in detail later, but that’s the core idea. Spivak described some specific operads called ‘operads of wiring diagrams’ and illustrated some of their potential applications. But when we got going on our project, we wound up using a different class of operads, which I’ll call ‘network operads’.

Here’s our dream, which we’re still trying to make into a reality:

Network operads should make it easy to build a big network from smaller ones and have every agent know what to do. You should be able to ‘slap together’ a network, throwing in more agents and more links between them, and automatically have it do something reasonable. This should be more flexible than an approach where you need to know ahead of time exactly how many agents you have, and how they’re connected, before you can tell them what to do.

You don’t want a network to malfunction horribly because you forgot to hook it up correctly. You want to focus your attention on optimizing the network, not getting it to work at all. And you want everything to work so smoothly that it’s easy for the network to adapt to changing conditions.

To achieve this we’re using network operads, which are certain special ‘typed operads’. So before getting into the details of our approach, I should say a bit about typed operads. And I think that will be enough for today’s post: I don’t want to overwhelm you with too much information at once.

In general, a ‘typed operad’ describes ways of sticking together things of various types to get new things of various types. An ‘algebra’ of the operad gives a particular specification of these things and the results of sticking them together. For now I’ll skip the full definition of a typed operad and only highlight the most important features. A typed operad $O$ has:

• a set $T$ of types.

• collections of operations $O(t_1,...,t_n ; t)$ where $t_i, t \in T$. Here $t_1, \dots, t_n$ are the types of the inputs, while $t$ is the type of the output.

• ways to compose operations. Given an operation
$f \in O(t_1,\dots,t_n ;t)$ and $n$ operations

$g_1 \in O(t_{11},\dots,t_{1 k_1}; t_1),\dots, g_n \in O(t_{n1},\dots,t_{n k_n};t_n)$

we can compose them to get

$f \circ (g_1,\dots,g_n) \in O(t_{11}, \dots, t_{nk_n};t)$

These must obey some rules.

But if you haven’t seen operads before, you’re probably reeling in horror—so I need to rush in and save you by showing you the all-important pictures that help explain what’s going on!

First of all, you should visualize an operation $f \in O(t_1, \dots, t_n; t)$ as a little gizmo like this:

It has $n$ inputs at top and one output at bottom. Each input, and the output, has a ‘type’ taken from the set $T.$ So, for example, if your operation takes two real numbers, adds them and spits out the closest integer, both input types would be ‘real’, while the output type would be ‘integer’.

The main thing we do with operations is compose them. Given an operation $f \in O(t_1,\dots,t_n ;t),$ we can compose it with $n$ operations

$g_1 \in O(t_{11},\dots,t_{1 k_1}; t_1), \quad \dots, \quad g_n \in O(t_{n1},\dots,t_{n k_n};t_n)$

by feeding their outputs into the inputs of $f,$ like this:

The result is an operation we call

$f \circ (g_1, \dots, g_n)$

Note that the input types of $f$ have to match the output types of the $g_i$ for this to work! This is the whole point of types: they forbid us from composing operations in ways that don’t make sense.

This avoids certain stupid mistakes. For example, you can take the square root of a positive number, but you may not want to take the square root of a negative number, and you definitely don’t want to take the square root of a hamburger. While you can land a plane on an airstrip, you probably don’t want to land a plane on a person.
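This typing discipline is easy to mimic in code. Here is a minimal Python sketch, entirely my own illustration (the class and names are not from the CASCADE project), of operations with typed inputs and outputs, where composition refuses a mismatch:

```python
# A toy model of typed operations and their composition. An operation
# records its input types and output type; composing checks the types.

class Operation:
    """An operation f in O(t_1, ..., t_n; t): n typed inputs, one typed output."""
    def __init__(self, name, input_types, output_type):
        self.name = name
        self.input_types = list(input_types)
        self.output_type = output_type

    def compose(self, gs):
        """Form f ∘ (g_1, ..., g_n), checking each g_i's output type."""
        if len(gs) != len(self.input_types):
            raise TypeError("need exactly one g_i per input of f")
        for expected, g in zip(self.input_types, gs):
            if g.output_type != expected:
                raise TypeError(
                    f"cannot feed {g.output_type} into an input of type {expected}")
        # The composite's inputs are all the inputs of the g_i, in order.
        inputs = [t for g in gs for t in g.input_types]
        label = f"{self.name}∘({','.join(g.name for g in gs)})"
        return Operation(label, inputs, self.output_type)

# The add-then-round example: both inputs 'real', output 'integer'.
add = Operation("add", ["real", "real"], "real")
rnd = Operation("round", ["real"], "integer")
composite = rnd.compose([add])
print(composite.input_types, "->", composite.output_type)
# ['real', 'real'] -> integer
```

Trying to feed an ‘integer’ output into a ‘real’-only input raises a `TypeError`, which is exactly the “no square roots of hamburgers” discipline.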

The operations in an operad are quite abstract: they aren’t really operating on anything. To render them concrete, we need another idea: operads have ‘algebras’.

An algebra $A$ of the operad $O$ specifies a set of things of each type $t \in T$ such that the operations of $O$ act on these sets. A bit more precisely, an algebra consists of:

• for each type $t \in T,$ a set $A(t)$ of things of type $t$

• an action of $O$ on $A,$ that is, a collection of maps

$\alpha : O(t_1,...,t_n ; t) \times A(t_1) \times \cdots \times A(t_n) \to A(t)$

obeying some rules.

In other words, an algebra turns each operation $f \in O(t_1,...,t_n ; t)$ into a function that eats things of types $t_1, \dots, t_n$ and spits out a thing of type $t.$
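Concretely, an algebra can be sketched as an assignment of an honest function to each abstract operation. Using the ‘add two reals and round to the nearest integer’ example from earlier (the code and names are my own illustration):

```python
# A toy algebra: A('real') is modeled by Python floats, A('integer')
# by Python ints, and the action alpha assigns a function to each
# abstract operation name.
action = {
    "add": lambda x, y: x + y,    # realizes O(real, real; real)
    "round": lambda x: round(x),  # realizes O(real; integer)
}

def act(op_name, *args):
    """The action alpha: apply the function the algebra assigns to op_name."""
    return action[op_name](*args)

print(act("round", act("add", 1.4, 2.3)))  # 4
```

A second, coarser algebra could assign different functions to the same operation names, which is the “many algebras for one operad” point made below.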

When we get to designing systems with operads, the fact that the same operad can have many algebras will be useful. Our operad will have operations describing abstractly how to hook up networks to form larger networks. An algebra will give a specific implementation of these operations. We can use one algebra that’s fairly fine-grained and detailed about what the operations actually do, and another that’s less detailed. There will then be a map from the first algebra to the second, called an ‘algebra homomorphism’, that forgets some fine-grained details.

There’s a lot more to say—all this is just the mathematical equivalent of clearing my throat before a speech—but I’ll stop here for now.

And as I do—since it also takes me time to stop talking—I should make it clear yet again that I haven’t even given the full definition of typed operads and their algebras! Besides the laws I didn’t write down, there’s other stuff I omitted. Most notably, there’s a way to permute the inputs of an operation in an operad, and operads have identity operations, one for each type.

To see the full definition of an ‘untyped’ operad, which is really an operad with just one type, go here:

They just call it an ‘operad’. Note that they first explain ‘non-symmetric operads’, where you can’t permute the inputs of operations, and then explain operads, where you can.

If you’re mathematically sophisticated, you can easily guess the laws obeyed by a typed operad just by looking at this article and inserting the missing types. You can also see the laws written down in Spivak’s paper, but with some different terminology: he calls types ‘objects’, he calls operations ‘morphisms’, and he calls typed operads ‘symmetric colored operads’—or once he gets going, just ‘operads’.

You can also see the definition of a typed operad in Section 2.1 here:

• Donald Yau, Operads of wiring diagrams.

What I would call a typed operad with $S$ as its set of types, he calls an ‘$S$-colored operad’.

I guess it’s already evident, but I’ll warn you that the terminology in this subject varies quite a lot from author to author: for example, a certain community calls typed operads ‘symmetric multicategories’. This is annoying at first but once you understand the subject it’s as ignorable as the fact that mathematicians have many different accents. The main thing to remember is that operads come in four main flavors, since they can either be typed or untyped, and they can either let you permute inputs or not. I’ll always be working with typed operads where you can permute inputs.

Finally, I’ll say that while the definition of operad looks lengthy and cumbersome at first, it becomes lean and elegant if you use more category theory.

Next time I’ll give you an example of an operad: the simplest ‘network operad’.

## August 16, 2017

### Christian P. Robert - xi'an's og

Berlin [and Vienna] noir [book review]

While in Cambridge last month, I picked a few books from a local bookstore as fodder for my incoming vacations. Including this omnibus volume made of the first three books by Philip Kerr featuring Bernie Gunther, a private and Reich detective in Nazi Germany, namely, March Violets (1989), The Pale Criminal (1990), and A German Requiem (1991). (A book that I actually read before the vacations!) The stories take place before the war, in 1938, and right after, in 1946, in Berlin and Vienna. The books centre on a German version of Philip Marlowe, wise cracks included, with various degrees of success. (There actually is a silly comparison with Chandler on the back of the book! And I found somewhere else a similarly inappropriate comparison with Graham Greene‘s The Third Man…) Although I read all three books in a single week, which clearly shows some undeniable addictive quality in the plots, I find those plots somewhat shallow and contrived, especially the second one revolving around a serial killer of young girls that aims at blaming Jews for those crimes and at justifying further Nazi persecutions. Or the time spent in Dachau by Bernie Gunther as undercover agent for Heydrich. If anything, the third volume, taking place in post-war Berlin and Wien, is much better at recreating the murky atmosphere of those cities under Allied occupations. But overall there are far too many info-dump passages in those novels to make them a good read. The author has clearly done his documentation job correctly, from the early homosexual persecutions to Kristallnacht, to the fights for control between the occupying forces, but the information about the historical context is not always delivered in the most fluent way. And having the main character working under Heydrich, then joining the SS, does make relating to him rather difficult, to say the least.
It is hence unclear to me why those books are so popular, apart from the easy marketing line that stories involving Nazis are more likely to sell… Nothing to be compared with the fantastic Alone in Berlin, depicting the somewhat senseless resistance of a Berliner during the Nazi years, dropping hand-written messages against the regime under strangers’ doors.

Filed under: Statistics Tagged: Alone in Berlin, Berlin, Berlin noir, book reviews, Dachau, Graham Greene, Nazi State, Raymond Chandler, Reinhart Heydrich, Wien, WW II

### Symmetrybreaking - Fermilab/SLAC

QuarkNet takes on solar eclipse science

High school students nationwide will study the effects of the solar eclipse on cosmic rays.

While most people are marveling at Monday’s eclipse, a group of researchers will be measuring its effects on cosmic rays—particles from space that collide with the earth’s atmosphere to produce muons, heavy cousins of the electron. But these researchers aren’t the usual PhD-holding suspects: They’re still in high school.

More than 25 groups of high school students and teachers nationwide will use small-scale detectors to find out whether the number of cosmic rays raining down on Earth changes during an eclipse. Although the eclipse event will last only three hours, this student experiment has been a months-long collaboration.

The cosmic ray detectors used for this experiment were provided as kits by QuarkNet, an outreach program that gives teachers and students opportunities to try their hands at high-energy physics research. Through QuarkNet, high school classrooms can participate in a whole range of physics activities, such as analyzing real data from the CMS experiment at CERN and creating their own experiments with detectors.

“Really active QuarkNet groups run detectors all year and measure all sorts of things that would sound crazy to a physicist,” says Mark Adams, QuarkNet’s cosmic ray studies coordinator. “It doesn’t really matter what the question is as long as it allows them to do science.”

And this year’s solar eclipse will give students a rare chance to answer a cosmic question: Is the sun a major producer of the cosmic rays that bombard Earth, or do they come from somewhere else?

“We wanted to show that, if the rate of cosmic rays changes a lot during the eclipse, then the sun is a big source of cosmic rays,” Adams says. “We sort of know that the sun is not the main source, but it’s a really neat experiment. As far as we know, no one has ever done this with cosmic ray muons at the surface.”
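The comparison Adams describes (count muons during the eclipse, then compare with a baseline window of the same length) amounts to a simple Poisson rate test. Here is a sketch of that arithmetic; every number below is invented for illustration:

```python
import math

# Hypothetical counts: muons recorded in two equal-length windows.
baseline_counts = 3600  # one-hour window well before the eclipse
eclipse_counts = 3540   # one-hour window spanning the eclipse

# For Poisson counts, the difference divided by its standard error
# gives a rough z-score for "did the rate change?"
diff = eclipse_counts - baseline_counts
sigma = math.sqrt(baseline_counts + eclipse_counts)
z = diff / sigma
print(f"z = {z:.2f}")  # |z| < 2 here: no significant rate change
```

With these made-up numbers the drop is well within statistical noise, which is the expected outcome if the sun is not the main source of cosmic rays.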

Adams and QuarkNet teacher Nate Unterman will be leading a group of nine students and five adults to Missouri to the heart of the path of totality—where the moon will completely cover the sun—to take measurements of the event. Other QuarkNet groups will stay put, measuring what effect a partial eclipse might have on cosmic rays in their area.

Most cosmic rays are likely high-energy particles from exploding stars deep in space, which are picked up via muons in QuarkNet detectors. But the likely result of the experiment—that cosmic rays don’t change their rate when the moon moves in front of the sun—doesn’t eclipse the excitement for the students in the collaboration.

“They’ve been working for months and months to develop the design for the measurements and the detectors,” Adams says. “That’s the great part—they’re not focused on what the answer is but the best way to find it.”

### Peter Coles - In the Dark

The Anomaly of Research England

The other day I was surprised to see this tweet announcing the impending formation of a new council under the umbrella of the new organisation UK Research & Innovation (UKRI):

These changes are consequences of the Higher Education and Research Act (2017) which was passed at the end of the last Parliament before the Prime Minister decided to reduce the Government’s majority by calling a General Election.

It seems to me that it’s very strange indeed to have a new council called Research England sitting inside an organisation that purports to be a UK-wide outfit without having a corresponding Research Wales, Research Scotland and Research Northern Ireland. The seven existing research councils which will henceforth sit alongside Research England within UKRI are all UK-wide.

This anomaly stems from the fact that Higher Education policy is ostensibly a devolved matter, meaning that England, Wales, Scotland and Northern Ireland each have separate bodies to oversee their universities. Included in the functions of these bodies is the so-called QR funding which is allocated on the basis of the Research Excellence Framework. This used to be administered by the Higher Education Funding Council for England (HEFCE), but each devolved council distributed its own funds in its own way. The new Higher Education and Research Act however abolishes HEFCE and transfers some of its functions to an organisation called the Office for Students, but not those connected with research. Hence the creation of the new ‘Research England’. This will not only distribute QR funding among English universities but also administer a number of interdisciplinary research programmes.

The dual support system of government funding consists of block grants of QR funding allocated as above, alongside grants targeted at specific projects by the Research Councils (such as the Science and Technology Facilities Council, which is responsible for astronomy, particle physics and nuclear physics research). There is nervousness in England that the new structure will put both elements of the dual support system inside the same organisation, but my greatest concern is that by excluding Wales, Scotland and Northern Ireland, English universities will be given an unfair advantage when it comes to interdisciplinary research. Surely there should be representation within UKRI for Wales, Scotland and Northern Ireland too?

Incidentally, the Science and Technology Facilities Council (STFC) has started the process of recruiting a new Executive Chair. If you’re interested in this position you can find the advertisement here. Ominously, the only thing mentioned under ‘Skills Required’ is ‘Change Management’.

### Emily Lakdawalla - The Planetary Society Blog

Could the total solar eclipse reveal a comet?
Next week's solar eclipse will reveal the Sun's corona, nearby bright planets and stars, and, if we get extremely lucky, a comet!

## August 15, 2017

### Christian P. Robert - xi'an's og

at the foot of Monte Rosa

After last year’s failed attempt at climbing a summit in the Monte Rosa group, we came back for hiking on the “other side” of Monte Rosa, at the Italian foot of Dufourspitze, with less ambitious plans of hiking around this fantastic range. Alas the weather got set against us and thunderstorms followed one another, making altitude hikes unreasonable because of the fresh snow and frequent fog, and lower altitude walks definitely unpleasant, turning the trails into creeks.

While we had many nice chats with local people in a mixture of French and Italian, and sampled local cheese and beers, this still felt like a bit of a wasted vacation week, especially when remembering how reasonable the weather had been the week before in Scotland. However, all things considered, that very week a deadly heat wave was crossing the south of Europe, so I should not be complaining about rain and snow! And running a whole week checking the altimeter instead of the chronometer is a nice experience.

Filed under: Mountains, pictures, Travel, Wines Tagged: climbing, Dufourspitze, hiking, Italia, Macugnana, Monte Rosa, rain, running, snow, Switzerland

### Symmetrybreaking - Fermilab/SLAC

Dark matter hunt with LUX-ZEPLIN

A video from SLAC National Accelerator Laboratory explains how the upcoming LZ experiment will search for the missing 85 percent of the matter in the universe.

What exactly is dark matter, the invisible substance that accounts for 85 percent of all the matter in the universe but can’t be seen even with our most advanced scientific instruments?

Most scientists believe it’s made of ghostly particles that rarely bump into their surroundings. That’s why billions of dark matter particles might zip right through our bodies every second without us even noticing. Leading candidates for dark matter particles are WIMPs, or weakly interacting massive particles.

Scientists at SLAC National Accelerator Laboratory are helping to build and test one of the biggest and most sensitive detectors ever designed to catch a WIMP: the LUX-ZEPLIN or LZ detector. The following video explains how it works.

## Dark Matter Hunt with LUX-ZEPLIN (LZ)

Video of Dark Matter Hunt with LUX-ZEPLIN (LZ)

### Peter Coles - In the Dark

Well, I made it back to Cardiff on schedule last night, although that did involve getting home at 2am. I was pretty much exhausted by then so had a bit of a lie-in this morning. I think I’m getting too old for all this gallivanting about. I crashed out soon after getting home and had to spend an hour or so this morning sorting through the stack of mail that arrived while I was away (including some book tokens courtesy of another crossword prize).

I usually try to get to the airport plenty of time in advance when I’m flying somewhere, so got to Copenhagen airport yesterday a good three hours before my scheduled departure. I had checked in online before setting out so I could have left it later, but I’m obviously a creature of habit. As it happened I was able to leave my luggage at the bag drop immediately and it took no longer than 5 minutes to clear the security checks, which meant that I was left with time to kill but I had my iPod and plenty to read so it was all fine.

I was a little disturbed when I got to the departure gate to hear the announcement that ‘Tonight’s British Airways flight to London Heathrow is operated by Qatar Airways’, but at least it explained why it wasn’t a BA plane standing outside on the tarmac. As it happened the flight went smoothly and Qatar Airways do free food and drink for economy class passengers (unlike BA, who nowadays sell expensive snacks and beverages supplied by Marks and Spencer). The only downside when we arrived at Heathrow was that we parked at a remote stand and had to wait 20 minutes or so for a bus to take us to Terminal 5. I could hear the ground crew unloading luggage while we waited, however, so that meant less time waiting at the carousels…

On previous occasions I’ve been greeted at Heathrow by a packed passport control area, but this time it was virtually deserted. In fact I’ve never seen it so empty. My bag was waiting for me when I got to the reclaim area so I got to the Heathrow Express terminal and thence to Paddington in time for the 10.45pm train to Cardiff.

When I got back to the Data Innovation Research Institute office around lunchtime I discovered that our big screen TV has been installed.

This will of course be used exclusively for skype calls and video conferences and in no way for watching cricket or football or any other inappropriate activity.

Well, I’d better get on. Marking resit exams is the order of the day.

### Emily Lakdawalla - The Planetary Society Blog

A dispatch from the path of totality: the 2017 solar eclipse in Ravenna, Nebraska
Ravenna, population 1,400, sits on the plains of central Nebraska, and almost on the center line of the path of totality for the upcoming Great American Eclipse. Nebraska native Shane Pekny reports on how this small town is preparing for the big event.

### Emily Lakdawalla - The Planetary Society Blog

Bill Nye's top eclipse tip: Protect your eyes
Bill Nye, CEO of The Planetary Society, has some suggestions for staying safe during next week's solar eclipse.

### Tommaso Dorigo - Scientificblogging

Revenge Of The Slimeballs - Part 3
This is the third part of Chapter 3 of the book "Anomaly! Collider Physics and the Quest for New Phenomena at Fermilab". The chapter recounts the pioneering measurement of the Z mass by the CDF detector, and the competition with SLAC during the summer of 1989. The title of the post is the same as that of chapter 3, and it refers to the way some SLAC physicists called their Fermilab colleagues, whose hadron collider was to their eyes obviously inferior to the electron-positron linear collider.

### John Baez - Azimuth

Norbert Blum on P versus NP

There’s a new paper on the arXiv that claims to solve a hard problem:

• Norbert Blum, A solution of the P versus NP problem.

Most papers that claim to solve hard math problems are wrong: that’s why these problems are considered hard. But these papers can still be fun to look at, at least if they’re not obviously wrong. It’s fun to hope that maybe today humanity has found another beautiful grain of truth.

I’m not an expert on the P versus NP problem, so I have no opinion on this paper. So don’t get excited: wait calmly by your radio until you hear from someone who actually works on this stuff.

I found the first paragraph interesting, though. Here it is, together with some highly non-expert commentary. Beware: everything I say could be wrong!

Understanding the power of negations is one of the most challenging problems in complexity theory. With respect to monotone Boolean functions, Razborov [12] was the first who could shown that the gain, if using negations, can be super-polynomial in comparision to monotone Boolean networks. Tardos [16] has improved this to exponential.

I guess a ‘Boolean network’ is like a machine where you feed in a string of bits and it computes new bits using the logical operations ‘and’, ‘or’ and ‘not’. If you leave out ‘not’ the Boolean network is monotone, since then making more inputs equal to 1, or ‘true’, is bound to make more of the output bits 1 as well. Blum is saying that including ‘not’ makes some computations vastly more efficient… but that this stuff is hard to understand.
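Monotonicity can be checked exhaustively on a toy example. Here is my own small sketch (not from Blum’s paper) of an AND/OR-only function, verifying that raising any input from 0 to 1 never lowers the output:

```python
from itertools import product

# An AND/OR-only ("monotone") Boolean function: no NOT gates anywhere.
# This particular function is an invented toy example.
def f(a, b, c):
    return (a and b) or c

# Exhaustive check over all 2^3 inputs: flipping any single input
# from 0 to 1 never sends the output from 1 to 0.
for bits in product([0, 1], repeat=3):
    for i, bit in enumerate(bits):
        if bit == 0:
            raised = list(bits)
            raised[i] = 1
            assert f(*bits) <= f(*raised)
print("monotone: raising an input never lowers the output")
```

Add a single `not` and the check fails, which is the sense in which negations give extra power.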

For the characteristic function of an NP-complete problem like the clique function, it is widely believed that negations cannot help enough to improve the Boolean complexity from exponential to polynomial.

A bunch of nodes in a graph are a clique if each of these nodes is connected by an edge to every other. Determining whether a graph with $n$ vertices has a clique with more than $k$ nodes is a famous problem: the clique decision problem.

For example, here’s a brute-force search for a clique with at least 4 nodes:
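In code, such a brute-force search might look like this (my own toy sketch: it simply tries every subset of k vertices, which is exponentially expensive by design):

```python
from itertools import combinations

# Brute-force clique check: try every k-subset of vertices and test
# whether all pairs within it are joined by an edge.
def has_clique(edges, vertices, k):
    edge_set = {frozenset(e) for e in edges}
    return any(
        all(frozenset(pair) in edge_set for pair in combinations(subset, 2))
        for subset in combinations(vertices, k)
    )

# An invented graph: a 4-clique on vertices 0-3, plus a pendant vertex 4.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 4)]
print(has_clique(edges, range(5), 4))  # True
print(has_clique(edges, range(5), 5))  # False
```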

The clique decision problem is NP-complete. This means that if you can solve it with a Boolean network whose complexity grows like some polynomial in n, then P = NP. But if you can’t, then P ≠ NP.

(Don’t ask me what the complexity of a Boolean network is; I can guess but I could get it wrong.)

I guess Blum is hinting that the best monotone Boolean network for solving the clique decision problem has a complexity that’s exponential in $n.$ And then he’s saying it’s widely believed that not gates can’t reduce the complexity to a polynomial.

Since the computation of an one-tape Turing machine can be simulated by a non-monotone Boolean network of size at most the square of the number of steps [15, Ch. 3.9], a superpolynomial lower bound for the non-monotone network complexity of such a function would imply P ≠ NP.

Now he’s saying what I said earlier: if you show it’s impossible to solve the clique decision problem with any Boolean network whose complexity grows like some polynomial in n, then you’ve shown P ≠ NP. This is how Blum intends to prove P ≠ NP.

For the monotone complexity of such a function, exponential lower bounds are known [11, 3, 1, 10, 6, 8, 4, 2, 7].

Should you trust someone who claims they’ve proved P ≠ NP, but can’t manage to get their references listed in increasing order?

But until now, no one could prove a non-linear lower bound for the nonmonotone complexity of any Boolean function in NP.

That’s a great example of how helpless we are: we’ve got all these problems whose complexity should grow faster than any polynomial, and we can’t even prove their complexity grows faster than linear. Sad!

An obvious attempt to get a super-polynomial lower bound for the non-monotone complexity of the clique function could be the extension of the method which has led to the proof of an exponential lower bound of its monotone complexity. This is the so-called “method of approximation” developed by Razborov [11].

I don’t know about this. All I know is that Razborov and Rudich proved a whole bunch of strategies for proving P ≠ NP can’t possibly work. These strategies are called ‘natural proofs’. Here are some friendly blog articles on their result:

• Timothy Gowers, How not to prove that P is not equal to NP, 3 October 2013.

• Timothy Gowers, Razborov and Rudich’s natural proofs argument, 7 October 2013.

From these I get the impression that what Blum calls ‘Boolean networks’ may be what other people call ‘Boolean circuits’. But I could be wrong!

Continuing:

Razborov [13] has shown that his approximation method cannot be used to prove better than quadratic lower bounds for the non-monotone complexity of a Boolean function.

So, this method is unable to prove some NP problem can’t be solved in polynomial time and thus prove P ≠ NP. Bummer!

But Razborov uses a very strong distance measure in his proof for the inability of the approximation method. As elaborated in [5], one can use the approximation method with a weaker distance measure to prove a super-polynomial lower bound for the non-monotone complexity of a Boolean function.

This reference [5] is to another paper by Blum. And in the end, he claims to use similar methods to prove that the complexity of any Boolean network that solves the clique decision problem must grow faster than a polynomial.

So, if you’re trying to check his proof that P ≠ NP, you should probably start by checking that other paper!

The picture below, by Behnam Esfahbod on Wikicommons, shows the two possible scenarios. The one at left is the one Norbert Blum claims to have shown we’re in.

## August 14, 2017

### Clifford V. Johnson - Asymptotia

A Skyline to Come?

I finished that short story project for that anthology I told you about and submitted the final files to the editor on Sunday. Hurrah. It'll appear next year, and I'll let you know when once they announce the book. It was fun to work on this story. Above are a couple of process shots of me working (on my iPad) on an imagining of the LA skyline as it might look some decades from now. I've added several buildings among the ones that might be familiar. It is for the opening establishing shot of the whole book. There's one of San Francisco later on, by the way. (I learned more about the SF skyline and the Bay Bridge than I care to admit now...)

I will admit that I went a bit overboard with the art for this project! I intended a much rougher and looser style in both pencil work and colour, and of course ended up obsessing far too much over precision and detail in the end (as you can also see here, here and here). As an interesting technical landmark [...] Click to continue reading this post

The post A Skyline to Come? appeared first on Asymptotia.

### Peter Coles - In the Dark

Farvel til NBI

I just had my last lunch in the canteen in the Niels Bohr Institute and will shortly be heading off to the airport to begin the journey back to Blighty. It’s been a pretty intense couple of weeks but I’ve enjoyed it enormously and have learnt a lot, even though I’ve done hardly any of the things I originally planned to do!

I haven’t been staying in the building shown in the picture, but in one of the adjacent buildings not shown. In fact my office is directly above the canteen. I took this picture on the way home on Sunday, as I noticed that the main entrance has the date ‘1920’ written on it. I do hope they’re planning a 100th anniversary!

Anyway, farewell to everyone at the Niels Bohr Institute and elsewhere. I hope to return before too long.

### Emily Lakdawalla - The Planetary Society Blog

Book Review: Sun Moon Earth
With the North American Total Solar Eclipse coming on August 21, people across the continent are getting eclipse mania! Astronomer Tyler Nordgren has written a detailed book on eclipses with a special focus on the August 21st event.

### Peter Coles - In the Dark

A Picture of Peter Cvjetanovic

The angry chap on the right (appropriately enough) in this image, taken at the violent demonstration in Charlottesville, VA, at the weekend, is an alt-right white nationalist, which is to say a Nazi, by the name of Peter Cvjetanovic.

Apparently Peter is unhappy that his picture is being shared so widely on the internet. Life is tough sometimes.

And, yes, I mean Nazi.

## August 12, 2017

### Lubos Motl - string vacua and pheno

Arctic mechanism: a derivation of the multiple point criticality principle?
One of the ideas I found irresistible in my research during the last 3 weeks was the multiple point criticality principle mentioned in a recent blog post about a Shiu-Hamada paper.

Froggatt's and Nielsen's and Donald Bennett's multiple point criticality principle says that the parameters of quantum field theory are chosen on the boundaries of a maximum number of phases – i.e. so that something maximally special seems to happen over there.

This principle is supported by a reasonably impressive prediction of the fine-structure constant, the top quark mass, the Higgs boson mass, and perhaps the neutrino masses and/or the cosmological constant related to them.

In some sense, the principle modifies the naive "uniform measure" on the parameter space that is postulated by naturalness. We may say that the multiple point criticality principle not only modifies naturalness. It almost exactly negates it. The places with $$\theta=0$$ where $$\theta$$ is the distance from some phase transition are of measure zero, and therefore infinitely unlikely, according to naturalness. But the multiple point criticality principle says that they're really preferred. In fact, if there are several phase transitions and $$\theta_i$$ measure the distances from several domain walls in the moduli space, the multiple point criticality principle wants to set all the parameters $$\theta_i$$ equal to zero.

Is there an everyday life analogy for that? I think so. Look at the picture at the top and ignore the boat with the German tourist in it. What you see is the Arctic Ocean – with lots of water and ice over there. What is the temperature of the ice and the water? Well, it's about 0 °C, the melting point of water. In reality, the melting point is a bit different due to the salinity.

But in this case, there exists a very good reason to conclude that we're near the melting point. It's because we can see that the water and the ice co-exist. And the water may only exist above the melting point; and the ice may only exist beneath the melting point. The intersection of these two intervals is a narrow interval – basically the set containing the melting point only. If the water were much warmer than the melting point, it would have to cool quickly enough because the ice underneath is colder – it can't really be above the melting point.

(The heat needed for the ice to melt is equal to the heat needed to warm the same amount of water by some 80 °C if I remember well.)

How is it possible that the temperature 0 °C, although it's a special value of measure zero, is so popular in the Arctic Ocean? It's easy. If you study what's happening when you warm the ice – start with a body of ice only – you will ultimately get to the melting point and a part of the ice will melt. You will obtain a mixture of ice and water. Now, if you add additional heat, the ice no longer heats up. Instead, the extra heat will be used to transform an increasing fraction of the ice to water – i.e. to melt the ice.

So the growth of the temperature stops at the melting point. Instead of the temperature, what the additional incoming heat increases is the fraction of the H2O molecules that have already adopted the liquid state. Only when the fraction reaches 100% do you get pure liquid water, and only then may additional heating raise the temperature above 0 °C.
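As a toy illustration of this plateau (my own sketch, using textbook round numbers rather than anything from the post: specific heats 2.1 and 4.2 J/g·K, latent heat 334 J/g ≈ 80 cal/g, which matches the "80 °C" remark above):

```python
# Temperature of 1 g of H2O as heat is added, starting as ice at -10 °C.
C_ICE, C_WATER = 2.1, 4.2   # specific heats, J/(g*K) (round numbers)
L_FUSION = 334.0            # latent heat of fusion, J/g (~80 cal/g)

def temperature(q):
    """Temperature in °C after adding q joules to 1 g of ice at -10 °C."""
    warm_ice = C_ICE * 10            # heat to bring the ice up to 0 °C
    if q <= warm_ice:
        return -10 + q / C_ICE
    q -= warm_ice
    if q <= L_FUSION:                # melting: temperature plateaus at 0 °C
        return 0.0
    q -= L_FUSION
    return q / C_WATER               # pure liquid water warming past 0 °C

# Very different heat inputs all land on the 0 °C plateau while ice and
# water coexist; only after full melting does the temperature rise again.
print([round(temperature(q), 3) for q in (30, 100, 300, 397)])  # [0.0, 0.0, 0.0, 10.0]
```

The wide plateau is exactly why "0 °C" is so popular in the Arctic despite being a measure-zero value of the temperature.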

In theoretical physics, we want things like the top quark mass $$m_t$$ to be analogous to the temperature $$T$$ of the Arctic water. Can we find a similar mechanism in physics that would just explain why the multiple point criticality principle is right?

The easiest way is to take the analogy literally and consider the multiverse. The multiverse may be just like the Arctic Ocean. And parts of it may be analogous to the floating ice, parts of it may be analogous to the water underneath. There could be some analogy of the "heat transfer" that forces something like $$m_t$$ to be nearly the same in the nearby parts of the multiverse. But the special values of $$m_t$$ that allow several phases may occupy a finite fraction of the multiverse and what is varying in this region isn't $$m_t$$ but rather the percentage of the multiverse occupied by the individual phases.

There may be regions of the multiverse where several phases co-exist and several parameters analogous to $$m_t$$ appear to be fine-tuned to special values.

I am not sure whether an analysis of this sort may be quantified and embedded into a proper full-blown cosmological model. It would be nice. But maybe the multiverse isn't really needed. It seems to me that at these special values of the parameters where several phases co-exist, the vacuum states could naturally be superpositions of quantum states built on several classically very different configurations. Such a law would make it more likely that the cosmological constant is described by a seesaw mechanism, too.

If it's true and if the multiple-phase special points are favored, it's because of some "attraction of the eigenvalues". If you know random matrix theory, i.e. the statistical theory of many energy levels in the nuclei, you know that the energy levels tend to repel each other. It's because some Jacobian factor is very small in the regions where the energy eigenvalues approach each other. Here, we need the opposite effect. We need the values of parameters such as $$m_t$$ to be attracted to the special values where phases may be degenerate.
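The "small Jacobian factor" behind level repulsion can be seen already in 2×2 matrices. A minimal sketch (mine, not from the post; a GOE-like ensemble with conventional normalizations): the gap between eigenvalues of [[a, b], [b, d]] is sqrt((a-d)² + 4b²), which vanishes only when both a = d and b = 0, so the probability of a gap below ε scales like ε², not ε.

```python
import math
import random

random.seed(1)

# 2x2 real symmetric Gaussian matrices [[a, b], [b, d]]: the eigenvalue gap
# is sqrt((a - d)^2 + 4*b^2), zero only on the codimension-2 locus a = d, b = 0.
def gap():
    a, d = random.gauss(0, 1), random.gauss(0, 1)
    b = random.gauss(0, math.sqrt(0.5))
    return math.sqrt((a - d) ** 2 + 4 * b ** 2)

gaps = [gap() for _ in range(200_000)]
mean = sum(gaps) / len(gaps)

def frac(eps):
    """Fraction of sampled gaps smaller than eps."""
    return sum(g < eps for g in gaps) / len(gaps)

# Doubling eps roughly quadruples the count of small gaps: P(gap < eps) ~ eps^2.
print(round(frac(0.2 * mean) / frac(0.1 * mean), 1))  # close to 4 = 2^2
```

The mechanism Motl asks for would need the opposite behavior: an enhancement rather than a suppression near the degenerate points.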

So maybe even if you avoid any assumption about the existence of any multiverse, you may invent a derivation at the level of the landscape only. We normally assume that the parameter spaces of the low-energy effective field theory (or their parts allowed in the landscape, i.e. those draining the swamp) are covered more or less uniformly by the actual string vacua. We know that this can't quite be right. Sometimes we can't even say what the "uniform distribution" is supposed to look like.

But this assumption of uniformity could be flawed in very specific and extremely interesting ways. It could be that the actual string vacua actually love to be degenerate – "almost equal" superpositions of vacua that look classically very different from each other. In general, there should be some tunneling in between the vacua and the tunneling gives you off-diagonal matrix elements (between different phases) to many parameters describing the low-energy physics of the vacua (coupling constants, cosmological constant).

And because of the off-diagonal elements, the actual vacua we should find when we're careful aren't actually "straightforward quantum coherent states" built around some classical configurations. But very often, they may like to be superpositions – with non-negligible coefficients – of many phases. If that's so, even the single vacuum – in our visible Universe – could be analogous to the Arctic Ocean in my metaphor and an explanation of the multiple point criticality principle could exist.

If it were right qualitatively, it could be wonderful. One could try to look for a refinement of this Arctic landscape theory – a theory that tries to predict more realistic probability distributions on the low-energy effective field theories' parameter spaces, distributions that are non-uniform and at least morally compatible with the multiple point criticality principle. This kind of reasoning could even lead us to a calculation of some values of the parameters that are much more likely than others – and it could be the right ones which are compatible with our measurements.

A theory of the vacuum selection could exist. I tend to think that this kind of research hasn't been sufficiently pursued partly because of the left-wing bias of the research community. They may be impartial in many ways but the biases often do show up even in faraway contexts. Leftists may instinctively think that non-uniform distributions are politically incorrect so they prefer the uniformity of naturalness or the "typical vacua" in the landscape. I have always felt that these Ansätze are naive and on the wrong track – and the truth is much closer to their negations. The apparent numerically empirical success of the multiple point criticality principle is another reason to think so.

Note that while we're trying to calculate some non-uniform distributions, the multiple point criticality principle is a manifestation of egalitarianism and multiculturalism from another perspective – because several phases co-exist as almost equal ones. ;-)

## August 11, 2017

### The n-Category Cafe

Magnitude Homology in Sapporo

John and I are currently enjoying Applied Algebraic Topology 2017 in the city of Sapporo, on the northern Japanese island of Hokkaido.

I spoke about magnitude homology of metric spaces. A central concept in applied topology is persistent homology, which is also a homology theory of metric spaces. But magnitude homology is different.

It was brought into being one year ago on this very blog, principally by Mike Shulman, though Richard Hepworth and Simon Willerton had worked out a special case before. You can read a long post of mine about it from a year ago, which in turn refers back to a very long comments thread of an earlier post.

But for a short account, try my talk slides. They introduce both magnitude itself (including some exciting new developments) and magnitude homology. Both are defined in the wide generality of enriched categories, but I concentrated on the case of metric spaces.

Of course, John’s favourite slide was the one shown.

### The n-Category Cafe

A Graphical Calculus for Proarrow Equipments

guest post by David Myers

Proarrow equipments (which also go by the names “fibrant double categories” and “framed bicategories”) are wonderful and fundamental category-like objects. If categories are the abstract algebras of functions, then equipments are the abstract algebras of functions and relations. They are a fantastic setting to do formal category theory, which you can learn about in Mike’s post about them on this blog!

For my undergraduate thesis, I came up with a graphical calculus for working with equipments. I wasn’t the first to come up with it (if you’re familiar with both string diagrams and equipments, it’s basically the only sort of thing that you’d try), but I did prove it sound using a proof similar to Joyal and Street’s proof of the soundness of the graphical calculus for monoidal categories. You can see the full paper on the arXiv, or see the slides from a talk I gave about it at CT2017 here. Below the fold, I’ll show you the diagrams and a bit of what you can do with them.

#### What is a Double Category?

A double category is a category internal to the category of categories. Now, this is fun to say, but takes a bit of unpacking. Here is a more elementary definition together with the string diagrams:

Definition: A double category has

• Objects $A$, $B$, $C$, $\ldots$, which will be written as bounded plane regions of different colors.
• Vertical arrows $f : A \to B$, $\ldots$, which we will just call arrows and write as vertical lines, directed downwards, dividing the plane region for $A$ from the one for $B$.
• Horizontal arrows $J$, $K$, $H : A \to B$, $\ldots$, which we will just call proarrows and write as horizontal lines dividing the plane region for $A$ from the one for $B$.
• 2-cells, $\ldots$, which are represented as beads between the arrows and proarrows.

The usual square notation is on the left, and the string diagrams are on the right.

There are two ways to compose 2-cells: horizontally and vertically. These satisfy an interchange law saying that composing horizontally and then vertically is the same as composing vertically and then horizontally.
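In symbols (my notation, not the post's: writing $\mid$ for horizontal and $\cdot$ for vertical composition of 2-cells), the interchange law reads:

```latex
(\alpha \mid \beta) \cdot (\gamma \mid \delta) \;=\; (\alpha \cdot \gamma) \mid (\beta \cdot \delta)
```

whenever both sides are defined.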

Note that when we compose 2-cells horizontally, we must compose the vertical arrows. Therefore, the vertical arrows will form a category. Similarly, when we compose 2-cells vertically, we must compose the horizontal proarrows. Therefore, the horizontal proarrows will form a category. Except, this is not quite true; in most of our examples in the wild, the composition of proarrows will only commute up to isomorphism, so they will form a bicategory. I'll just hand-wave this away for the rest of the post.

This is about all there is to the graphical calculus for double categories. Any deformation of a double diagram that keeps the vertical arrows vertical and the horizontal proarrows horizontal will describe an equal composite in any double category.

Here are some examples of double categories:

In many double categories that we meet “in the wild”, the arrows will be function-like and the proarrows relation-like. These double categories are called equipments. In these cases, we can turn functions into relations by taking their graphs. This can be realized in the graphical calculus by bending vertical arrows horizontal.

#### Companions, Conjoints, and Equipments

An arrow has a companion if there is a proarrow together with two 2-cells such that

= and = .

I call these the “kink identities”, because they are reminiscent of the “zig-zag identities” for adjunctions in string diagrams. We can think of the companion as the graph of the arrow, regarded as a subset of its domain times its codomain.

Similarly, an arrow is said to have a conjoint if there is a proarrow together with two 2-cells such that

= and = .

Definition: A proarrow equipment is a double category where every arrow has a conjoint and a companion.

The prototypical example of a proarrow equipment, and also the reason for the name, is the equipment of categories, functors, profunctors, and profunctor morphisms. In this equipment, companions are the restriction of the hom of the codomain by the functor on the left, and conjoints are the restriction of the hom of the codomain by the functor on the right.

In the equipment with objects sets, arrows functions, and proarrows relations, the companion and conjoint are the graph of a function as a relation from the domain to codomain or from the codomain to domain respectively.

The following lemma is a central elementary result of the theory of equipments:

Lemma (Spider Lemma): In an equipment, we can bend arrows. More formally, there is a bijective correspondence between diagrams of the form on the left and diagrams of the form on the right:

$\approx$ .

Proof. The correspondence is given by composing the outermost vertical or horizontal arrows by their companion or conjoint (co)units, as suggested by the slight bends in the arrows above. The kink identities then ensure that these two processes are inverse to each other, giving the desired bijection.

In his post, Mike calls this the “fundamental lemma”. This is the engine humming under the graphical calculus; in short, the Spider Lemma says that we can bend vertical wires horizontal. We can use this bending to prove a classical result of category theory in a very general setting.

It is a classical fact of category theory that an adjunction $f \dashv g : A \rightleftarrows B$ may be defined using natural transformations $\eta : \mathrm{id} \to fg$ and $\epsilon : gf \to \mathrm{id}$ (which we will call a zig-zag adjunction, after the coherence conditions they have to satisfy – also called the triangle equations), or by giving a natural isomorphism $\psi : B(f, 1) \cong A(1, g)$. This equivalence holds in any proarrow equipment, which we can now show quickly and intuitively with string diagrams.
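With those conventions ($\eta : \mathrm{id} \to fg$ and $\epsilon : gf \to \mathrm{id}$; the whiskering notation below is mine), the zig-zag (triangle) identities can be written:

```latex
(f\epsilon) \circ (\eta f) = \mathrm{id}_f, \qquad (\epsilon g) \circ (g\eta) = \mathrm{id}_g
```

These are the equations drawn as string diagrams in what follows.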

Suppose we have an adjunction $\dashv$ , given by the vertical cells and , satisfying the zig-zag identities

= and = .

By bending the unit and counit, we get the horizontal cells and . Bending the zig-zag identities shows that these maps are inverse to each other

= = = ,

and are therefore the natural isomorphism $\cong$ we wanted.

Going the other way, suppose is a natural isomorphism with inverse . That is,

= and = .

Then we can define a unit and counit by bending. These satisfy the zig-zag identities by pulling straight and using (1):

= = = ,

= = = .

Though this proof can be discovered graphically, it specializes to the usual argument in the case that the equipment is an equipment of enriched categories!

#### And Much, Much More!

In the paper, you’ll find that every deformation of an equipment diagram gives the same composite – the graphical calculus is sound. But you’ll also find an application of the calculus: a “Yoneda-style” embedding of every equipment into the equipment of categories enriched in it. The paper still definitely needs some work, so I welcome any feedback in the comments!

I hope these string diagrams make using equipments easier and more fun.

## August 10, 2017

### Symmetrybreaking - Fermilab/SLAC

Think FAST

The new Fermilab Accelerator Science and Technology facility at Fermilab looks to the future of accelerator science.

Unlike most particle physics facilities, the new Fermilab Accelerator Science and Technology facility (FAST) wasn’t constructed to find new particles or explain basic physical phenomena. Instead, FAST is a kind of workshop—a space for testing novel ideas that can lead to improved accelerator, beamline and laser technologies.

Historically, accelerator research has taken place on machines that were already in use for experiments, making it difficult to try out new ideas. Tinkering with a physicist’s tools mid-search for the secrets of the universe usually isn’t a great idea. By contrast, FAST enables researchers to study pieces of future high-intensity and high-energy accelerator technology with ease.

“FAST is specifically aiming to create flexible machines that are easily reconfigurable and that can be accessed on very short notice,” says Alexander Valishev, head of the department that manages FAST. “You can roll in one experiment and roll the other out in a matter of days, maybe months, without expensive construction and operation costs.”

This flexibility is part of what makes FAST a useful place for training up new accelerator scientists. If a student has an idea, or something they want to study, there’s plenty of room for experimentation.

“We want students to come and do their thesis research at FAST, and we already have a number of students working,” Valishev says. “We have already had a PhD awarded on the basis of work done at FAST, but we want more of that.”

This yellow cryomodule will house the superconducting cavities that take the beam’s energy from 50 to 300 MeV.

Courtesy of Fermilab

### Small ring, bright beam

FAST will eventually include three parts: an electron injector, a proton injector and a particle storage ring called the Integrable Optics Test Accelerator, or IOTA. Although it will be small compared to other rings—only 40 meters long, while Fermilab’s Main Injector has a circumference of 3 kilometers—IOTA will be the centerpiece of FAST after its completion in 2019. And it will have a unique feature: the ability to switch from being an electron accelerator to a proton accelerator and back again.

“The sole purpose of this synchrotron is to test accelerator technology and develop that tech to test ideas and theories to improve accelerators everywhere,” says Dan Broemmelsiek, a scientist in the IOTA/FAST department.

One aspect of accelerator technology FAST focuses on is creating higher-intensity or “brighter” particle beams.

Brighter beams pack a bigger particle punch. A high-intensity beam could send a detector twice as many particles as is usually possible. Such an experiment could be completed in half the time, shortening the data collection period by several years.

IOTA will test a new concept for accelerators called integrable optics, which is intended to create a more concentrated, stable beam, possibly producing higher intensity beams than ever before.

“If this IOTA thing works, I think it could be revolutionary,” says Jamie Santucci, an engineering physicist working on FAST. “It’s going to allow all kinds of existing accelerators to pack in way more beam. More beam, more data.”

The beam starts here: Once electrons are sent down the beamline, they pass through a set of solenoid magnets—the dark blue rings—before entering the first two superconducting cavities.

Courtesy of Fermilab

### Maximum energy milestone

Although the completion of IOTA is still a few years away, the electron injector will reach a milestone this summer: producing an electron beam with an energy of 300 million electronvolts (MeV).

“The electron injector for IOTA is a research vehicle in its own right,” Valishev says. It provides scientists a chance to test superconducting accelerators, a key piece of technology for future physics machines that can produce intense acceleration at relatively low power.

“At this point, we can measure things about the beam, chop it up or focus it,” Broemmelsiek says. “We can use cameras to do beam diagnostics, and there’s space here in the beamline to put experiments to test novel instrumentation concepts.”

The electron beam’s previous maximum energy of 50 MeV was achieved by passing the beam through two superconducting accelerator cavities and has already provided opportunities for research. The arrival of the 300 MeV beam this summer—achieved by sending the beam through another eight superconducting cavities—will open up new possibilities for accelerator research, with some experiments already planned to start as soon as the beam is online.
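Some back-of-the-envelope arithmetic from those figures (my illustration, inferred only from the numbers quoted above, not an official specification):

```python
# Average energy gain per cavity in the new eight-cavity stage, inferred
# from the 50 MeV and 300 MeV figures quoted in the article.
low_mev, high_mev, extra_cavities = 50, 300, 8
gain_per_cavity = (high_mev - low_mev) / extra_cavities
print(gain_per_cavity)  # 31.25 MeV per cavity, on average
```

That is, each of the eight additional superconducting cavities contributes roughly 31 MeV on average.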

Electronics for IOTA

Chip Edstrom

### FAST forward

The third phase of FAST, once IOTA is complete, will be the construction of the proton injector.

“FAST is unique because we will specifically target creating high-intensity proton beams,” Valishev says.

This high-intensity proton beam research will directly translate to improving research into elusive particles called neutrinos, Fermilab’s current focus.

“In five to 10 years, you’ll be talking to a neutrino guy and they’ll go, ‘I don’t know what the accelerator guys did, but it’s fabulous. We’re getting more neutrinos per hour than we ever thought we would,’” Broemmelsiek says.

Creating new accelerator technology is often an overlooked area in particle physics, but the freedom to try out new ideas and discover how to build better machines for research is inherently rewarding for people who work at FAST.

“Our business is science, and we’re supposed to make science, and we work really hard to do that,” Broemmelsiek says. “But it’s also just plain ol’ fun.”

## August 09, 2017

### Tommaso Dorigo - Scientificblogging

Higgs Decays To Tau Leptons: CMS Sees Them First
I have recently been reproached by colleagues who are members of the competing ATLAS experiment for misusing the word "see" in this blog in the context of searches for physics signals. That was because I reported that CMS recently produced a very nice result where we measure the rate of H->bb decays in events where the Higgs boson recoils against an energetic jet; that signal is not statistically significant, so they could argue that CMS did not "see" anything, as I wrote in the blog title.

## August 08, 2017

### ZapperZ - Physics and Physicists

Hyperfine Splitting of Anti-Hydrogen Is Just Like Ordinary Hydrogen
More evidence that the antimatter world is practically identical to our regular matter world. The ALPHA collaboration at CERN has reported the first ever measurement of the anti-hydrogen hyperfine spectrum, and it is consistent with that measured for hydrogen.

Now, they have used microwaves to flip the spin of the positron. This resulted not only in the first precise determination of the antihydrogen hyperfine splitting, but also the first antimatter transition line shape, a plot of the spin flip probability versus the microwave frequency.

“The data reveal clear and distinct signatures of two allowed transitions, from which we obtain a direct, magnetic-field-independent measurement of the hyperfine splitting,” the researchers said.

“From a set of trials involving 194 detected atoms, we determine a splitting of 1,420.4 ± 0.5 MHz, consistent with expectations for atomic hydrogen at the level of four parts in 10,000.”
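A quick sanity check on those numbers (this snippet is mine; the hydrogen value of 1420.405751768 MHz is the standard 21 cm line figure, not something from the paper):

```python
# Compare the quoted antihydrogen splitting with ordinary hydrogen's.
measured_mhz, sigma_mhz = 1420.4, 0.5   # ALPHA's value and uncertainty
hydrogen_mhz = 1420.405751768           # hydrogen hyperfine (21 cm) line

relative_uncertainty = sigma_mhz / hydrogen_mhz   # the "four parts in 10,000"
deviation = abs(measured_mhz - hydrogen_mhz) / hydrogen_mhz
print(round(relative_uncertainty * 1e4, 1), round(deviation * 1e4, 2))  # 3.5 0.04
```

So the 0.5 MHz uncertainty is indeed about 3.5 parts in 10,000, and the central value sits well inside it.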

I am expecting a lot more studies of these anti-hydrogen atoms, especially now that they have a very reliable way of sustaining these things.

The paper is open access in Nature, so you should be able to read the entire thing for free.

Zz.

### Symmetrybreaking - Fermilab/SLAC

A new search for dark matter 6800 feet underground

Prototype tests of the future SuperCDMS SNOLAB experiment are in full swing.

When an extraordinarily sensitive dark matter experiment goes online at one of the world’s deepest underground research labs, the chances are better than ever that it will find evidence for particles of dark matter—a substance that makes up 85 percent of all matter in the universe but whose constituents have never been detected.

The heart of the experiment, called SuperCDMS SNOLAB, will be one of the most sensitive detectors for hypothetical dark matter particles called WIMPs, short for “weakly interacting massive particles.” SuperCDMS SNOLAB is one of two next-generation experiments (the other one being an experiment called LZ) selected by the US Department of Energy and the National Science Foundation to take the search for WIMPs to the next level, beginning in the early 2020s.

“The experiment will allow us to enter completely unexplored territory,” says Richard Partridge, head of the SuperCDMS SNOLAB group at the Kavli Institute for Particle Astrophysics and Cosmology, a joint institute of Stanford University and SLAC National Accelerator Laboratory. “It’ll be the world’s most sensitive detector for WIMPs with relatively low mass, complementing LZ, which will look for heavier WIMPs.”

The experiment will operate deep underground at Canadian laboratory SNOLAB inside a nickel mine near the city of Sudbury, where 6800 feet of rock provide a natural shield from high-energy particles from space, called cosmic rays. This radiation would not only cause unwanted background in the detector; it would also create radioactive isotopes in the experiment’s silicon and germanium sensors, making them useless for the WIMP search. That’s also why the experiment will be assembled from major parts at its underground location.

A detector prototype is currently being tested at SLAC, which oversees the efforts of the SuperCDMS SNOLAB project.

### Colder than the universe

The only reason we know dark matter exists is that its gravity pulls on regular matter, affecting how galaxies rotate and light propagates. But researchers believe that if WIMPs exist, they could occasionally bump into normal matter, and these collisions could be picked up by modern detectors.

SuperCDMS SNOLAB will use germanium and silicon crystals in the shape of oversized hockey pucks as sensors for these sporadic interactions. If a WIMP hits a germanium or silicon atom inside these crystals, two things will happen: The WIMP will deposit a small amount of energy, causing the crystal lattice to vibrate, and it’ll create pairs of electrons and electron deficiencies that move through the crystal and alter its electrical conductivity. The experiment will measure both responses.

“Detecting the vibrations is very challenging,” says KIPAC’s Paul Brink, who oversees the detector fabrication at Stanford. “Even the smallest amounts of heat cause lattice vibrations that would make it impossible to detect a WIMP signal. Therefore, we’ll cool the sensors to about one hundredth of a Kelvin, which is much colder than the average temperature of the universe.”

These chilly temperatures give the experiment its name: CDMS stands for “Cryogenic Dark Matter Search.” (The prefix “Super” indicates that the experiment is more sensitive than previous detector generations.)

The use of extremely cold temperatures will be paired with sophisticated electronics, such as transition-edge sensors that switch from a superconducting state of zero electrical resistance to a normal-conducting state when a small amount of energy is deposited in the crystal, as well as superconducting quantum interference devices, or SQUIDs, that measure these tiny changes in resistance.

The experiment will initially have four detector towers, each holding six crystals. For each crystal material—silicon and germanium—there will be two different detector types, called high-voltage (HV) and interleaved Z-sensitive ionization phonon (iZIP) detectors. Future upgrades can further boost the experiment’s sensitivity by increasing the number of towers to 31, corresponding to a total of 186 sensors.
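The quoted counts are self-consistent, as a trivial bookkeeping check (using only figures from the article) confirms:

```python
# Sensor-count bookkeeping from the figures in the article.
crystals_per_tower = 6
initial_towers, upgraded_towers = 4, 31
print(initial_towers * crystals_per_tower)          # 24 sensors at the start
assert upgraded_towers * crystals_per_tower == 186  # matches the quoted total
```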

### Working hand in hand

The work under way at SLAC serves as a system test for the future SuperCDMS SNOLAB experiment. Researchers are testing the four different detector types, the way they are integrated into towers, their superconducting electrical connectors and the refrigerator unit that cools them down to a temperature of almost absolute zero.

“These tests are absolutely crucial to verify the design of these new detectors before they are integrated in the experiment underground at SNOLAB,” says Ken Fouts, project manager for SuperCDMS SNOLAB at SLAC. “They will prepare us for a critical DOE review next year, which will determine whether the project can move forward as planned.” DOE is expected to cover about half of the project costs, with the other half coming from NSF and a contribution from the Canadian Foundation for Innovation.

Important work is progressing at all partner labs of the SuperCDMS SNOLAB project. Fermi National Accelerator Laboratory is responsible for the cryogenics infrastructure and the detector shielding—both will enable searching for faint WIMP signals in an environment dominated by much stronger unwanted background signals. Pacific Northwest National Laboratory will lend its expertise in understanding background noise in highly sensitive precision experiments. A number of US universities are involved in various aspects of the project, including detector fabrication, tests, data analysis and simulation.

The project also benefits from international partnerships with institutions in Canada, France, the UK and India. The Canadian partners are leading the development of the experiment’s data acquisition and will provide the infrastructure at SNOLAB.

“Strong partnerships create a lot of synergy and make sure that we’ll get the best scientific value out of the project,” says Fermilab’s Dan Bauer, spokesperson of the SuperCDMS collaboration, which consists of 109 scientists from 22 institutions, including numerous universities. “Universities have lots of creative students and principal investigators, and their talents are combined with the expertise of scientists and engineers at the national labs, who are used to successfully managing and building large projects.”

SuperCDMS SNOLAB will be the fourth generation of experiments, following CDMS-I at Stanford, CDMS-II at the Soudan mine in Minnesota, and a first version of SuperCDMS at Soudan, which completed operations in 2015.

“Over the past 20 years we’ve been pushing the limits of our detectors to make them more and more sensitive for our search for dark matter particles,” says KIPAC’s Blas Cabrera, project director of SuperCDMS SNOLAB. “Understanding what constitutes dark matter is as fundamental and important today as it was when we started, because without dark matter none of the known structures in the universe would exist—no galaxies, no solar systems, no planets and no life itself.”

### John Baez - Azimuth

Applied Algebraic Topology 2017

In the comments on this blog post I’m taking some notes on this conference:

Applied Algebraic Topology 2017, August 8-12, 2017, Hokkaido University, Sapporo, Japan.

Unfortunately these notes will not give you a good summary of the talks—and almost nothing about applications of algebraic topology. Instead, I seem to be jotting down random cool math facts that I’m learning and don’t want to forget.

## August 07, 2017

### CERN Bulletin

Interfon

Cooperative open to international civil servants. We welcome you to discover the advantages and discounts negotiated with our suppliers either on our website www.interfon.fr or at our information office located at CERN, on the ground floor of bldg. 504, open Monday through Friday from 12.30 to 15.30.

### CERN Bulletin

Yoga Club

The yoga club's activities resume on 1 September

Yoga, sophrology, tai chi, meditation

Are you looking for well-being, serenity,
physical fitness, flexibility of body and mind?

Would you like to reduce your stress?

Join the yoga club!

Classes every day of the week,
10 different teachers

cern.ch/club-yoga/

### CERN Bulletin

Cine Club

Wednesday 9 August 2017 at 20.00
CERN Council Chamber

The Fifth Element

Directed by Luc Besson
France, 1997, 126 min

Two hundred and fifty years in the future, life as we know it is threatened by the arrival of Evil. Only the Fifth Element can stop the Evil from extinguishing life, as it tries to do every five thousand years. She is assisted by a former elite commando turned cab driver, Korben Dallas, who is, in turn, helped by Prince/Arsenio clone, Ruby Rhod. Unfortunately, Evil is being assisted by Mr. Zorg, who seeks to profit from the chaos that Evil will bring, and his alien mercenaries.

Original version English; French subtitles

*  *  *  *  *  *  *  *

Wednesday 16 August 2017 at 20.00
CERN Council Chamber

Mad Max 2

Directed by George Miller
Australia, 1982, 94 min

A former Australian policeman now living in the post-apocalyptic Australian outback as a warrior agrees to help a community of survivors living in a gasoline refinery to defend them and their gasoline supplies from evil barbarian warriors.

Original version English; French subtitles

*  *  *  *  *  *  *  *

Wednesday 23 August 2017 at 20.00
CERN Council Chamber

THX 1138

Directed by George Lucas
USA, 1971, 86 min

The human race has been relocated to an underground city beneath the Earth's surface. There, the population is entertained by holographic TV which broadcasts sex and violence, and a robotic police force enforces the law. In the underground city, all citizens are drugged to control their emotions and behaviour, and sex is a crime. Factory worker THX 1138 stops taking the drugs and breaks the law when he finds himself falling in love with his room-mate LUH 3417; he is imprisoned when LUH 3417 becomes pregnant. Escaping from jail with illegal programmer SEN 5241 and a hologram named SRT, THX 1138 goes in search of LUH 3417 and escapes to the surface, whilst being pursued by robotic policemen.

Original version English; French subtitles

### CERN Bulletin

Offers for our members

Summer is here, enjoy our offers for the water parks!

Walibi:

Tickets "Zone terrestre": 24 € instead of 30 €.

Bonus! Free for children under 100 cm, with limited access to the attractions.

Free car park.

*  *  *  *  *  *  *  *

Aquaparc:

Day ticket:
-  Children: 33 CHF instead of 39 CHF

Bonus! Free for children under 5 years old.

### CERN Bulletin

Golf Club

Would you like to learn a new sport and meet new people?

The CERN Golf Club organises golf lessons for beginners starting in August or September.

The lesson series consists of six weekly lessons of 1h30 each, in a group of 6 people, given by the instructor Cedric Steinmetz at the Jiva Hill golf course in Crozet: http://www.jivahillgolf.com

The cost for the golf lessons is 40 euros for CERN employees or family members plus the golf club membership fee of 30 CHF.

## August 05, 2017

### The n-Category Cafe

Instantaneous Dimension of Finite Metric Spaces via Magnitude and Spread

In June I went to the following conference.

This was held at the Będlewo Conference Centre which is run by the Polish Academy of Sciences’ Institute of Mathematics. Like Oberwolfach it is kind of in the middle of nowhere, being about half an hour’s bus ride from Poznan. (As our excursion guide told us, Poznan is 300km from anywhere: 300 km from Warsaw, 300 km from Berlin, 300 km from the sea and 300 km from the mountains.) You get to eat and drink in the palace pictured below; the seminar rooms and accommodation are in a somewhat less grand building out of shot of the photo.

I gave a 20-minute, magnitude-related talk. You can download the slides below. Do try the BuzzFeed-like quiz at the end. How many of the ten spaces can you identify just from their dimension profile?

To watch the animation I think that you will have to use Acrobat Reader. If you don’t want to use that then there’s a movie-free version.

Here’s the abstract.

Some spaces seem to have different dimensions at different scales. A long thin strip might appear one-dimensional at a distance, then two-dimensional when zoomed in on, but when zoomed in on even closer it is seen to be made of a finite array of points, so at that scale it seems zero-dimensional. I will present a way of quantifying this phenomenon.

The main idea is to think of dimension as corresponding to growth rate of size: when you double distances, a line will double in size and a square will quadruple in size. You then just need some good notions of size of metric spaces. One such notion is ‘magnitude’, which was introduced by Leinster using category-theoretic ideas, but was found to have links to many other areas of maths such as biodiversity and potential theory. There’s a closely related, but computationally more tractable, family of notions of size called ‘spreads’ which I introduced following connections with biodiversity.

Meckes showed that the asymptotic growth rate of the magnitude of a metric space is the Minkowski dimension (i.e. the usual dimension for squares and lines and the usual fractal dimension for things like Cantor sets). But this is zero for finite metric spaces. However, by considering the growth rate non-asymptotically you get interesting-looking results for finite metric spaces, such as the phenomenon described in the first paragraph.
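The growth-rate idea can be made concrete in a few lines. This is my own illustrative sketch, not code from the talk: the magnitude of a finite metric space scaled by t is the sum of the weighting w solving Zw = 1 with Z_ij = exp(-t d_ij), and the instantaneous dimension is the logarithmic derivative d log|tX| / d log t, which can be computed exactly from w:

```python
import numpy as np

def magnitude_and_dimension(dist, t):
    """Magnitude |tX| of a finite metric space with distance matrix `dist`,
    together with its instantaneous dimension d(log|tX|)/d(log t).
    Since dZ/dt = -(D o Z) (Hadamard product), the derivative of the
    magnitude M = 1^T Z^{-1} 1 is dM/dt = w^T (D o Z) w with w = Z^{-1} 1."""
    D = np.asarray(dist, dtype=float)
    Z = np.exp(-t * D)                        # similarity matrix
    w = np.linalg.solve(Z, np.ones(len(Z)))   # magnitude weighting
    mag = float(w.sum())
    dmag_dt = float(w @ (D * Z) @ w)
    return mag, t * dmag_dt / mag

# 300 points evenly spaced on a unit segment: roughly 1-dimensional at
# intermediate scales, roughly 0-dimensional when zoomed far out or far in.
pts = np.linspace(0.0, 1.0, 300)
dist = np.abs(pts[:, None] - pts[None, :])
for t in (0.01, 17.0, 10000.0):
    mag, dim = magnitude_and_dimension(dist, t)
    print(f"t = {t:g}: magnitude = {mag:.2f}, dimension = {dim:.2f}")
```

At small t the points merge into what looks like a single point (dimension near 0), at large t they resolve into isolated points (again near 0), and in between the profile climbs toward 1, exactly the strip phenomenon from the abstract.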

I have blogged about instantaneous dimension before at this post. One connection with applied topology is that, as in persistent homology, one considers what happens to a metric space as you scale the metric.

The talk was in the smallest room of three parallel talks, so I had a reasonably small audience. However, it was very nice that almost everyone who was in the talk came up and spoke to me about it afterwards; some even told me how I could calculate magnitude of large metric spaces much faster! For instance Brad Nelson showed me how you can use iterative methods, such as the Krylov subspace method, for solving large linear systems numerically. This is much faster than just naively asking Maple to solve the linear system.
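The speed-up Brad Nelson suggested can be sketched as follows. Since the magnitude is 1ᵀw where Zw = 1, a single iterative linear solve, such as conjugate gradients (a Krylov-subspace method), replaces the dense inversion. A hypothetical illustration under my own assumptions, not his actual code:

```python
import numpy as np
from scipy.sparse.linalg import cg

def magnitude_cg(dist, t):
    """Magnitude of a finite metric space without forming a dense inverse:
    solve the single linear system Z w = 1 by conjugate gradients and sum
    the weighting w. For points in Euclidean space the similarity matrix
    Z = exp(-t d) is symmetric positive definite, so CG applies."""
    Z = np.exp(-t * np.asarray(dist, dtype=float))
    w, info = cg(Z, np.ones(len(Z)))
    assert info == 0, "CG did not converge"
    return float(w.sum())

# Hypothetical example: 400 random points in the unit cube.
rng = np.random.default_rng(0)
pts = rng.random((400, 3))
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
print(magnitude_cg(dist, 10.0))
```

A direct dense solve costs O(n³), while each CG iteration is a single matrix-vector product, so for large, well-conditioned similarity matrices the iterative route wins comfortably.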

Anyway, do say below how well you did in the quiz!

### Clifford V. Johnson - Asymptotia

The Big USC News You Haven’t Heard…

So here's some big USC news that you're probably not hearing about elsewhere. I think it's the best thing that's happened on campus for a long time, and it's well worth noting. As of today (4th August, when I wrote this), there's a Trader Joe's on campus!

It opened (relatively quietly) today and I stopped by on my way home to pick up a few things - something I've fantasized about doing for some time. It's a simple thing but it's also a major thing in my opinion. Leaving aside the fact that I can now sometimes get groceries on the way home (with a subway stop just a couple of blocks away) - and also now more easily stock up my office with long workday essentials like Scottish shortbread and sardines in olive oil, there's another reason this is big news. This part of the city (and points south) simply doesn't have as many good options (when it comes to healthy food) as other parts of the city. It is still big news when a grocery store like this opens south of the 10 freeway. In fact, aside from over on the West side (where the demographic changes significantly), there were *no* Trader Joe's stores south of the 10 until this one opened today**. (Yes, in 2017 - I can wait while you check your calendar.) I consider this at least as significant as (if not more significant than) the Whole Foods opening in downtown at [...] Click to continue reading this post

The post The Big USC News You Haven’t Heard… appeared first on Asymptotia.

### The n-Category Cafe

The Rise and Spread of Algebraic Topology

People have been using algebraic topology in data analysis these days, so we’re starting to see conferences like this:

I’m giving the first talk at this one. I’ve done a lot of work on applied category theory, but only a bit on applied algebraic topology. It was tempting to smuggle in some categories, operads and props under the guise of algebraic topology. But I decided it would be more useful, as a kind of prelude to the conference, to say a bit about the overall history of algebraic topology, and its inner logic: how it was inevitably driven to categories, and then 2-categories, and then $\infty$-categories.

This may be the least ‘applied’ of all the talks at this conference, but I’m hoping it will at least trigger some interesting thoughts. We don’t want the ‘applied’ folks to forget the grand view that algebraic topology has to offer!

Here are my talk slides:

Abstract. As algebraic topology becomes more important in applied mathematics it is worth looking back to see how this subject has changed our outlook on mathematics in general. When Noether moved from working with Betti numbers to homology groups, she forced a new outlook on topological invariants: namely, they are often functors, with two invariants counting as ‘the same’ if they are naturally isomorphic. To formalize this it was necessary to invent categories, and to formalize the analogy between natural isomorphisms between functors and homotopies between maps it was necessary to invent 2-categories. These are just the first steps in the ‘homotopification’ of mathematics, a trend in which algebra more and more comes to resemble topology, and ultimately abstract ‘spaces’ (for example, homotopy types) are considered as fundamental as sets. It is natural to wonder whether topological data analysis is a step in the spread of these ideas into applied mathematics, and how the importance of ‘robustness’ in applications will influence algebraic topology.

I thank Mike Shulman for some help with model categories and quasicategories. Any mistakes are, of course, my own fault.

### John Baez - Azimuth

The Rise and Spread of Algebraic Topology

People have been using algebraic topology in data analysis these days, so we’re starting to see conferences like this:

Applied Algebraic Topology 2017, August 8-12, 2017, Hokkaido University, Sapporo, Japan.

I’m giving the first talk at this one. I’ve done a lot of work on applied category theory, but only a bit on applied algebraic topology. It was tempting to smuggle in some categories, operads and props under the guise of algebraic topology. But I decided it would be more useful, as a kind of prelude to the conference, to say a bit about the overall history of algebraic topology, and its inner logic: how it was inevitably driven to categories, and then 2-categories, and then ∞-categories.

This may be the least ‘applied’ of all the talks at this conference, but I’m hoping it will at least trigger some interesting thoughts. We don’t want the ‘applied’ folks to forget the grand view that algebraic topology has to offer!

Here are my talk slides:

Abstract. As algebraic topology becomes more important in applied mathematics it is worth looking back to see how this subject has changed our outlook on mathematics in general. When Noether moved from working with Betti numbers to homology groups, she forced a new outlook on topological invariants: namely, they are often functors, with two invariants counting as ‘the same’ if they are naturally isomorphic. To formalize this it was necessary to invent categories, and to formalize the analogy between natural isomorphisms between functors and homotopies between maps it was necessary to invent 2-categories. These are just the first steps in the ‘homotopification’ of mathematics, a trend in which algebra more and more comes to resemble topology, and ultimately abstract ‘spaces’ (for example, homotopy types) are considered as fundamental as sets. It is natural to wonder whether topological data analysis is a step in the spread of these ideas into applied mathematics, and how the importance of ‘robustness’ in applications will influence algebraic topology.

I thank Mike Shulman for some help with model categories and quasicategories. Any mistakes are, of course, my own fault.

## August 04, 2017

### Clifford V. Johnson - Asymptotia

Future Crowds…

Yeah, I still hate doing crowd scenes. (And the next panel is an even wider shot. Why do I do this to myself?)

Anyway, this is a glimpse of the work I'm doing on the final colour for a short science fiction story I wrote and drew for an anthology collection to appear soon. I mentioned it earlier. (Can't say more yet because it's all hush-hush still, involving lots of fancy writers I've really no business keeping company with.) I've [...] Click to continue reading this post

The post Future Crowds… appeared first on Asymptotia.

### Lubos Motl - string vacua and pheno

T2K: a two-sigma evidence supporting CP-violation in neutrino sector
Let me write a short blog post by a linker, not a thinker:
T2K presents hint of CP violation by neutrinos
The strange acronym T2K stands for Tokai to Kamioka. So the T2K experiment is located in Japan but the collaboration is heavily multi-national. It works much like the older K2K, KEK to Kamioka. Indeed, it's no coincidence that Kamioka sounds like Kamiokande. Average Japanese people probably tend to know the former, average physicists tend to know the latter. ;-)

Dear physicists, Kamiokande was named after Kamioka, not vice versa! ;-)

Muon neutrinos are created at the source.

These muon neutrinos travel underground through 295 kilometers of rock and have the opportunity to change into electron neutrinos.

In 2011, T2K claimed evidence for neutrino oscillations powered by $$\theta_{13}$$, the last and least "usual" real angle in the mixing matrix. In Summer 2017, we still believe that this angle is nonzero, like the other two, $$\theta_{12}$$ and $$\theta_{23}$$, and F-theory, a version of string theory, had predicted its approximate magnitude rather correctly.
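For orientation, the standard two-flavor vacuum oscillation formula shows why the 295 km baseline works so well. The mass splitting (~2.5×10⁻³ eV²) and beam energy (~0.6 GeV) below are typical published values, assumed here for illustration rather than taken from this post:

```python
import math

def oscillation_phase(dm2_ev2, L_km, E_GeV):
    """Phase of the two-flavor vacuum oscillation probability
    P = sin^2(2*theta) * sin^2(1.267 * dm2 * L / E),
    with dm2 in eV^2, L in km and E in GeV."""
    return 1.267 * dm2_ev2 * L_km / E_GeV

# T2K baseline: 295 km of rock (from the post). With the atmospheric mass
# splitting and a ~0.6 GeV beam, the phase lands close to pi/2, i.e. the
# experiment sits near the first oscillation maximum.
phase = oscillation_phase(2.5e-3, 295.0, 0.6)
print(round(phase / (math.pi / 2), 2))
```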

In 2013, they found more than 7-sigma evidence for electron-muon neutrino oscillations and received a Breakthrough Prize for that.

By some physical and technical arrangements, they are able to look at the oscillations of antineutrinos as well and measure all the processes. The handedness (left-handed or right-handed) of the neutrinos we know is correlated with their being neutrinos or antineutrinos. But this correlation makes it possible to conserve the CP-symmetry. If you replace neutrinos with antineutrinos and reflect all the reality and images in the mirror, so that left-handed become right-handed, the allowed left-handed neutrinos become the allowed right-handed antineutrinos so everything is fine.

But we know that the CP-symmetry is also broken by elementary particles in Nature – even though the spectrum of known particles and their allowed polarizations doesn't make this breaking unavoidable. The only experimentally confirmed source of CP-violation we know is the complex phase in the CKM matrix describing the relationship between upper-type and lower-type quark mass eigenstates.

Well, T2K has done some measurement and they have found some two-sigma evidence – deviation from the CP-symmetric predictions – supporting the claim that a similar CP-violating phase $$\delta_{CP}$$, or another CP-violating effect, is nonzero even in the neutrino sector. So if it's true, the neutrinos' masses are qualitatively analogous to the quark masses. They have all the twists and phases and violations of naive symmetries that are allowed by the basic consistency.

Needless to say, the two-sigma evidence is very weak. Most such "weak caricatures of a discovery" eventually turn out to be coincidences and flukes. If they managed to collect 10 times more data and the two-sigma deviation really did follow from a real effect, a symmetry breaking, then it would likely be enough to discover the CP-violation in the neutrino sector at 5 sigma – which is considered sufficient evidence for experimental physicists to brag, get drunk, scream "discovery, discovery", accept a prize, and get drunk again (note that the 5-sigma process has 5 stages).
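The arithmetic behind "10 times more data" is the usual statistical rule of thumb (not a number from the experiment): for a real effect, significance grows like the square root of the data size, since the signal grows like N while the statistical noise grows like sqrt(N):

```python
import math

def projected_significance(current_sigma, data_factor):
    """Rule-of-thumb significance after collecting `data_factor` times
    more data, assuming the excess is a real effect (signal ~ N,
    statistical noise ~ sqrt(N))."""
    return current_sigma * math.sqrt(data_factor)

# A real 2-sigma effect with 10 times more data would reach roughly
# 6.3 sigma, comfortably past the 5-sigma discovery threshold.
print(round(projected_significance(2.0, 10), 1))
```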

Ivan Mládek, Japanese [people] in [Czech town of] Jablonec, "Japonci v Jablonci". Japanese men are walking through a Jablonec bijou exhibition and buying corals for the government and the king. The girl sees that one of them has a crush on her. He gives her corals and she's immediately his. I don't understand it, you, my Japanese boy, even though you are not a man of Jablonec, I will bring you home. I will feed you nicely, to turn you into a man, and I won't let you leave to any Japan after that again. Visual arts by fifth-graders.

So while I think that most two-sigma claims ultimately fade away, this particular candidate for a discovery sounds mundane enough so that it could be true and 2 sigma could be enough for you to believe it is true. Theoretically speaking, there is no good reason to think that the complex phase should be absent in the neutrino sector. If quarks and leptons differ in such aspects, I think that neutrinos tend to have larger and more generic angles than the quarks, not vice versa.

### ZapperZ - Physics and Physicists

First Observation of Neutrinos Bouncing Off Atomic Nucleus
An amazing feat out of Oak Ridge.

And it’s really difficult to detect these gentle interactions. Collar’s group bombarded their detector with trillions of neutrinos per second, but over 15 months, they only caught a neutrino bumping against an atomic nucleus 134 times. To block stray particles, they put 20 feet of steel and a hundred feet of concrete and gravel between the detector and the neutrino source. The odds that the signal was random noise are less than 1 in 3.5 million—surpassing particle physicists’ usual gold standard for announcing a discovery. For the first time, they saw a neutrino nudge an entire atomic nucleus.
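As a quick check using standard Gaussian statistics (my own sketch, not from the article), the "1 in 3.5 million" figure is exactly the one-sided tail probability of the 5-sigma gold standard:

```python
import math

def one_sided_p(z):
    """One-sided Gaussian tail probability of a z-sigma excess."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# A 5-sigma fluctuation has a tail probability of about 2.9e-7,
# i.e. roughly 1 in 3.5 million, matching the quoted odds.
print(1.0 / one_sided_p(5.0))
```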

Currently, the entire paper is available from the Science website.

Zz.

## August 03, 2017

### Lubos Motl - string vacua and pheno

Dark Energy Survey rivals the accuracy of Planck
Yesterday, the Fermilab brought us the press release
Dark Energy Survey reveals most accurate measurement of dark matter structure in the universe
celebrating a new result by the Dark Energy Survey (DES), a multinational collaboration studying dark matter and dark energy using a telescope in Chile, at an altitude of 2,200 meters.

DES wants to produce results similar to Planck – the modern sibling of WMAP and COBE, a satellite that studies the cosmic microwave background temperature in various directions very finely – but its method is very different. The DES telescope looks at things in the infrared – but it is looking at "regular things" such as the number of galaxy clusters, weak gravitational lensing, type Ia supernovae, and baryon acoustic oscillations.

It sounds incredible to me but the DES transnational team is capable of detecting tiny distortions of the images of distant galaxies that are caused by gravitational lensing and by measuring how much distortion there is in a given direction, they determine the density of dark matter in that direction.

At the end, they determine some of the same cosmological parameters as Planck, e.g. that dark energy makes about 70 percent of the energy density of our Universe in average. And especially if you focus on a two-dimensional plane, you may see a slight disagreement between Planck's measurement based on the CMB and "pretty much all other methods" to measure the cosmological parameters.

Planck (the blue blob) implies a slightly higher fraction of matter in the Universe, perhaps 30-40 percent, and a slightly higher clumpiness of matter than DES, whose matter fraction is between 24-30 percent. Meanwhile, all the measurements aside from the truly historical "pure CMB" Planck measurement – a group which includes DES and Planck's own analysis of clusters – seem to be in better agreement with each other.

So it's disappointing that cosmology still allows us to measure the fraction of matter just as "something between 25 and 40 percent or so" – the accuracy is lousier than we used to say. On the other hand, the disagreement is just 1.4-2.3 sigma, depending on what is exactly considered and how. This is a very low signal-to-noise ratio – the disagreement is very far from a discovery (we often like 5 sigma).

More importantly, even if the disagreement could be calculated to be 4 sigma or something like that, what's troubling is that such a disagreement gives us almost no clue about "how we should modify our standard cosmological model" to improve the fit. An extra sterile neutrino could be the thing we need. Or some cosmic strings added to the Universe. Or a modified profile for some galactic dark matter. But maybe some holographic MOND-like modification of gravity is desirable. Or a different model of dark energy – some variable cosmological constant. Or something totally different – if you weren't impressed by the fundamental diversity of the possible explanations I have mentioned.

The disagreement in one or two parameters is just way too little information to give us (by us, I mean the theorists) useful clues. So even if I can imagine that in some distant future, perhaps in the year 2200, people will already agree that our model of the cosmological constant was seriously flawed in some way I can't imagine now, the observations provide us with no guide telling us where we should go from here.

Aside from the DES telescope, Chile has similar compartments and colors on their national flag as Czechia and they also have nice protocol pens with pretty good jewels that every wise president simply has to credibly appreciate. When I say "credibly", it means not just by words and clichés but by acts, too.

So even if the disagreement were 4 sigma, I just wouldn't switch to a revolutionary mode – partly because the statistical significance isn't quite persuasive, partly because I don't know what kind of a revolution I should envision or participate in.

That's why I prefer to interpret the result of DES as something that isn't quite new or ground-breaking but that still shows how nontrivially we understand the life of the Universe that has been around for 13.800002017 ;-) billion years so far and how very different ways to interpret the fields in the Universe seem to yield (almost) the same outcome.

You may look for some interesting relevant tweets by cosmologist Shaun Hotchkiss.

### Symmetrybreaking - Fermilab/SLAC

Our clumpy cosmos

The Dark Energy Survey reveals the most accurate measurement of dark matter structure in the universe.

Imagine planting a single seed and, with great precision, being able to predict the exact height of the tree that grows from it. Now imagine traveling to the future and snapping photographic proof that you were right.

If you think of the seed as the early universe, and the tree as the universe the way it looks now, you have an idea of what the Dark Energy Survey (DES) collaboration has just done. In a presentation today at the American Physical Society Division of Particles and Fields meeting at the US Department of Energy’s (DOE) Fermi National Accelerator Laboratory, DES scientists will unveil the most accurate measurement ever made of the present large-scale structure of the universe.

These measurements of the amount and “clumpiness” (or distribution) of dark matter in the present-day cosmos were made with a precision that, for the first time, rivals that of inferences from the early universe by the European Space Agency’s orbiting Planck observatory. The new DES result (the tree, in the above metaphor) is close to “forecasts” made from the Planck measurements of the distant past (the seed), allowing scientists to understand more about the ways the universe has evolved over 14 billion years.

“This result is beyond exciting,” says Scott Dodelson of Fermilab, one of the lead scientists on this result. “For the first time, we’re able to see the current structure of the universe with the same clarity that we can see its infancy, and we can follow the threads from one to the other, confirming many predictions along the way.”

Most notably, this result supports the theory that 26 percent of the universe is in the form of mysterious dark matter and that space is filled with an also-unseen dark energy, which is causing the accelerating expansion of the universe and makes up 70 percent.

Paradoxically, it is easier to measure the large-scale clumpiness of the universe in the distant past than it is to measure it today. In the first 400,000 years following the Big Bang, the universe was filled with a glowing gas, the light from which survives to this day. Planck’s map of this cosmic microwave background radiation gives us a snapshot of the universe at that very early time. Since then, the gravity of dark matter has pulled mass together and made the universe clumpier over time. But dark energy has been fighting back, pushing matter apart. Using the Planck map as a start, cosmologists can calculate precisely how this battle plays out over 14 billion years.
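The "battle" between gravitational clumping and dark energy can be sketched with the standard linear growth factor of structure formation. A minimal illustration, assuming a flat universe with Ωm = 0.3 and ΩΛ = 0.7 (round numbers close to, but not exactly, the fractions quoted in the article):

```python
import math
from scipy.integrate import quad

# Assumed cosmological parameters for a flat matter + Lambda universe.
OM, OL = 0.3, 0.7

def E(a):
    """Dimensionless Hubble rate H(a)/H0 as a function of scale factor a."""
    return math.sqrt(OM / a**3 + OL)

def growth_factor(a):
    """Linear growth factor D(a) = (5/2) OM E(a) * integral_0^a da'/(a' E(a'))^3,
    normalized so that D(a) = a in a purely matter-dominated universe."""
    integral, _ = quad(lambda x: 1.0 / (x * E(x))**3, 0.0, a)
    return 2.5 * OM * E(a) * integral

# Dark energy suppresses growth: today (a = 1) perturbations have grown
# less than in a matter-only universe, where D(1) would equal 1.
print(round(growth_factor(1.0), 3))
```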

“The DES measurements, when compared with the Planck map, support the simplest version of the dark matter/dark energy theory,” says Joe Zuntz, of the University of Edinburgh, who worked on the analysis. “The moment we realized that our measurement matched the Planck result within 7 percent was thrilling for the entire collaboration.”

This map of dark matter is made from gravitational lensing measurements of 26 million galaxies in the Dark Energy Survey. The map covers about 1/30th of the entire sky and spans several billion light-years in extent. Red regions have more dark matter than average, blue regions less dark matter.

Chihway Chang of the Kavli Institute for Cosmological Physics at the University of Chicago and the DES collaboration.

The primary instrument for DES is the 570-megapixel Dark Energy Camera, one of the most powerful in existence, able to capture digital images of light from galaxies eight billion light-years from Earth. The camera was built and tested at Fermilab, the lead laboratory on the Dark Energy Survey, and is mounted on the National Science Foundation’s 4-meter Blanco telescope, part of the Cerro Tololo Inter-American Observatory in Chile, a division of the National Optical Astronomy Observatory. The DES data are processed at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign.

Scientists on DES are using the camera to map an eighth of the sky in unprecedented detail over five years. The fifth year of observation will begin in August. The new results released today draw from data collected only during the survey’s first year, which covers 1/30th of the sky.

“It is amazing that the team has managed to achieve such precision from only the first year of their survey,” says National Science Foundation Program Director Nigel Sharp. “Now that their analysis techniques are developed and tested, we look forward with eager anticipation to breakthrough results as the survey continues.”

DES scientists used two methods to measure dark matter. First, they created maps of galaxy positions as tracers, and second, they precisely measured the shapes of 26 million galaxies to directly map the patterns of dark matter over billions of light-years using a technique called gravitational lensing.

To make these ultra-precise measurements, the DES team developed new ways to detect the tiny lensing distortions of galaxy images, an effect not even visible to the eye, enabling revolutionary advances in understanding these cosmic signals. In the process, they created the largest guide to spotting dark matter in the cosmos ever drawn (see image). The new dark matter map is 10 times the size of the one DES released in 2015 and will eventually be three times larger than it is now.

“It’s an enormous team effort and the culmination of years of focused work,” says Erin Sheldon, a physicist at the DOE’s Brookhaven National Laboratory, who co-developed the new method for detecting lensing distortions.

These results and others from the first year of the Dark Energy Survey will be released today online and announced during a talk by Daniel Gruen, NASA Einstein fellow at the Kavli Institute for Particle Astrophysics and Cosmology at DOE’s SLAC National Accelerator Laboratory, at 5 pm Central time. The talk is part of the APS Division of Particles and Fields meeting at Fermilab and will be streamed live.

The results will also be presented by Kavli fellow Elisabeth Krause of the Kavli Institute for Particle Astrophysics and Cosmology at SLAC at the TeV Particle Astrophysics Conference in Columbus, Ohio, on Aug. 9; and by Michael Troxel, postdoctoral fellow at the Center for Cosmology and AstroParticle Physics at Ohio State University, at the International Symposium on Lepton Photon Interactions at High Energies in Guangzhou, China, on Aug. 10. All three of these speakers are coordinators of DES science working groups and made key contributions to the analysis.

“The Dark Energy Survey has already delivered some remarkable discoveries and measurements, and they have barely scratched the surface of their data,” says Fermilab Director Nigel Lockyer. “Today’s world-leading results point forward to the great strides DES will make toward understanding dark energy in the coming years.”

## August 02, 2017

### ZapperZ - Physics and Physicists

RHIC Sees Another First
The quark-gluon plasma created at Brookhaven's Relativistic Heavy Ion Collider (RHIC) continues to produce a rich body of information. They have now announced that the quark-gluon plasma is the most rapidly spinning fluid ever produced.

Collisions with heavy ions—typically gold or lead—put lots of protons and neutrons in a small volume with lots of energy. Under these conditions, the neat boundaries of those particles break down. For a brief instant, quarks and gluons mingle freely, creating a quark-gluon plasma. This state of matter has not been seen since an instant after the Big Bang, and it has plenty of unusual properties. "It has all sorts of superlatives," Ohio State physicist Mike Lisa told Ars. "It is the most easily flowing fluid in nature. It's highly explosive, much more than a supernova. It's hotter than any fluid that's known in nature."
[...]
We can now add another superlative to the quark-gluon plasma's list of "mosts:" it can be the most rapidly spinning fluid we know of. Much of the study of the material has focused on the results of two heavy ions smacking each other head-on, since that puts the most energy into the resulting debris, and these collisions spit the most particles out. But in many collisions, the two ions don't hit each other head-on—they strike a more glancing blow.

It is a fascinating article, and you may read about the significance of this study, especially in relation to how it informs us on certain aspects of QCD symmetry.

But if you know me, I never fail to try to point something out that is more general in nature, and something that the general public should take note of. I like this statement in the article very much, and I'd like to highlight it here:

But a logical "should" doesn't always equal a "does," so it's important to confirm that the resulting material is actually spinning. And that's a rather large technical challenge when you're talking about a glob of material roughly the same size as an atomic nucleus.

This is what truly distinguishes science from other aspects of our lives. There are many instances, especially in politics, social policies, etc., where certain assertions are made and appear to be "obvious" or "logical", and yet these are simply statements made without any valid evidence to support them. I can think of many ("Illegal immigrants take away jobs", or "gay marriage undermines traditional marriage", etc.). Yet, no matter how "logical" these may appear to be, they are simply statements that are devoid of evidence to support them. Still, whenever they are uttered, many in the public accept them as FACTS or valid, without seeking or requiring evidence to support them. One may believe that "A should cause B", but DOES IT REALLY?

Luckily, this is NOT how it is done in science. No matter how obvious something is, or how well verified, there are always new boundaries to push and a retesting of the ideas, even ones that are known to be true under certain conditions. And a set of experimental evidence is the ONLY standard that will settle and verify any assertions and statements.

This is why everyone should learn science, not just for the material, but to understand the methodology and technique. It is too bad they don't require politicians to have such skills.

Zz.

### ZapperZ - Physics and Physicists

Is QM About To Revolutionize Biochemistry?
It is an intriguing thought, and if these authors are correct, a bunch of chemical reactions, even at higher temperatures, may be explained via quantum indistinguishability.

The worlds of chemistry and indistinguishable physics have long been thought of as entirely separate. Indistinguishability generally occurs at low temperatures while chemistry requires relatively high temperatures where objects tend to lose their quantum properties. As a result, chemists have long felt confident in ignoring the effects of quantum indistinguishability.

Today, Matthew Fisher and Leo Radzihovsky at the University of California, Santa Barbara, say that this confidence is misplaced. They show for the first time that quantum indistinguishability must play a significant role in some chemical processes even at ordinary temperatures. And they say this influence leads to entirely new chemical phenomena, such as isotope separation, and could also explain previously mysterious ones, such as the enhanced chemical activity of reactive oxygen species.

They have uploaded their paper on arXiv.

Of course, this is still preliminary, but it provides the motivation to really explore this aspect that had not been seriously considered before. And with this latest addition, it is just another example of where physics, especially QM, is being further explored in biology and chemistry.

Zz.

## August 01, 2017

### Symmetrybreaking - Fermilab/SLAC

Tuning in for science

The sprawling Square Kilometer Array radio telescope hunts signals from one of the quietest places on earth.

When you think of radios, you probably think of noise. But the primary requirement for building the world’s largest radio telescope is keeping things almost perfectly quiet.

Radio signals are constantly streaming to Earth from a variety of sources in outer space. Radio telescopes are powerful instruments that can peer into the cosmos—through clouds and dust—to identify those signals, picking them up like a signal from a radio station. To do it, they need to be relatively free from interference emitted by cell phones, TVs, radios and their kin.

That’s one reason the Square Kilometer Array is under construction in the Great Karoo, 400,000 square kilometers of arid, sparsely populated South African plain, along with a component in the Outback of Western Australia. The Great Karoo is also a prime location because of its high altitude—radio waves can be absorbed by atmospheric moisture at lower altitudes. SKA currently covers some 1320 square kilometers of the landscape.

Even in the Great Karoo, scientists need careful filtering of environmental noise. Effects from different levels of radio frequency interference (RFI) can range from “blinding” to actually damaging the instruments. Through South Africa’s Astronomy Geographic Advantage Act, SKA is working toward “radio protection,” which would dedicate segments of the bandwidth for radio astronomy while accommodating other private and commercial RF service requirements in the region.

“Interference affects observational data and makes it hard and expensive to remove or filter out the introduced noise,” says Bernard Duah Asabere, Chief Scientist of the Ghana team of the African Very Long Baseline Interferometry Network (African VLBI Network, or AVN), one of the SKA collaboration groups in eight other African nations participating in the project.

SKA “will tackle some of the fundamental questions of our time, ranging from the birth of the universe to the origins of life,” says SKA Director-General Philip Diamond. Among the targets: dark energy, Einstein’s theory of gravity and gravitational waves, and the prevalence of the molecular building blocks of life across the cosmos.

SKA-South Africa can detect radio spectrum frequencies from 350 megahertz to 14 gigahertz. Its partner Australian component will observe the lower-frequency scale, from 50 to 350 megahertz. Visible light, for comparison, has frequencies ranging from 400 to 800 million megahertz. SKA scientists will process radiofrequency waves to form a picture of their source.

A precursor instrument to SKA called MeerKAT (named for the squirrel-sized critters indigenous to the area) is under construction in the Karoo. This array of 16 dishes in South Africa achieved first light on June 19, 2016. MeerKAT focused on 0.01 percent of the sky for 7.5 hours and saw 1300 galaxies—nearly double the number previously known in that segment of the cosmos.

Since then, MeerKAT has met another milestone with 32 integrated antennas. MeerKAT will reach its full array of 64 dishes early next year, making it one of the world’s premier radio telescopes. MeerKAT will eventually be integrated into SKA Phase 1, for which an additional 133 dishes will be built, bringing the total number of antennas for SKA Phase 1 in South Africa to 197 by 2023. So far, 32 dishes are fully integrated and are being commissioned for science operations.

On completion of SKA 2 by 2030, the detection area of the receiver dishes will exceed 1 square kilometer, or about 11,000,000 square feet. Its huge size will make it 50 times more sensitive than any other radio telescope. It is expected to operate for 50 years.

SKA is managed by a 10-nation consortium, including the UK, China, India and Australia as well as South Africa, and receives support from another 10 countries, including the US. The project is headquartered at Jodrell Bank Observatory in the UK.

The full SKA will use radio dishes across Africa and Australia, and collaboration members say it will have a farther reach and more detailed images than any existing radio telescope.

In preparation for the SKA, South Africa and its partner countries developed AVN to establish a network of radiotelescopes across the African continent. One of its projects is the refurbishing of redundant 30-meter-class antennas, or building new ones across the partner countries, to operate as networked radio telescopes.

The first project of its kind is the AVN Ghana project, where an idle 32-meter-diameter dish has been refurbished and revamped with a dual receiver system at 5 and 6.7 gigahertz central frequencies for use as a radio telescope. The dish was previously owned and operated by the government and the company Vodafone Ghana as a telecommunications facility. Now it will explore celestial objects such as extragalactic nebulae, pulsars and other RF sources in space, such as the masers found in molecular clouds.

Asabere’s group will be able to tap into areas of SKA’s enormous database (several supercomputers’ worth) over the Internet. So will groups in Botswana, Kenya, Madagascar, Mauritius, Mozambique, Namibia and Zambia. SKA is also offering extensive outreach in participating countries and has already awarded 931 scholarships, fellowships and grants.

Other efforts in Ghana include introducing astronomy in the school curricula, training students in astronomy and related technologies, doing outreach in schools and universities, receiving visiting students at the telescope site and hosting programs such as the West African International Summer School for Young Astronomers taking place this week.

Asabere, who achieved his advanced degrees in Sweden (Chalmers University of Technology) and South Africa (University of Johannesburg), would like to see more students trained in Ghana and would like to get more researchers on board. He also hopes for the construction of the needed infrastructure, more local and foreign partnerships and strong governmental backing.

“I would like the opportunity to practice my profession on my own soil,” he says.

That day might not be far beyond the horizon. The Leverhulme-Royal Society Trust and Newton Fund in the UK are co-funding extensive human capital development programs in the SKA-AVN partner countries. A seven-member Ghanaian team, for example, has undergone training in South Africa and has been instructed in all aspects of the project, including the operation of the telescope.

Several PhD students and one MSc student from Ghana have received SKA-SA grants to pursue further education in astronomy and engineering. The Royal Society has awarded funding in collaboration with Leeds University to train two PhDs and 60 young aspiring scientists in the field of astrophysics.

Based on the success of the Leverhulme-Royal Society program, a joint UK-South Africa Newton Fund intervention (DARA—the Development in Africa with Radio Astronomy) has since been initiated in other partner countries to grow high technology skills that could lead to broader economic development in Africa.

As SKA seeks answers to complex questions over the next five decades, there should be plenty of opportunities for science throughout the Southern Hemisphere. Though it lives in one of the quietest places, SKA hopes to be heard loud and clear.

## July 31, 2017

### Symmetrybreaking - Fermilab/SLAC

An underground groundbreaking

A physics project kicks off construction a mile underground.

For many government officials, groundbreaking ceremonies are probably old hat—or old hardhat. But how many can say they’ve been to a groundbreaking that’s nearly a mile underground?

A group of dignitaries, including a governor and four members of Congress, now have those bragging rights. On July 21, they joined scientists and engineers 4850 feet beneath the surface at the Sanford Underground Research Facility to break ground on the Long-Baseline Neutrino Facility (LBNF).

LBNF will house massive, four-story-high detectors for the Deep Underground Neutrino Experiment (DUNE) to learn more about neutrinos—invisible, almost massless particles that may hold the key to how the universe works and why matter exists.  Fourteen shovels full of dirt marked the beginning of construction for a project that could be, well, groundbreaking.

The Sanford Underground Research Facility in Lead, South Dakota resides in what was once the deepest gold mine in North America, which has been repurposed as a place for discovery of a different kind.

“A hundred years ago, we mined gold out of this hole in the ground. Now we’re going to mine knowledge,” said US Representative Kristi Noem of South Dakota in an address at the groundbreaking.

Transforming an old mine into a lab is more than just a creative way to reuse space. On the surface, cosmic rays from the sun constantly bombard us, causing cosmic noise in the sensitive detectors scientists use to look for rare particle interactions. But underground, shielded by nearly a mile of rock, there’s cosmic quiet. Cosmic rays are rare, making it easier for scientists to see what’s going on in their detectors without being clouded by interference.

### Going down?

It may be easier to analyze data collected underground, but entering the subterranean science facility can be a chore. Nearly 60 people took a trip underground to the groundbreaking site, requiring some careful elevator choreography.

Before venturing into the deep below, reporters and representatives alike donned safety glasses, hardhats and wearable flashlights. They received two brass tags engraved with their names—one to keep and another to hang on a corkboard—a process called “brassing in.” This helps keep track of who’s underground in case of emergency.

The first group piled into the open-top elevator, known as a cage, to begin the descent. As the cage glides through a mile of mountain, it’s easy to imagine what it must have been like to be a miner back when Sanford Lab was the Homestake Mine. What’s waiting below may have changed, but the method of getting there hasn’t: The winch lowering the cage at 500-feet-a-minute is 80 years old and still works perfectly.

The ride to the 4850-level takes about 10 minutes in the cramped cage—it fits 35, but even with 20 people it feels tight. Water drips in through the ceiling as the open elevator chugs along, occasionally passing open mouths in the rock face of drifts once mined for gold.

“When you go underground, you start to think ‘It has never rained in here. And there’s never been daylight,’” says Tim Meyer, Chief Operating Officer of Fermilab, who attended the groundbreaking. “When you start thinking about being a mile below the surface, it just seems weird, like you’re walking through a piece of Swiss cheese.”

Where the cage stops at the 4850-level would be the destination of most elevator occupants on a normal day, since the shaft ends near the entrance of clean research areas housing Sanford Lab experiments. But for the contingent traveling to the future site of LBNF/DUNE on the other end of the mine, the journey continued, this time in an open-car train. It’s almost like a theme-park ride as the motor (as it’s usually called by Sanford staff) clips along through a tunnel, but fortunately, no drops or loop-the-loops are involved.

“The same rails now used to transport visitors and scientists were once used by the Homestake miners to remove gold from the underground facility,” says Jim Siegrist, Associate Director of High Energy Physics at the Department of Energy. “During the ride, rock bolts and protective screens attached to the walls were visible by the light of the headlamp mounted on our hardhats.”

After a 15-minute ride, the motor reached its destination and it was business as usual for a groundbreaking ceremony: speeches, shovels and smiling for photos. A fresh coat of white paint (more than 100 gallons worth) covered the wall behind the officials, creating a scene that almost could have been on the surface.

“Celebrating the moment nearly a mile underground brought home the enormity of the task and the dedication required for such precise experiments,” says South Dakota Governor Dennis Daugaard. “I know construction will take some time, but it will be well worth the wait for the Sanford Underground Research Facility to play such a vital role in one of the most significant physics experiments of our time."

### What’s the big deal?

The process to reach the groundbreaking site is much more arduous than reaching most symbolic ceremonies, so what would possess two senators, two representatives, a White House representative, a governor and delegates from three international science institutions (to mention a few of the VIPs) to make the trip? Only the beginning of something huge—literally.

“This milestone represents the start of construction of the largest mega-science project in the United States,” said Mike Headley, executive director of Sanford Lab.

The 14 shovelers at the groundbreaking made the first tiny dent in the excavation site for LBNF, which will require the extraction of more than 870,000 tons of rock to create huge caverns for the DUNE detectors. These detectors will catch neutrinos sent 800 miles through the earth from Fermi National Accelerator Laboratory in the hopes that they will tell us something more about these strange particles and the universe we live in.

“We have the opportunity to see truly world-changing discovery,” said US Representative Randy Hultgren of Illinois. “This is unique—this is the picture of incredible discovery and experimentation going into the future.”

### The n-Category Cafe

A Compositional Framework for Reaction Networks

For a long time Blake Pollard and I have been working on ‘open’ chemical reaction networks: that is, networks of chemical reactions where some chemicals can flow in from an outside source, or flow out. The picture to keep in mind is something like this:

where the yellow circles are different kinds of chemicals and the aqua boxes are different reactions. The purple dots in the sets X and Y are ‘inputs’ and ‘outputs’, where certain kinds of chemicals can flow in or out.

Our paper on this stuff just got accepted, and it should appear soon:

But thanks to the arXiv, you don’t have to wait: beat the rush, click and download now!

Or at least read the rest of this blog post….

Blake and I gave talks about this stuff in Luxembourg this June, at a nice conference called Dynamics, thermodynamics and information processing in chemical networks. So, if you’re the sort who prefers talk slides to big scary papers, you can look at those:

But I want to say here what we do in our paper, because it’s pretty cool, and it took a few years to figure it out. To get things to work, we needed my student Brendan Fong to invent the right category-theoretic formalism: ‘decorated cospans’. But we also had to figure out the right way to think about open dynamical systems!

In the end, we figured out how to first ‘gray-box’ an open reaction network, converting it into an open dynamical system, and then ‘black-box’ it, obtaining the relation between input and output flows and concentrations that holds in steady state. The first step extracts the dynamical behavior of an open reaction network; the second extracts its static behavior. And both these steps are functors! So, we’re applying Lawvere’s ideas on functorial semantics to chemistry.

Now Blake has passed his thesis defense based on this work, and he just needs to polish up his thesis a little before submitting it. This summer he’s doing an internship at the Princeton branch of the engineering firm Siemens. He’s working with Arquimedes Canedo on ‘knowledge representation’.

But I’m still eager to dig deeper into open reaction networks. They’re a small but nontrivial step toward my dream of a mathematics of living systems. My working hypothesis is that living systems seem ‘messy’ to physicists because they operate at a higher level of abstraction. That’s what I’m trying to explore.

Here’s the idea of our paper.

### The idea

Reaction networks are a very general framework for describing processes where entities interact and transform into other entities. While they first showed up in chemistry, and are often called ‘chemical reaction networks’, they have lots of other applications. For example, a basic model of infectious disease, the ‘SIRS model’, is described by this reaction network:

$S + I \stackrel{\iota}{\longrightarrow} 2I \qquad I \stackrel{\rho}{\longrightarrow} R \stackrel{\lambda}{\longrightarrow} S$

We see here three types of entity, called species:

• $S$: susceptible,
• $I$: infected,
• $R$: resistant.

We also have three ‘reactions’:

• $\iota: S + I \to 2I$: infection, in which a susceptible individual meets an infected one and becomes infected;
• $\rho: I \to R$: recovery, in which an infected individual gains resistance to the disease;
• $\lambda: R \to S$: loss of resistance, in which a resistant individual becomes susceptible.

In general, a reaction network involves a finite set of species, but reactions go between complexes, which are finite linear combinations of these species with natural number coefficients. The reaction network is a directed graph whose vertices are certain complexes and whose edges are called reactions.

If we attach a positive real number called a rate constant to each reaction, a reaction network determines a system of differential equations saying how the concentrations of the species change over time. This system of equations is usually called the rate equation. In the example I just gave, the rate equation is

$\begin{array}{ccl} \displaystyle{\frac{d S}{d t}} &=& r_\lambda R - r_\iota S I \\ \\ \displaystyle{\frac{d I}{d t}} &=& r_\iota S I - r_\rho I \\ \\ \displaystyle{\frac{d R}{d t}} &=& r_\rho I - r_\lambda R \end{array}$

Here $r_\iota, r_\rho$ and $r_\lambda$ are the rate constants for the three reactions, and $S, I, R$ now stand for the concentrations of the three species, which are treated in a continuum approximation as smooth functions of time:

$S, I, R: \mathbb{R} \to [0,\infty)$

The rate equation can be derived from the law of mass action, which says that any reaction occurs at a rate equal to its rate constant times the product of the concentrations of the species entering it as inputs.
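As a sanity check on the law of mass action, the SIRS rate equation above can be integrated numerically. This is a minimal sketch with made-up rate constants (not values from the paper); note that mass action conserves the total $S + I + R$ here, because every reaction in the SIRS network has as many inputs as outputs.

```python
# Forward-Euler integration of the SIRS rate equation.
# The rate constants below are made up for illustration.
r_iota, r_rho, r_lam = 0.5, 0.2, 0.1

def sirs_rhs(S, I, R):
    """Law of mass action: each reaction proceeds at its rate constant
    times the product of the concentrations of its input species."""
    dS = r_lam * R - r_iota * S * I
    dI = r_iota * S * I - r_rho * I
    dR = r_rho * I - r_lam * R
    return dS, dI, dR

S, I, R = 0.9, 0.1, 0.0   # initial concentrations
dt = 0.01
for _ in range(10_000):
    dS, dI, dR = sirs_rhs(S, I, R)
    S, I, R = S + dt * dS, I + dt * dI, R + dt * dR

# The three derivatives sum to zero exactly, so S + I + R stays
# constant along the trajectory (up to floating-point error).
```

Summing the three right-hand sides term by term gives zero, which is why the conservation law holds for any choice of rate constants.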

But a reaction network is more than just a stepping-stone to its rate equation! Interesting qualitative properties of the rate equation, like the existence and uniqueness of steady state solutions, can often be determined just by looking at the reaction network, regardless of the rate constants. Results in this direction began with Feinberg and Horn’s work in the 1960’s, leading to the Deficiency Zero and Deficiency One Theorems, and more recently to Craciun’s proof of the Global Attractor Conjecture.

In our paper, Blake and I present a ‘compositional framework’ for reaction networks. In other words, we describe rules for building up reaction networks from smaller pieces, in such a way that the rate equation of the whole can be figured out knowing those of the pieces. But this framework requires that we view reaction networks in a somewhat different way, as ‘Petri nets’.

Petri nets were invented by Carl Petri in 1939, when he was just a teenager, for the purposes of chemistry. Much later, they became popular in theoretical computer science, biology and other fields. A Petri net is a bipartite directed graph: vertices of one kind represent species, vertices of the other kind represent reactions. The edges into a reaction specify which species are inputs to that reaction, while the edges out specify its outputs.

You can easily turn a reaction network into a Petri net and vice versa. For example, the reaction network above translates into this Petri net:

Beware: there are a lot of different names for the same thing, since the terminology comes from several communities. In the Petri net literature, species are called places and reactions are called transitions. In fact, Petri nets are sometimes called ‘place-transition nets’ or ‘P/T nets’. On the other hand, chemists call them ‘species-reaction graphs’ or ‘SR-graphs’. And when each reaction of a Petri net has a rate constant attached to it, it is often called a ‘stochastic Petri net’.
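To make the correspondence concrete, here is one minimal way to encode a stochastic Petri net and read off its mass-action rate equation. The encoding and the rate constants are my own illustrative choices, not notation from the paper: each transition carries a rate constant plus input and output multisets (complexes) over the species.

```python
from collections import Counter

# A stochastic Petri net for the SIRS example: each transition has a
# rate constant and input/output multisets over the species (places).
species = ['S', 'I', 'R']
transitions = [
    (0.5, Counter({'S': 1, 'I': 1}), Counter({'I': 2})),  # infection
    (0.2, Counter({'I': 1}), Counter({'R': 1})),          # recovery
    (0.1, Counter({'R': 1}), Counter({'S': 1})),          # loss of resistance
]

def rate_equation(conc):
    """Mass-action right-hand side: each transition fires at its rate
    constant times the product of its input concentrations (with
    multiplicity); each firing consumes its inputs and makes its outputs."""
    ddt = {s: 0.0 for s in species}
    for k, inputs, outputs in transitions:
        speed = k
        for s, m in inputs.items():
            speed *= conc[s] ** m
        for s, m in inputs.items():
            ddt[s] -= m * speed
        for s, m in outputs.items():
            ddt[s] += m * speed
    return ddt

d = rate_equation({'S': 0.9, 'I': 0.1, 'R': 0.0})
```

Evaluating `rate_equation` at a concentration vector reproduces the SIRS rate equation term by term, and the same function works for any Petri net expressed in this form.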

While some qualitative properties of a rate equation can be read off from a reaction network, others are more easily read from the corresponding Petri net. For example, properties of a Petri net can be used to determine whether its rate equation can have multiple steady states.

Petri nets are also better suited to a compositional framework. The key new concept is an ‘open’ Petri net. Here’s an example:

The box at left is a set X of ‘inputs’ (which happens to be empty), while the box at right is a set Y of ‘outputs’. Both inputs and outputs are points at which entities of various species can flow in or out of the Petri net. We say the open Petri net goes from X to Y. In our paper, we show how to treat it as a morphism $f: X \to Y$ in a category we call $\mathrm{RxNet}$.

Given an open Petri net with rate constants assigned to each reaction, our paper explains how to get its ‘open rate equation’. It’s just the usual rate equation with extra terms describing inflows and outflows. The above example has this open rate equation:

$\begin{array}{ccr} \displaystyle{\frac{d S}{d t}} &=& - r_\iota S I - o_1 \\ \\ \displaystyle{\frac{d I}{d t}} &=& r_\iota S I - o_2 \end{array}$

Here $o_1, o_2: \mathbb{R} \to \mathbb{R}$ are arbitrary smooth functions describing outflows as a function of time.

Given another open Petri net $g: Y \to Z$, for example this:

it will have its own open rate equation, in this case

$\begin{array}{ccc} \displaystyle{\frac{d S}{d t}} &=& r_\lambda R + i_2 \\ \\ \displaystyle{\frac{d I}{d t}} &=& - r_\rho I + i_1 \\ \\ \displaystyle{\frac{d R}{d t}} &=& r_\rho I - r_\lambda R \end{array}$

Here $i_1, i_2: \mathbb{R} \to \mathbb{R}$ are arbitrary smooth functions describing inflows as a function of time. Now for the first bit of category theory: we can compose $f$ and $g$ by gluing the outputs of $f$ to the inputs of $g$. This gives a new open Petri net $g f: X \to Z$, as follows:

But this open Petri net $g f$ has an empty set of inputs, and an empty set of outputs! So it amounts to an ordinary Petri net, and its open rate equation is a rate equation of the usual kind. Indeed, this is the Petri net we have already seen.

As it turns out, there’s a systematic procedure for combining the open rate equations for two open Petri nets to obtain that of their composite. In the example we’re looking at, we just identify the outflows of $f$ with the inflows of $g$ (setting $i_1 = o_1$ and $i_2 = o_2$) and then add the right-hand sides of their open rate equations.
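Concretely, the combination step is just bookkeeping, and it can be sketched in a few lines of Python. This is my own illustration with made-up rate constants; I label each boundary flow by the species it attaches to (rather than by the boundary-point names $o_1, o_2, i_1, i_2$ used above) so the cancellation is easy to see.

```python
r_iota, r_rho, r_lam = 0.5, 0.2, 0.1  # made-up rate constants

def f_rhs(S, I, R, out_S, out_I):
    """Open rate equation of f (infection only), with outflow terms
    on S and I. R does not appear in f's net, so its term is zero."""
    return {'S': -r_iota * S * I - out_S,
            'I':  r_iota * S * I - out_I,
            'R': 0.0}

def g_rhs(S, I, R, in_S, in_I):
    """Open rate equation of g (recovery and loss of resistance),
    with inflow terms on S and I."""
    return {'S': r_lam * R + in_S,
            'I': -r_rho * I + in_I,
            'R': r_rho * I - r_lam * R}

def gf_rhs(S, I, R):
    """Composite: identify each outflow of f with the matching inflow
    of g and add the right-hand sides; the boundary terms cancel."""
    out_S, out_I = 0.3, -1.7   # arbitrary flow values; they drop out
    f = f_rhs(S, I, R, out_S, out_I)
    g = g_rhs(S, I, R, out_S, out_I)
    return {s: f[s] + g[s] for s in 'SIR'}

d = gf_rhs(0.9, 0.1, 0.0)
```

Whatever flow values are plugged in, the composite agrees with the closed SIRS rate equation, which is the point of the cancellation.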

The first goal of our paper is to precisely describe this procedure, and to prove that it defines a functor

$\diamond: \mathrm{RxNet} \to \mathrm{Dynam}$

from $\mathrm{RxNet}$ to a category $\mathrm{Dynam}$ where the morphisms are ‘open dynamical systems’. By a dynamical system, we essentially mean a vector field on $\mathbb{R}^n$, which can be used to define a system of first-order ordinary differential equations in $n$ variables. An example is the rate equation of a Petri net. An open dynamical system allows for the possibility of extra terms that are arbitrary functions of time, such as the inflows and outflows in an open rate equation.

In fact, we prove that $\mathrm{RxNet}$ and $\mathrm{Dynam}$ are symmetric monoidal categories and that $\diamond$ is a symmetric monoidal functor. To do this, we use Brendan Fong’s theory of ‘decorated cospans’.

Decorated cospans are a powerful general tool for describing open systems. A cospan in any category is just a diagram like this:

We are mostly interested in cospans in $\mathrm{FinSet}$, the category of finite sets and functions between these. The set $S$, the so-called apex of the cospan, is the set of states of an open system. The sets $X$ and $Y$ are the inputs and outputs of this system. The legs of the cospan, meaning the morphisms $i: X \to S$ and $o: Y \to S$, describe how these inputs and outputs are included in the system. In our application, $S$ is the set of species of a Petri net.

For example, we may take this reaction network:

$A+B\stackrel{\alpha }{⟶}2C\phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}C\stackrel{\beta }{⟶}DA+B \stackrel\left\{\alpha\right\}\left\{\longrightarrow\right\} 2C \quad \quad C \stackrel\left\{\beta\right\}\left\{\longrightarrow\right\} D $

treat it as a Petri net with $S=\left\{A,B,C,D\right\}S = \\left\{A,B,C,D\\right\}$:

and then turn that into an open Petri net by choosing any finite sets $X,YX,Y$ and maps $i:X\to Si: X \to S$, $o:Y\to So: Y \to S$, for example like this:

(Notice that the maps including the inputs and outputs into the states of the system need not be one-to-one. This is technically useful, but it introduces some subtleties that I don’t feel like explaining right now.)

An open Petri net can thus be seen as a cospan of finite sets whose apex $SS$ is ‘decorated’ with some extra information, namely a Petri net with $SS$ as its set of species. Fong’s theory of decorated cospans lets us define a category with open Petri nets as morphisms, with composition given by gluing the outputs of one open Petri net to the inputs of another.

We call the functor

$\diamond :\mathrm{RxNet}\to \mathrm{Dynam}\diamond: \left\{RxNet\right\} \to \left\{Dynam\right\}$

gray-boxing because it hides some but not all the internal details of an open Petri net. (In the paper we draw it as a gray box, but that’s too hard here!)

We can go further and black-box an open dynamical system. This amounts to recording only the relation between input and output variables that must hold in steady state. We prove that black-boxing gives a functor

$\blacksquare :\mathrm{Dynam}\to \mathrm{SemiAlgRel} \blacksquare: \left\{Dynam\right\} \to \left\{SemiAlgRel\right\} $

Here $\mathrm{SemiAlgRel}\left\{SemiAlgRel\right\}$ is a category where the morphisms are semi-algebraic relations between real vector spaces, meaning relations defined by polynomials and inequalities. This relies on the fact that our dynamical systems involve algebraic vector fields, meaning those whose components are polynomials; more general dynamical systems would give more general relations.

That semi-algebraic relations are closed under composition is a nontrivial fact, a spinoff of the Tarski–Seidenberg theorem. This says that a subset of ${ℝ}^{n+1}\mathbb\left\{R\right\}^\left\{n+1\right\}$ defined by polynomial equations and inequalities can be projected down onto ${ℝ}^{n}\mathbb\left\{R\right\}^n$, and the resulting set is still definable in terms of polynomial identities and inequalities. This wouldn’t be true if we didn’t allow inequalities. It’s neat to see this theorem, important in mathematical logic, showing up in chemistry!

### Structure of the paper

Okay, now you’re ready to read our paper! Here’s how it goes:

In Section 2 we review and compare reaction networks and Petri nets. In Section 3 we construct a symmetric monoidal category $\mathrm{RNet}\left\{RNet\right\}$ where an object is a finite set and a morphism is an open reaction network (or more precisely, an isomorphism class of open reaction networks). In Section 4 we enhance this construction to define a symmetric monoidal category $\mathrm{RxNet}\left\{RxNet\right\}$ where the transitions of the open reaction networks are equipped with rate constants. In Section 5 we explain the open dynamical system associated to an open reaction network, and in Section 6 we construct a symmetric monoidal category $\mathrm{Dynam}\left\{Dynam\right\}$ of open dynamical systems. In Section 7 we construct the gray-boxing functor

$\diamond :\mathrm{RxNet}\to \mathrm{Dynam}\diamond: \left\{RxNet\right\} \to \left\{Dynam\right\}$

In Section 8 we construct the black-boxing functor

$\blacksquare :\mathrm{Dynam}\to \mathrm{SemiAlgRel}\blacksquare: \left\{Dynam\right\} \to \left\{SemiAlgRel\right\}$

We show both of these are symmetric monoidal functors.

Finally, in Section 9 we fit our results into a larger ‘network of network theories’. This is where various results in various papers I’ve been writing in the last few years start assembling to form a big picture! But this picture needs to grow….

### ZapperZ - Physics and Physicists

Believe it or not, there are still people out there who get scared witless and go out of their minds with their phobia about "radiation". I get questions related to this often enough that whenever I find info like this one, I want to post it here.

Don Lincoln decides to tackle this issue regarding "radiation". If you have little knowledge of the topic, this is the video to watch.

Zz.

## July 30, 2017

### John Baez - Azimuth

A Compositional Framework for Reaction Networks

For a long time Blake Pollard and I have been working on ‘open’ chemical reaction networks: that is, networks of chemical reactions where some chemicals can flow in from an outside source, or flow out. The picture to keep in mind is something like this:

where the yellow circles are different kinds of chemicals and the aqua boxes are different reactions. The purple dots in the sets X and Y are ‘inputs’ and ‘outputs’, where certain kinds of chemicals can flow in or out.

Our paper on this stuff just got accepted, and it should appear soon:

• John Baez and Blake Pollard, A compositional framework for reaction networks, to appear in Reviews in Mathematical Physics.

But thanks to the arXiv, you don’t have to wait: beat the rush, click and download now!

Blake and I gave talks about this stuff in Luxembourg this June, at a nice conference called Dynamics, thermodynamics and information processing in chemical networks. So, if you’re the sort who prefers talk slides to big scary papers, you can look at those:

• John Baez, The mathematics of open reaction networks.

• Blake Pollard, Black-boxing open reaction networks.

But I want to say here what we do in our paper, because it’s pretty cool, and it took a few years to figure it out. To get things to work, we needed my student Brendan Fong to invent the right category-theoretic formalism: ‘decorated cospans’. But we also had to figure out the right way to think about open dynamical systems!

In the end, we figured out how to first ‘gray-box’ an open reaction network, converting it into an open dynamical system, and then ‘black-box’ it, obtaining the relation between input and output flows and concentrations that holds in steady state. The first step extracts the dynamical behavior of an open reaction network; the second extracts its static behavior. And both these steps are functors!

Lawvere had the idea that the process of assigning ‘meaning’ to expressions could be seen as a functor. This idea has caught on in theoretical computer science: it’s called ‘functorial semantics’. So, what we’re doing here is applying functorial semantics to chemistry.

Now Blake has passed his thesis defense based on this work, and he just needs to polish up his thesis a little before submitting it. This summer he’s doing an internship at the Princeton branch of the engineering firm Siemens. He’s working with Arquimedes Canedo on ‘knowledge representation’.

But I’m still eager to dig deeper into open reaction networks. They’re a small but nontrivial step toward my dream of a mathematics of living systems. My working hypothesis is that living systems seem ‘messy’ to physicists because they operate at a higher level of abstraction. That’s what I’m trying to explore.

Here’s the idea of our paper.

### The idea

Reaction networks are a very general framework for describing processes where entities interact and transform into other entities. While they first showed up in chemistry, and are often called ‘chemical reaction networks’, they have lots of other applications. For example, a basic model of infectious disease, the ‘SIRS model’, is described by this reaction network:

$S + I \stackrel{\iota}{\longrightarrow} 2 I \qquad I \stackrel{\rho}{\longrightarrow} R \stackrel{\lambda}{\longrightarrow} S$

We see here three types of entity, called species:

$S$: susceptible,
$I$: infected,
$R$: resistant.

We also have three ‘reactions’:

$\iota : S + I \to 2 I$: infection, in which a susceptible individual meets an infected one and becomes infected;
$\rho : I \to R$: recovery, in which an infected individual gains resistance to the disease;
$\lambda : R \to S$: loss of resistance, in which a resistant individual becomes susceptible.

In general, a reaction network involves a finite set of species, but reactions go between complexes, which are finite linear combinations of these species with natural number coefficients. The reaction network is a directed graph whose vertices are certain complexes and whose edges are called reactions.

If we attach a positive real number called a rate constant to each reaction, a reaction network determines a system of differential equations saying how the concentrations of the species change over time. This system of equations is usually called the rate equation. In the example I just gave, the rate equation is

$\begin{array}{ccl} \displaystyle{\frac{d S}{d t}} &=& r_\lambda R - r_\iota S I \\ \\ \displaystyle{\frac{d I}{d t}} &=& r_\iota S I - r_\rho I \\ \\ \displaystyle{\frac{d R}{d t}} &=& r_\rho I - r_\lambda R \end{array}$

Here $r_\iota, r_\rho$ and $r_\lambda$ are the rate constants for the three reactions, and $S, I, R$ now stand for the concentrations of the three species, which are treated in a continuum approximation as smooth functions of time:

$S, I, R: \mathbb{R} \to [0,\infty)$

The rate equation can be derived from the law of mass action, which says that any reaction occurs at a rate equal to its rate constant times the product of the concentrations of the species entering it as inputs.
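The law of mass action is concrete enough to compute with. Here is a minimal sketch in Python (the function and variable names are mine, not the paper's): each reaction contributes its rate constant times the product of its input concentrations, and a crude forward-Euler loop integrates the resulting SIRS rate equation.

```python
def sirs_vector_field(S, I, R, r_iota, r_rho, r_lam):
    """Mass action: each reaction runs at its rate constant times the
    product of the concentrations of its input species."""
    infection = r_iota * S * I   # S + I -> 2I
    recovery  = r_rho * I        # I -> R
    loss      = r_lam * R        # R -> S
    return (loss - infection,      # dS/dt
            infection - recovery,  # dI/dt
            recovery - loss)       # dR/dt

# Forward-Euler integration from a mostly-susceptible start.
S, I, R = 0.99, 0.01, 0.0
dt = 0.01
for _ in range(10000):
    dS, dI, dR = sirs_vector_field(S, I, R, 0.3, 0.1, 0.05)
    S, I, R = S + dt * dS, I + dt * dI, R + dt * dR
```

Note the conservation law already visible in the rate equation: the three right-hand sides sum to zero, so the total concentration $S + I + R$ stays constant along solutions.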

But a reaction network is more than just a stepping-stone to its rate equation! Interesting qualitative properties of the rate equation, like the existence and uniqueness of steady state solutions, can often be determined just by looking at the reaction network, regardless of the rate constants. Results in this direction began with Feinberg and Horn’s work in the 1970s, leading to the Deficiency Zero and Deficiency One Theorems, and more recently to Craciun’s proof of the Global Attractor Conjecture.

In our paper, Blake and I present a ‘compositional framework’ for reaction networks. In other words, we describe rules for building up reaction networks from smaller pieces, in such a way that the rate equation of the whole can be figured out from those of the pieces. But this framework requires that we view reaction networks in a somewhat different way, as ‘Petri nets’.

Petri nets were invented by Carl Petri in 1939, when he was just a teenager, for the purposes of chemistry. Much later, they became popular in theoretical computer science, biology and other fields. A Petri net is a bipartite directed graph: vertices of one kind represent species, vertices of the other kind represent reactions. The edges into a reaction specify which species are inputs to that reaction, while the edges out specify its outputs.

You can easily turn a reaction network into a Petri net and vice versa. For example, the reaction network above translates into this Petri net:

Beware: there are a lot of different names for the same thing, since the terminology comes from several communities. In the Petri net literature, species are called places and reactions are called transitions. In fact, Petri nets are sometimes called ‘place-transition nets’ or ‘P/T nets’. On the other hand, chemists call them ‘species-reaction graphs’ or ‘SR-graphs’. And when each reaction of a Petri net has a rate constant attached to it, it is often called a ‘stochastic Petri net’.
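With the terminology fixed, the translation from a Petri net with rate constants to its rate equation can be mechanized. A sketch (my own encoding, not the paper's): a transition is a rate constant together with input and output multisets of species, and mass action yields the vector field generically.

```python
from collections import Counter

def rate_vector_field(species, transitions, conc):
    """Mass-action vector field of a Petri net with rate constants.
    Each transition is (rate, inputs, outputs), where inputs/outputs
    are Counters giving the multiplicity of each species."""
    d = dict.fromkeys(species, 0.0)
    for rate, inputs, outputs in transitions:
        flow = rate
        for s, m in inputs.items():      # law of mass action
            flow *= conc[s] ** m
        for s, m in inputs.items():      # inputs are consumed
            d[s] -= m * flow
        for s, m in outputs.items():     # outputs are produced
            d[s] += m * flow
    return d

# The SIRS reaction network as a Petri net: three places, three transitions.
sirs = [
    (0.3,  Counter({'S': 1, 'I': 1}), Counter({'I': 2})),  # infection
    (0.1,  Counter({'I': 1}),         Counter({'R': 1})),  # recovery
    (0.05, Counter({'R': 1}),         Counter({'S': 1})),  # loss of resistance
]
d = rate_vector_field(['S', 'I', 'R'], sirs, {'S': 0.99, 'I': 0.01, 'R': 0.0})
```

Evaluating this at the given concentrations reproduces, term by term, the SIRS rate equation displayed above.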

While some qualitative properties of a rate equation can be read off from a reaction network, others are more easily read from the corresponding Petri net. For example, properties of a Petri net can be used to determine whether its rate equation can have multiple steady states.

Petri nets are also better suited to a compositional framework. The key new concept is an ‘open’ Petri net. Here’s an example:

The box at left is a set X of ‘inputs’ (which happens to be empty), while the box at right is a set Y of ‘outputs’. Both inputs and outputs are points at which entities of various species can flow in or out of the Petri net. We say the open Petri net goes from X to Y. In our paper, we show how to treat it as a morphism $f : X \to Y$ in a category we call $\textrm{RxNet}$.

Given an open Petri net with rate constants assigned to each reaction, our paper explains how to get its ‘open rate equation’. It’s just the usual rate equation with extra terms describing inflows and outflows. The above example has this open rate equation:

$\begin{array}{ccr} \displaystyle{\frac{d S}{d t}} &=& - r_\iota S I - o_1 \\ \\ \displaystyle{\frac{d I}{d t}} &=& r_\iota S I - o_2 \end{array}$

Here $o_1, o_2 : \mathbb{R} \to \mathbb{R}$ are arbitrary smooth functions describing outflows as a function of time.

Given another open Petri net $g: Y \to Z,$ for example this:

it will have its own open rate equation, in this case

$\begin{array}{ccc} \displaystyle{\frac{d S}{d t}} &=& r_\lambda R + i_2 \\ \\ \displaystyle{\frac{d I}{d t}} &=& - r_\rho I + i_1 \\ \\ \displaystyle{\frac{d R}{d t}} &=& r_\rho I - r_\lambda R \end{array}$

Here $i_1, i_2: \mathbb{R} \to \mathbb{R}$ are arbitrary smooth functions describing inflows as a function of time. Now for a tiny bit of category theory: we can compose $f$ and $g$ by gluing the outputs of $f$ to the inputs of $g.$ This gives a new open Petri net $gf: X \to Z,$ as follows:

But this open Petri net $gf$ has an empty set of inputs, and an empty set of outputs! So it amounts to an ordinary Petri net, and its open rate equation is a rate equation of the usual kind. Indeed, this is the Petri net we have already seen.

As it turns out, there’s a systematic procedure for combining the open rate equations for two open Petri nets to obtain that of their composite. In the example we’re looking at, we just identify the outflows of $f$ with the inflows of $g$ (setting $i_1 = o_1$ and $i_2 = o_2$) and then add the right hand sides of their open rate equations.
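This identify-and-add step is easy to spell out in code. In the sketch below the boundary flows are named by the species they attach to rather than by boundary-point number (a convenience of mine, not the paper's indexing); whatever values the flows take, they cancel when the right-hand sides are added, leaving the closed SIRS rate equation.

```python
def f_rhs(S, I, r_iota, out_S, out_I):
    # open Petri net f: the infection reaction, with outflows on S and I
    return {'S': -r_iota * S * I - out_S,
            'I':  r_iota * S * I - out_I}

def g_rhs(S, I, R, r_rho, r_lam, in_S, in_I):
    # open Petri net g: recovery and loss of resistance, with inflows on S and I
    return {'S': r_lam * R + in_S,
            'I': -r_rho * I + in_I,
            'R': r_rho * I - r_lam * R}

def gf_rhs(S, I, R, r_iota, r_rho, r_lam, flow_S, flow_I):
    # gluing: each outflow of f is identified with the matching inflow
    # of g, and the right-hand sides are added species by species
    df = f_rhs(S, I, r_iota, flow_S, flow_I)
    dg = g_rhs(S, I, R, r_rho, r_lam, flow_S, flow_I)
    return {s: df.get(s, 0.0) + dg.get(s, 0.0) for s in 'SIR'}

# The boundary flows cancel, so the composite does not depend on them:
d1 = gf_rhs(0.9, 0.1, 0.0, 0.3, 0.1, 0.05, flow_S=7.0, flow_I=-2.0)
d2 = gf_rhs(0.9, 0.1, 0.0, 0.3, 0.1, 0.05, flow_S=0.0, flow_I=0.0)
```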

The first goal of our paper is to precisely describe this procedure, and to prove that it defines a functor

$\diamond: \textrm{RxNet} \to \textrm{Dynam}$

from $\textrm{RxNet}$ to a category $\textrm{Dynam}$ where the morphisms are ‘open dynamical systems’. By a dynamical system, we essentially mean a vector field on $\mathbb{R}^n,$ which can be used to define a system of first-order ordinary differential equations in $n$ variables. An example is the rate equation of a Petri net. An open dynamical system allows for the possibility of extra terms that are arbitrary functions of time, such as the inflows and outflows in an open rate equation.

In fact, we prove that $\textrm{RxNet}$ and $\textrm{Dynam}$ are symmetric monoidal categories and that $\diamond$ is a symmetric monoidal functor. To do this, we use Brendan Fong’s theory of ‘decorated cospans’.

Decorated cospans are a powerful general tool for describing open systems. A cospan in any category is just a diagram like this:

We are mostly interested in cospans in $\mathrm{FinSet},$ the category of finite sets and functions between these. The set $S$, the so-called apex of the cospan, is the set of states of an open system. The sets $X$ and $Y$ are the inputs and outputs of this system. The legs of the cospan, meaning the morphisms $i: X \to S$ and $o: Y \to S,$ describe how these inputs and outputs are included in the system. In our application, $S$ is the set of species of a Petri net.

For example, we may take this reaction network:

$A+B \stackrel{\alpha}{\longrightarrow} 2C \quad \quad C \stackrel{\beta}{\longrightarrow} D$

treat it as a Petri net with $S = \{A,B,C,D\}$:

and then turn that into an open Petri net by choosing any finite sets $X,Y$ and maps $i: X \to S$, $o: Y \to S$, for example like this:

(Notice that the maps including the inputs and outputs into the states of the system need not be one-to-one. This is technically useful, but it introduces some subtleties that I don’t feel like explaining right now.)

An open Petri net can thus be seen as a cospan of finite sets whose apex $S$ is ‘decorated’ with some extra information, namely a Petri net with $S$ as its set of species. Fong’s theory of decorated cospans lets us define a category with open Petri nets as morphisms, with composition given by gluing the outputs of one open Petri net to the inputs of another.
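Composition of cospans is by pushout: the two apexes are glued along $Y$. Here is a from-scratch union-find sketch (my own encoding; Fong's construction is far more general), applied to gluing f's species to g's species, where both boundary points happen to map same-named species to each other.

```python
def compose_cospans(S1, i1, o1, S2, i2, o2):
    """Cospans X -> S1 <- Y and Y -> S2 <- Z, given as dicts
    i1: X->S1, o1: Y->S1, i2: Y->S2, o2: Z->S2 (o1 and i2 share keys).
    Returns (i, o, states): the composite cospan X -> S <- Z, with the
    pushout S represented as a set of frozensets of glued states."""
    elems = [('a', s) for s in S1] + [('b', s) for s in S2]
    parent = {e: e for e in elems}

    def find(e):                     # union-find with path halving
        while parent[e] != e:
            parent[e] = parent[parent[e]]
            e = parent[e]
        return e

    for y in o1:                     # glue o1(y) in S1 with i2(y) in S2
        parent[find(('a', o1[y]))] = find(('b', i2[y]))

    groups = {}
    for e in elems:
        groups.setdefault(find(e), []).append(e)
    cls = {e: frozenset(g) for g in groups.values() for e in g}
    i = {x: cls[('a', s)] for x, s in i1.items()}
    o = {z: cls[('b', s)] for z, s in o2.items()}
    return i, o, set(cls.values())

# Gluing f's species {S, I} to g's species {S, I, R} along two boundary
# points: S is glued to S and I to I, leaving three states in all.
i, o, states = compose_cospans(
    {'S', 'I'}, {}, {'y1': 'S', 'y2': 'I'},
    {'S', 'I', 'R'}, {'y1': 'S', 'y2': 'I'}, {})
```

Because the legs need not be one-to-one, the pushout can also merge states within one apex; representing glued states as equivalence classes handles that case for free.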

We call the functor

$\diamond: \textrm{RxNet} \to \textrm{Dynam}$

gray-boxing because it hides some but not all the internal details of an open Petri net. (In the paper we draw it as a gray box, but that’s too hard here!)

We can go further and black-box an open dynamical system. This amounts to recording only the relation between input and output variables that must hold in steady state. We prove that black-boxing gives a functor

$\blacksquare: \textrm{Dynam} \to \mathrm{SemiAlgRel}$

Here $\mathrm{SemiAlgRel}$ is a category where the morphisms are semi-algebraic relations between real vector spaces, meaning relations defined by polynomials and inequalities. This relies on the fact that our dynamical systems involve algebraic vector fields, meaning those whose components are polynomials; more general dynamical systems would give more general relations.

That semi-algebraic relations are closed under composition is a nontrivial fact, a spinoff of the Tarski–Seidenberg theorem. This says that a subset of $\mathbb{R}^{n+1}$ defined by polynomial equations and inequalities can be projected down onto $\mathbb{R}^n$, and the resulting set is still definable in terms of polynomial identities and inequalities. This wouldn’t be true if we didn’t allow inequalities. It’s neat to see this theorem, important in mathematical logic, showing up in chemistry!
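A numerical sketch of black-boxing for the open system g above (a toy check, not the paper's semi-algebraic machinery): setting all three derivatives to zero and solving gives polynomial relations among the boundary concentrations and flows, and any boundary data satisfying them is a steady state.

```python
def g_open_rhs(S, I, R, r_rho, r_lam, i1, i2):
    """Open rate equation of g: recovery and loss of resistance,
    with inflow i1 on I and inflow i2 on S."""
    return (r_lam * R + i2,          # dS/dt
            -r_rho * I + i1,         # dI/dt
            r_rho * I - r_lam * R)   # dR/dt

r_rho, r_lam = 0.1, 0.05
I = 0.4                      # choose any concentration of I; then the
i1 = r_rho * I               # steady-state relation forces the rest:
R = (r_rho / r_lam) * I      # from dR/dt = 0
i2 = -i1                     # from dS/dt = dR/dt = 0

# Every point on this set is a steady state (S itself does not appear
# on the right-hand side, so its value is arbitrary):
dS, dI, dR = g_open_rhs(1.0, I, R, r_rho, r_lam, i1, i2)
```

The constraints $i_1 = r_\rho I$, $r_\lambda R = r_\rho I$ and $i_2 = -i_1$ are exactly the kind of polynomial relation that $\mathrm{SemiAlgRel}$ is built from.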

### Structure of the paper

Okay, now you’re ready to read our paper! Here’s how it goes:

In Section 2 we review and compare reaction networks and Petri nets. In Section 3 we construct a symmetric monoidal category $\textrm{RNet}$ where an object is a finite set and a morphism is an open reaction network (or more precisely, an isomorphism class of open reaction networks). In Section 4 we enhance this construction to define a symmetric monoidal category $\textrm{RxNet}$ where the transitions of the open reaction networks are equipped with rate constants. In Section 5 we explain the open dynamical system associated to an open reaction network, and in Section 6 we construct a symmetric monoidal category $\textrm{Dynam}$ of open dynamical systems. In Section 7 we construct the gray-boxing functor

$\diamond: \textrm{RxNet} \to \textrm{Dynam}$

In Section 8 we construct the black-boxing functor

$\blacksquare: \textrm{Dynam} \to \mathrm{SemiAlgRel}$

We show both of these are symmetric monoidal functors.

Finally, in Section 9 we fit our results into a larger ‘network of network theories’. This is where various results in various papers I’ve been writing in the last few years start assembling to form a big picture! But this picture needs to grow….

## July 28, 2017

### Clifford V. Johnson - Asymptotia

I Went Walking, and…

Well, that was nice. Was out for a walk with my son and ran into Walter Isaacson. (The Aspen Center for Physics, which I'm currently visiting, is next door to the Aspen Institute. He's the president and CEO of it.) He wrote the excellent Einstein biography that was the official book of the Genius series I worked on as science advisor. We chatted, and it turns out we have mutual friends and acquaintances.

He was pleased to hear that they got a science advisor on board and that the writers (etc) did such a good job with the science. I also learned that he has a book on Leonardo da Vinci coming out [...] Click to continue reading this post

The post I Went Walking, and… appeared first on Asymptotia.

## July 27, 2017

### Tommaso Dorigo - Scientificblogging

An ATLAS 240 GeV Higgs-Like Fluctuation Meets Predictions From Independent Researcher
A new analysis by the ATLAS collaboration, based on the data collected in 13 TeV proton-proton collisions delivered by the LHC in 2016, finds an excess of X → 4-lepton events at a mass of 240 GeV, with a local significance of 3.6 standard deviations. The search, which targeted objects of similar phenomenology to the 125 GeV Higgs boson discovered in 2012, is published in ATLAS CONF-2017-058. Besides the 240 GeV excess, another one at 700 GeV is found, with the same statistical significance.

## July 26, 2017

### Symmetrybreaking - Fermilab/SLAC

Angela Fava: studying neutrinos around the globe

This experimental physicist has followed the ICARUS neutrino detector from Gran Sasso to Geneva to Chicago.

Physicist Angela Fava has been at the enormous ICARUS detector’s side for over a decade. As an undergraduate student in Italy in 2006, she worked on basic hardware for the neutrino hunting experiment: tightening bolts and screws, connecting and reconnecting cables, learning how the detector worked inside and out.

ICARUS (short for Imaging Cosmic And Rare Underground Signals) first began operating for research in 2010, studying a beam of neutrinos created at European laboratory CERN and launched straight through the earth hundreds of miles to the detector’s underground home at INFN Gran Sasso National Laboratory.

In 2014, the detector moved to CERN for refurbishing, and Fava relocated with it. In June ICARUS began a journey across the ocean to the US Department of Energy’s Fermi National Accelerator Laboratory to take part in a new neutrino experiment. When it arrives today, Fava will be waiting.

Fava will go through the installation process she helped with as a student, this time as an expert.

Caraban Gonzalez, Noemi Ordan, Julien Marius, CERN

### Journey to ICARUS

As a child growing up between Venice and the Alps, Fava always thought she would pursue a career in math. But during a one-week summer workshop before her final year of high school in 2000, she was drawn to experimental physics.

At the workshop, she realized she had more in common with physicists. Around the same time, she read about new discoveries related to neutral, rarely interacting particles called neutrinos. Scientists had recently been surprised to find that the extremely light particles actually had mass and that different types of neutrinos could change into one another. And there was still much more to learn about the ghostlike particles.

At the start of college in 2001, Fava immediately joined the University of Padua neutrino group. For her undergraduate thesis research, she focused on the production of hadrons, making measurements essential to studying the production of neutrinos. In 2004, her research advisor Alberto Guglielmi and his group joined the ICARUS collaboration, and she’s been a part of it ever since.

Fava jests that the relationship actually started much earlier: “ICARUS was proposed for the first time in 1983, which is the year I was born. So we are linked from birth.”

Fava remained at the University of Padua in the same research group for her graduate work. During those years, she spent about half of her time at the ICARUS detector, helping bring it to life at Gran Sasso.

Once all the bolts were tightened and the cables were attached, ICARUS scientists began to pursue their goal of using the detector to study how neutrinos change from one type to another.

During operation, Fava switched gears to create databases to store and log the data. She wrote code to automate the data acquisition system and triggering, which differentiates between neutrino events and background such as passing cosmic rays. “I was trying to take part in whatever activity was going on just to learn as much as possible,” she says.

That flexibility is a trait that Claudio Silverio Montanari, the technical director of ICARUS, praises. “She has a very good capability to adapt,” he says. “Our job, as physicists, is putting together the pieces and making the detector work.”

Caraban Gonzalez, Noemi Ordan, Julien Marius, CERN

### Changing it up

Adapting to changing circumstances is a skill both Fava and ICARUS have in common. When scientists proposed giving the detector an update at CERN and then using it in a suite of neutrino experiments at Fermilab, Fava volunteered to come along for the ride.

Once installed and operating at Fermilab, ICARUS will be used to study neutrinos from a source a few hundred meters away from the detector. In its new iteration, ICARUS will search for sterile neutrinos, a hypothetical kind of neutrino that would interact even more rarely than standard neutrinos. While hints of these low-mass particles have cropped up in some experiments, they have not yet been detected.

At Fermilab, ICARUS also won’t be buried below more than half a mile of rock, a feature of the INFN setup that shielded it from cosmic radiation from space. That means the triggering system will play an even bigger role in this new experiment, Fava says.

“We have a great challenge ahead of us.” She’s up to the task.

### Tommaso Dorigo - Scientificblogging

Revenge Of The Slimeballs - Part 2
This is the second part of a section taken from Chapter 3 of the book "Anomaly! Collider Physics and the Quest for New Phenomena at Fermilab". The chapter recounts the pioneering measurement of the Z mass by the CDF detector, and the competition with SLAC during the summer of 1989. The title of the post is the same as the one of chapter 3, and it refers to the way some SLAC physicists called their Fermilab colleagues, whose hadron collider was to their eyes obviously inferior to the electron-positron linear collider.

## July 25, 2017

### Symmetrybreaking - Fermilab/SLAC

Turning plots into stained glass

Hubert van Hecke, a heavy-ion physicist, transforms particle physics plots into works of art.

At first glance, particle physicist Hubert van Hecke’s stained glass windows simply look like unique pieces of art. But there is much more to them than pretty shapes and colors. A closer look reveals that his creations are actually renditions of plots from particle physics experiments.

Van Hecke learned how to create stained glass during his undergraduate years at Louisiana State University. “I had an artistic background—my father was a painter, so I thought, if I need a humanities credit, I'll just sign up for this,” van Hecke recalls. “So in order to get my physics bachelor’s, I took stained glass.”

Over the course of two semesters, van Hecke learned how to cut pieces of glass from larger sheets, puzzle them together, then solder and caulk the joints. “There were various assignments that gave you an enormous amount of elbow room,” he says. “One of them was to do something with Fibonacci numbers, and one was to pick your favorite philosopher and make a window related to their work.”

Van Hecke continued to create windows and mirrors throughout graduate school but stopped for many years while working as a full-time heavy-ion physicist at Los Alamos National Laboratory and raising a family. Only recently did he return to his studio—this time, to create pieces inspired by physics.

“I had been thinking about designs for a long time—then it struck me that occasionally, you see plots that are interesting, beautiful shapes,” van Hecke says. “So I started collecting pictures as I saw them.”

His first plot-based window, a rectangle-shaped piece with red, orange and yellow glass, was inspired by the results of a neutrino flavor oscillation study from the MiniBooNE experiment at Fermi National Accelerator Laboratory. He created two pieces after that, one from a plot generated during the hunt for the Higgs boson at the Tevatron, also at Fermilab, and the other based on an experiment with quarks and gluons.

According to van Hecke, what inspires him about these plots is “purely the shapes.”

“In terms of the physics, it's what I run across—for example, I see talks about heavy ion physics, elementary particle physics, and neutrinos, [but] I haven't really gone out and searched in other fields,” he says. “Maybe there are nice plots in biology or astronomy.”

Although van Hecke has not yet displayed his pieces publicly, if he does one day, he plans to include explanations for the phenomena the plots illustrate, such as neutrinos and the Standard Model, as a unique way to communicate science.

But before that, van Hecke plans to create more stained glass windows. As of two months ago, he is semiretired—and in between runs to Fermilab, where he is helping with the effort to use Argonne National Laboratory's SeaQuest experiment to search for dark photons, he hopes to spend more time in the studio creating the pieces left on the drawing board, which include plots found in experiments investigating the Standard Model, neutrinoless double beta decay and dark matter interactions.

“I hope to make a dozen or more,” he says. “As I bump into plots, I'll collect them and hopefully, turn them all into windows.”

### Lubos Motl - string vacua and pheno

Wrong turns, basins, GUT critics, and creationists
A notorious holy warrior against physics recently summarized a talk by Nima Arkani-Hamed as follows:
I think Arkani-Hamed is right to identify the 1974 GUT hypothesis as the starting point that led the field into this wrong basin.
As far as I can see, Nima has never made a discovery – or claimed a discovery – that would show that grand unification was wrong or the center of a "wrong basin". Instead, Nima made the correct general point that if you try to improve your state-of-the-art theoretical picture gradually and by small changes that look like improvements, you may find a local minimum (or optimum) but that may be different from the global minimum (or optimum) – from the correct theory. So sometimes one needs to make big steps.

Is grand unification correct? Are the three non-gravitational forces that we know merged into one at a high energy scale? My answer is that we don't know at this moment – the picture has appealing properties, especially in combination with SUSY, but nothing is firmly established and pictures without it may be good enough, too – and I am rather confident that Nima agrees with this answer, Peter W*it's classic lies notwithstanding. Even if we take the latest stringy constructions and insights for granted, there exist comparably attractive compactifications where the electroweak and strong forces are unified at a higher scale; and compactifications where they aren't. String theory always morally unifies all forces, including gravity, but this type of unification is more general and may often be non-unification according to the technical, specific, field-theoretical definition of unification.

Nevertheless, W*it made this untrue statement in his blog post and the discussion started among the crackpots who visit that website: Was grand unification the first "wrong turn"?

Funnily enough, the N*t Even Wr*ng crackpots get divided into two almost equally large camps. In fact, if this community ever managed to discuss at least this basic technical question – what was the first wrong turn in theoretical physics – their estimated thresholds would fill a nearly perfect continuum. For many of them, Einstein's relativity was already the collapse of physics. For others, it was quantum mechanics. Another group would pick quantum field theory. Another group would pick renormalization. One more clique would pick confining QCD. Those would be the groups that deny the theories that are rather clearly experimentally established.

But nothing special would happen at that place. There would be "more moderate" groups that would identify the grand unification as the first wrong turn, or supersymmetric field theories as the first wrong turn, or bosonic string theory, or superstring theory, or non-perturbative string theory, or M-theory, or the flux vacua, or something else.

I've met members of every single one of these groups. Needless to say, as we go towards more far-reaching or newer ideas that haven't been experimentally established, we're genuinely increasingly uncertain whether they're right. But because we can't rule out these ideas, they unavoidably keep on reappearing in research and proposed new theories. It can't be otherwise!

In May, I pointed out that the criticisms of inflation are silly because the true breakthrough of inflation was to notice a mechanism that is really "generic" in the kind of theories we normally use and that have been successfully tested (presence of scalar fields; existence of points away from the minima of the potential; de-Sitter-like cosmic expansion at these places of the configuration space), and that seems to be damn useful to improve certain perceived defects of the Big Bang Theory. Although people aren't 100.000% sure about inflation and especially its technical details, they have eaten the forbidden apple and figured out that the taste is so good that they keep on returning to the tree and pick some fruits from it.

To a large extent, exactly the same comment may be made about grand unification, supersymmetry, string theory, and all these other ideas that the crackpots often like to attack as heresies. Even though we're not 100% certain that any of these ideas holds in the Universe around us, we are 100% sure that because these possible theories and new structures have already been theoretically discovered and they seem to make lots of sense as parts of our possible explanation of physical phenomena, a community of honest theoretical physicists simply cannot outlaw or erase these possibilities again. To ban them would mean to lie to our own faces.

That's exactly what the N*t Even Wrong crackpots want to do – they would love to ban much of theoretical physics, although they haven't agreed whether the ban would apply to all physics after 1900, 1905, 1915, 1925, 1945, 1973, 1977, 1978, 1984, 1995, 2000, 2003, or another number. ;-) But they're obsessed with bans on ideas just like the Catholic Inquisition was obsessed with bans on ideas. This approach is fundamentally incompatible with the scientific approach to our knowledge.

New evidence – or a groundbreaking new theory or experiment(s) – may emerge that will make some or all ideas studied e.g. since the 1970s irrelevant for physics of the world around us. But because such an event hasn't taken place yet, physicists simply can't behave as if it has already taken place. In particular, no new physics beyond the Standard Model has been discovered yet, which makes it clear that all conceivable theories of physics beyond the Standard Model suffer from the same drawback, namely their not having been proven yet.

By the way, the disagreement about the identification of the "first wrong turn" is completely analogous to the "continuum of creationist and intelligent design theories" as it was discussed by Eugenie Scott, an anti-creationist activist.

Just like you can ask what was the first wrong turn in high energy physics, you may ask what is the first or most modest claim by Darwin's theory that is wrong – or the most recent event in the Darwinian picture of the history of species that couldn't happen according to the Darwinian story.

If you collect the answers from the critics of evolution, you will find out that they're just as split as Peter W*it's fellow crackpots. In fact, the hypothesized "first wrong statement" of the standard picture of the history of Earth and life may be anything and all the choices of these wrong statements fill a continuum – they cover all statements of cosmology, geology, biology, macroevolution, and microevolution that have ever been made.

Some people deny that the Universe is more than thousands of years old. Others do accept it but they don't accept that life on Earth is old. Some people accept that but they claim that many "kinds" of animals and plants had to be born simultaneously and independently because they're too different.

In general, "kinds" are supposed to be more general, larger, and more Biblical taxonomic groups than "species" – although "kinds" isn't one of the groups that are used by the conventional scientific taxonomy. However, when you ask how large these "kinds" groups are (questions like whether horses belong to the same "kind" as zebras), various critics of evolution will give you all conceivable answers. Some of them will say that "kinds" are just somewhat bigger than scientific species (those critics of evolution are the most radical ones and many of their statements may really be falsified "almost in the lab"), others will say that they are substantially bigger. Another group will say that "kinds" are vastly larger and they will "only" ban the evolution that would relate birds and lizards or dinosaurs and mammals etc. These "most moderate intelligent designers" might tell you the same thing as the evolutionists concerning the evolution of all vertebrates, for example, but they still leave some of the "largest division of organisms" to an intelligent creator.

The actual reason for the absence of an agreed-upon boundary is obviously the absence of any evidence for any such boundary. In fact, it looks almost certain that no such boundary actually exists – and all life on Earth indeed has a common origin.

Summary: continuum of alternative theories shows that none of them is defensible

Again, to summarize, critics of theoretical physics just like critics of evolution form a continuum.

All of them have to believe in some very important new "boundaries" but any specific location of such a boundary looks absolutely silly and unjustified. Some critics of evolutionary biology say that zebras and horses may have a common ancestor but zebras and llamas can't. Does it make any sense? Why would you believe that two completely analogous differences – zebra-horse and zebra-llama differences – must have totally, qualitatively, metaphysically different explanations? Such a theory looks extremely awkward and inefficient. Once Nature has mechanisms to create zebras and horses from a common ancestor, why shouldn't the same mechanism be enough to explain the rise of llamas and zebras from common ancestors, too?

The case of the critics of physics is completely analogous. If grand unification were the first wrong turn, how do you justify that the group $$SU(3)\times SU(2)\times U(1)$$ is "allowed" to be studied in physics, while $$SO(10)$$ is already blasphemous or "unscientific" (their word for "blasphemous")? It doesn't make the slightest sense. They're two groups and both of them admit models that are consistent with everything we know. $$SO(10)$$ is really simpler and prettier – while its models arguably have to use an uglier (and more extended) spectrum of matter (the new Higgs bosons etc.).

Well, the only rational conclusion is that the efforts to postulate any "red lines" of this kind are utterly stupid. Biologists must be allowed to study arbitrarily deep evolutionary processes and theoretical high energy physicists must be allowed to study all ideas that have ever emerged, look tantalizing, and haven't been ruled out. And critics of theoretical physics must be acknowledged to be intellectually inconsequential deluded animals.

### Tommaso Dorigo - Scientificblogging

ALPIDE: The New CMOS Pixel Chip For ALICE And IMPACT
Last week-end Padova researchers tested the first calorimeter and tracker prototypes of the iMPACT project at the APSS/TIFPA Proton Therapy Facility in Trento (Italy).

iMPACT (innovative Medical Proton Achromatic Calorimeter and Tracker) is a project led by Piero Giubilato, who won an ERC consolidator grant from the European Union. The project aims to develop a high resolution and high rate (>100 kHz/cm²) proton Computed Tomography (pCT) scanner. The scanner will combine a highly-segmented range calorimeter made of PVT scintillators, for energy measurements, and a silicon pixel tracker, for trajectory reconstructions.

## July 21, 2017

### Lubos Motl - string vacua and pheno

Does weak gravity conjecture predict neutrino type, masses and cosmological constant?
String cosmologist Gary Shiu and his junior collaborator Yuta Hamada (Wisconsin) released a rather fascinating hep-th preprint today
Weak Gravity Conjecture, Multiple Point Principle and the Standard Model Landscape
They are combining some of the principles that are seemingly most abstract, most stringy, and use them in such a way that they seem to deduce an estimate for utterly observable quantities such as a realistic magnitude of neutrino masses, their being Dirac, and a sensible estimate for the cosmological constant, too.

What have they done?

In 2005, when I watched him happily, Cumrun Vafa coined the term swampland for the "lore" that was out there but wasn't clearly articulated before that. Namely the lore that even in the absence of the precise identified vacuum of string theory, string theory seems to make some general predictions and ban certain things that would be allowed in effective quantum field theories. According to Vafa, the landscape may be large but it is still just an infinitely tiny, precious fraction embedded in a much larger and less prestigious region, the swampland, the space of possible effective field theories which is full of mud, feces, and stinking putrefying corpses of critics of string theory such as Mr Šmoits. Vafa's paper is less colorful but be sure that this is what he meant. ;-)

The weak gravity conjecture – the hypothesis (justified by numerous very different and complementary pieces of evidence) that consistency of quantum gravity really demands gravity among elementary particles to be weaker than other forces – became the most well-known example of the swampland reasoning. But Cumrun and his followers have pointed out several other general predictions that may be made in string theory but not without it.

Aside from the weak gravity conjecture, Shiu and Hamada use one particular observation: that theories of quantum gravity (=string/M-theory in the most general sense) should be consistent not only in their original spacetime but it should also be possible to compactify them while preserving the consistency.

Shiu and Hamada use this principle for the Core Theory, as Frank Wilczek calls the Standard Model combined with gravity. Well, it's only the Standard Model part that is "really" exploited by Shiu and Hamada. However, the fact that the actual theory also contains quantum gravity is needed to justify the application of the quantum gravity anti-swampland principle. Their point is highly creative. When the surrounding Universe including the Standard Model is a vacuum of string/M-theory, some additional operations – such as extra compactification – should be possible with this vacuum.

On top of these swampland things, Shiu and Hamada also adopt another principle, Froggatt's and Nielsen's and Donald Bennett's multiple point criticality principle. The principle says that the parameters of quantum field theory are chosen on the boundaries of a maximum number of phases – i.e. so that something special seems to happen over there. This principle has been used to argue that the fine-structure constant should be around $$\alpha\approx 1/(136.8\pm 9)$$, the top quark mass should be $$m_t\approx 173\pm 5 \GeV$$, the Higgs mass should be $$m_h\approx 135\pm 9 \GeV$$, and so on. The track record of this principle looks rather impressive to me. In some sense, this principle isn't just inequivalent to naturalness; it is close to its opposite. Naturalness could favor points in the bulk of a "single phase"; the multiple criticality principle favors points in the parameter space that are of "measure zero" to a maximal power, in fact.

Fine. So Shiu and Hamada take our good old Standard Model and compactify one or two spatial dimensions on a circle $$S^1$$ or the torus $$T^2$$ because you shouldn't be afraid of doing such things with the string theoretical vacua, and our Universe is one of them. When they compactify it, they find out that aside from the well-known modest Higgs vev, there is also a stationary point where the Higgs vev is Planckian.

So they analyze the potential as the function of the scalar fields and find out that depending on the unknown facts about the neutrinos, these extra stationary points may be unstable because of various new instabilities. Now, they also impose the multiple point criticality principle and demand our 4-dimensional vacuum to be degenerate with the 3-dimensional compactification – where one extra spatial dimension becomes a short circle. This degeneracy is an unusual, novel, stringy application of the multiple criticality principle that was previously used for boring quantum field theories only.

This degeneracy basically implies that the neutrino masses must be of order $$1-10\meV$$. Obviously, they knew in advance that they wanted to get a similar conclusion because this conclusion seems to be most consistent with our knowledge about neutrinos. And neutrinos should be Dirac fermions, not Majorana fermions. Dirac neutrinos are needed for the spin structure to disable a decay by Witten's bubble of nothing. On top of that, the required vacua only exist if the cosmological constant is small enough, so they have a new justification for the smallness of the cosmological constant that must be comparable to the fourth power of these neutrino masses, too – and as you may know, this is a good approximate estimate of the cosmological constant, too.
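As a rough sanity check of that last claim (my own back-of-the-envelope numbers, not from the paper): plugging in the measured dark-energy density, its fourth root indeed comes out at a few meV, the same ballpark as the quoted neutrino mass window.

```python
# Back-of-the-envelope check that Lambda^(1/4) is a few meV.
# Inputs are approximate 2017-era values (assumptions, not from the paper):
rho_lambda = 5.96e-27 * (2.998e8) ** 2   # dark-energy density in J/m^3 (~0.69 of critical)
ev_per_joule = 1.0 / 1.602e-19           # unit conversion
hbar_c = 1.973e-7                        # eV * m

# Express the density in natural units (eV^4) and take the fourth root.
rho_ev4 = rho_lambda * ev_per_joule * hbar_c ** 3
scale_ev = rho_ev4 ** 0.25               # comes out near 2e-3 eV, i.e. ~2 meV
```

So the fourth root of the cosmological constant and the lightest neutrino mass scale are indeed numerically comparable, which is what makes the claimed connection plausible at all.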

Note that back in 1994, Witten still believed that the cosmological constant had to be zero and he used a compactification of our 4D spacetime down to 3D to make an argument. In some sense, Shiu and Hamada are doing something similar – they don't cite that paper by Witten, however – except that their setup is more advanced and it produces a conclusion that is compatible with the observed nonzero cosmological constant.

Jožin from the Swampland mainly eats the inhabitants of Prague. And who could have thought? He can only be dealt with effectively with the help of a crop duster.

So although these principles are abstract and at least some of them seem unproven or even "not sufficiently justified", there seems to be something correct about them because Shiu and Hamada may extract rather realistic conclusions out of these principles. But if they are right, I think that they did much more than an application of existing principles. They applied them in truly novel, creative ways.

If their apparent success were more than just a coincidence, I would love to understand the deeper reasons why the multiple criticality principle is right and many other things that are needed for a satisfactory explanation why this "had to work".

### Symmetrybreaking - Fermilab/SLAC

Watch the underground groundbreaking

This afternoon, watch a livestream of the start of excavation for the future home of the Deep Underground Neutrino Experiment.

Today in South Dakota, dignitaries, scientists and engineers will mark the start of construction of the future home of America's flagship neutrino experiment with a groundbreaking ceremony.

Participants will hold shovels and give speeches. But this will be no ordinary groundbreaking. It will take place a mile under the earth at Sanford Underground Research Facility, the deepest underground physics lab in the United States.

The groundbreaking will celebrate the beginning of excavation for the Long-Baseline Neutrino Facility, which will house the Deep Underground Neutrino Experiment. When complete, LBNF/DUNE will be the largest experiment ever built in the US to study the properties of mysterious particles called neutrinos. Unlocking the mysteries of these particles could help explain more about how the universe works and why matter exists at all.

Watch the underground groundbreaking at 2:20 p.m. Mountain Time (3:20 p.m. Central) via livestream.

## July 20, 2017

### Axel Maas - Looking Inside the Standard Model

Getting better
Numerical simulations are one of the main tools in our research. The research described in the previous entry, for example, would have been impossible without them.

Numerical simulations require computers to run them. And even though computers become continuously more powerful, they are limited in the end. Not to mention that they cost money to buy and to use. Yes, even using them is expensive. Think of the electricity bill, or of having space available for them.

So, to reduce the costs, we need to use them efficiently. That is good for us, because we can do more research in the same amount of time. And that means that we as a society can make scientific progress faster. It also reduces financial costs, which in fundamental research almost always means the taxpayer's money. And it reduces the environmental stress which we cause by having and running the computers. That is also something which should not be forgotten.

So what does efficiently mean?

Well, we need to write our own computer programs. What we do, nobody has done before us. Most of what we do is really at the edge of what we understand. So nobody was here before us who could have provided us with computer programs. We write them ourselves.

For that to be efficient, we need three important ingredients.

The first seems to be quite obvious. The programs should be correct before we use them to make a large-scale computation. It would be very wasteful to run on a hundred computers for several months, just to figure out it was all for naught because there was an error. Of course, we need to test them somewhere, but this can be done with much less effort. Still, this actually takes quite some time. And it is very annoying. But it needs to be done.

The next two issues seem to be the same, but are actually subtly different. We need to have fast and optimized algorithms. The important difference is: the quality of the algorithm decides how fast it can be in principle. The actual optimization decides to what extent it uses this potential.

The latter point is something which requires a substantial amount of experience with programming. It is not something which can be learned theoretically. And it is more of a craftsmanship than anything else. Being good at optimization can make a program a thousand times faster. So, this is one reason why we try to teach students programming early, so that they can acquire the necessary experience before they enter research in their thesis work. Though there is still research work today which can be done without computers, it has become markedly rarer over the decades. It will never completely vanish, though. But it may well become a comparatively small fraction.
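The algorithm-versus-optimization distinction can be made concrete with a toy example (mine, not from the post): two programs that compute the same running totals. Hand-optimizing the first version can shave off constant factors, but only the better algorithm changes how the cost grows with the input size.

```python
def running_totals_slow(xs):
    # O(n^2): recompute every prefix sum from scratch.
    return [sum(xs[:i + 1]) for i in range(len(xs))]

def running_totals_fast(xs):
    # O(n): a better algorithm reuses the previous total.
    out, total = [], 0
    for x in xs:
        total += x
        out.append(total)
    return out

# Both are "correct" and give identical results; only the growth of the
# running time with the input size differs.
data = list(range(2000))
assert running_totals_slow(data) == running_totals_fast(data)
```

No amount of low-level tuning of the slow version can compete with the fast one for large inputs, which is exactly why the quality of the algorithm bounds what optimization can achieve.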

But whatever optimization can do, it can do only so much without good algorithms. And now we enter the main topic of this entry.

It is not only the code which we develop by ourselves. It is also the algorithms. Because again, they are new. Nobody did this before. So it is also up to us to make them efficient. But to really write a good algorithm requires knowledge about its background. This is called domain-specific knowledge: knowing the scientific background. One reason more why you cannot get it off-the-shelf. Thus, if you want to calculate something new in research using computer simulations, that usually means sitting down and writing a new algorithm.

But even once an algorithm is written down, this does not mean that it is necessarily already the fastest possible one. This, too, requires experience on the one hand, but even more so it is something new. Making an algorithm fast is thus research as well. So algorithms can, and need to be, made better.

Right now I am supervising two bachelor theses where exactly this is done. The algorithms are indeed directly those which are involved with the research mentioned in the beginning. While both are working on the same algorithm, they do it with quite different emphasis.

The aim in one project is to make the algorithm faster, without changing its results. It is a classical case of improving an algorithm. If successful, it will make it possible to push the boundaries of what projects can be done. Thus, it makes computer simulations more efficient, and thus allows us to do more research. One goal reached. Unfortunately, the 'if' already tells you that, as always with research, there is never a guarantee that it is possible. But if this kind of research should continue, it is necessary. The only alternative is waiting for a decade for the computers to become faster, and doing something different in the time in between. Not a very interesting option.

The other one is a little bit different. Here, the algorithm should be modified to serve a slightly different goal. It is not a fundamentally different goal, but it is subtly different. Thus, while it does not create a fundamentally new algorithm, it still does create something new. Something which will make a different kind of research possible. Without the modification, the other kind of research may not be possible for some time to come. But just as it is not possible to guarantee that an algorithm can be made more efficient, it is also not guaranteed that an algorithm with any reasonable amount of potential can be created at all. So this is also true research.

Thus, it remains exciting to see what both theses will ultimately lead to.

So, as you see, behind the scenes research is quite full of the small things which make the big things possible. Both of these projects are probably closer to our everyday work than most of the things I have been posting about before. The everyday work in research is quite often a grind. But, as always, this is what makes the big things ultimately possible. Without projects such as these two theses, our progress would be slowed down to a snail's pace.

### Andrew Jaffe - Leaves on the Line

Python Bug Hunting

This is a technical, nerdy post, mostly so I can find the information if I need it later, but possibly of interest to others using a Mac with the Python programming language, and also since I am looking for excuses to write more here. (See also updates below.)

It seems that there is a bug in the latest (mid-May 2017) release of Apple’s macOS Sierra 10.12.5 (ok, there are plenty of bugs, as there are in any sufficiently complex piece of software).

It first manifested itself (to me) as an error when I tried to load the jupyter notebook, a web-based graphical front end to Python (and other languages). When the command is run, it opens up a browser window. However, after updating macOS from 10.12.4 to 10.12.5, the browser didn’t open. Instead, I saw an error message:

    0:97: execution error: "http://localhost:8888/tree?token=<removed>" doesn't understand the "open location" message. (-1708)


A little googling found that other people had seen this error, too. I was able to figure out a workaround pretty quickly: this behaviour only happens when I want to use the “default” browser, which is set in the “General” tab of the “System Preferences” app on the Mac (I have it set to Apple’s own “Safari” browser, but you can use Firefox or Chrome or something else). Instead, there’s a text file you can edit to explicitly set the browser that you want jupyter to use, located at ~/.jupyter/jupyter_notebook_config.py, by including the line

c.NotebookApp.browser = u'Safari'


(although an unrelated bug in Python means that you can’t currently use “Chrome” in this slot).

But it turns out this isn’t the real problem. I went and looked at the code in jupyter that is run here, and it uses a Python module called webbrowser. Even outside of jupyter, trying to use this module to open the default browser fails, with exactly the same error message (though I’m picking a simpler URL at http://python.org instead of the jupyter-related one above):

>>> import webbrowser
>>> br = webbrowser.get()
>>> br.open("http://python.org")
0:33: execution error: "http://python.org" doesn't understand the "open location" message. (-1708)
False


So I reported this as an error in the Python bug-reporting system, and hoped that someone with more experience would look at it.

But it nagged at me, so I went and looked at the source code for the webbrowser module. There, it turns out that the programmers use a macOS command called “osascript” (which is a command-line interface to Apple’s macOS automation language “AppleScript”) to launch the browser, with a slightly different syntax for the default browser compared to explicitly picking one. Basically, the command is osascript -e 'open location "http://www.python.org/"'. And this fails with exactly the same error message. (The similar code osascript -e 'tell application "Safari" to open location "http://www.python.org/"' which picks a specific browser runs just fine, which is why explicitly setting “Safari” back in the jupyter file works.)

But there is another way to run the exact same AppleScript command. Open the Mac app called “Script Editor”, type `open location "http://python.org"` into the window, and press the “run” button. From the experience with “osascript”, I expected it to fail, but it didn’t: it runs just fine.

So the bug is very specific, and very obscure: it depends on exactly how the offending command is run, so appears to be a proper bug, and not some sort of security patch from Apple (and it certainly doesn’t appear in the 10.12.5 release notes). I have filed a bug report with Apple, but these are not publicly accessible, and are purported to be something of a black hole, with little feedback from the still-secretive Apple development team.

## July 19, 2017

### Axel Maas - Looking Inside the Standard Model

Tackling ambiguities
I have recently published a paper with a rather lengthy and abstract title. In this entry, I want to shed a little light on what is going on.

The paper is actually on a problem which has occupied me for more than a decade by now: how to really define what we mean when we talk about gluons. The reason for this problem is a certain ambiguity. This ambiguity arises because it is often much more convenient to have auxiliary additional stuff around to make calculations simple. But then you have to deal with this additional stuff. In a paper last year I noted that the amount of stuff is much larger than originally anticipated. So you have to deal with even more stuff.

The aim of the research leading to the paper was to make progress with that.

So what did I do? To understand this, it is first necessary to say a few words about how we describe gluons. We describe them by mathematical functions. The simplest such mathematical function makes, loosely speaking, a statement about how probable it is that a gluon moves from one point to another. Since a fancy word for moving is propagating, this function is called a propagator.
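In standard field-theory notation (my gloss; the post itself avoids formulas), this propagator is the two-point correlation function of the gluon field:

```latex
% The gluon propagator: the vacuum expectation value of two gluon fields
% A_\mu^a, with color indices a, b and Lorentz indices \mu, \nu.
D^{ab}_{\mu\nu}(x - y) \;=\; \bigl\langle\, A^{a}_{\mu}(x)\, A^{b}_{\nu}(y) \,\bigr\rangle
```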

So the first question I posed was whether the ambiguity in dealing with the stuff affects this. You may ask whether this should happen at all. Is a gluon not a particle? Should this not be free of ambiguities? Well, yes and no. A particle which we actually detect should be free of ambiguities. But gluons are not detected. Gluons are, in fact, never seen directly. They are confined. This is a very peculiar feature of the strong force. And one which is not yet satisfactorily understood. But it is experimentally well established.

Since something therefore happens to gluons before we can observe them, there is a way out. If the gluon is ambiguous, then this ambiguity has to be canceled by whatever happens to it. Then whatever we detect is not ambiguous. But cancellations are fickle things. If you are not careful in your calculations, something is left uncanceled. And then your results become ambiguous. This has to be avoided. Of course, this is purely a problem for us theoreticians. The experimentalists never have this problem. A long time ago, I actually wrote a paper on this together with a few other people, showing how it may proceed.

So, the natural first step is to figure out what you have to cancel. And therefore to map the ambiguity in its full extent. The possibilities discussed for decades look roughly like this:

As you see, at short distances there is (essentially) no ambiguity. This is actually quite well understood. It is a feature very deeply embedded in the strong interaction. It has to do with the fact that, despite its name, the strong interaction makes itself less known the shorter the distance. But for weak effects we have very precise tools, and we therefore understand it.

On the other hand, at long distances - well, there, for a long time, we did not even know qualitatively what was going on for sure. But, finally, over the decades, we were able to constrain the behavior at least partly. Now, I tested a large part of the remaining range of ambiguities. In the end, it indeed mattered little. There is almost no effect left of the ambiguity on the behavior of the gluon. So, it seems we have this under control.

Or do we? One of the important things in research is that it is never sufficient to confirm your result by just looking at a single thing. Either your explanation fits everything we see and measure, or it cannot be the full story. Or it may even be wrong, and the agreement with part of the observations is just a lucky coincidence. Well, actually not lucky. Rather terrible, since this misguides you.

Of course, doing it all in one go is a horrendous amount of work, and so you work on a few things at a time. Preferably, you first work on those where the most problems are expected. Ultimately, though, you need to have covered everything. You cannot stop and claim victory before you have.

So I did, and looked in the paper at a handful of other quantities. And indeed, in some of them there remain effects. Especially, if you look at how strong the strong interaction is, depending on the distance where you measure it, something remains:

The effects of the ambiguity are thus not qualitative. So it does not change our qualitative understanding of how the strong force works. But there remains some quantitative effect, which we need to take into account.

There is one more important side effect. When I calculated the effects of the ambiguity, I learned also to control how the ambiguity manifests. This does not alter that there is an ambiguity, nor that it has consequences. But it allows others to reproduce how I controlled the ambiguity. This is important because now two results from different sources can be put together, and when using the same control they will fit such that for experimental observables the ambiguity cancels. And thus we have achieved the goal.

To be fair, however, this is currently at the level of operative control. It is not yet a mathematically well-defined and proven procedure. As in so many cases, this still needs to be developed. But having operative control makes it easier to develop the rigorous control than starting without it. So, progress has been made.

### Axel Maas - Looking Inside the Standard Model

Using evolution for particle physics
(I will start to illustrate the entries with some simple sketches. I am not very experienced with this, and thus they will be quite basic. But by making more of them I should gain experience, and they should eventually become better.)

This entry will be on the recently started bachelor thesis of Raphael Wagner.

He is addressing the following problem. Computer simulations are one of the mainstays of our research. But our computer simulations are not exact. They work by simulating a physical system many times with different starts. The final result is then an average over all the simulations. There is an (almost) infinite number of starts. Thus, we cannot include them all. As a consequence, our average is not the exact value we are looking for. Rather, it is an estimate. We can also estimate the range in which the real result should lie.

This is sketched in the following picture

The black line is our estimate and the red lines give the range where the true value should be. From left to right some parameter runs; in the case of the thesis, the parameter is the time. The value is roughly the probability for a particle to survive this long. So we have an estimate for the survival probability.

Fortunately, we know a little more. From quite basic principles we know that this survival probability cannot depend on the time in an arbitrary way. Rather, it has a particular mathematical form. This function depends only on a very small set of numbers, the most important of which is the mass of the particle.

What we then do is to start with some theory, simulate it, and then extract the masses of the particles from such a survival probability. Yes, we do not know them beforehand. This is because the masses of particles are changed in a quantum theory by quantum effects. It is these effects that we simulate, to get the final values of the masses.

Up to now, we have determined the mass in a very simple-minded way: we just look for the numbers in the mathematical function which bring it closest to the data. That seems reasonable. Unfortunately, the function is not so simple, and one can show mathematically that this does not necessarily give the best result. You can imagine it in the following way: suppose you want to find the deepest valley in an area. Walking downhill will certainly get you into a valley. But the valley you reach by only ever walking downhill will usually not be the deepest one:

But this is the way we determine the numbers so far. So there may be other options.

There is a different possibility. In the picture of the hills, you could instead deploy a number of ants, of which some prefer to walk uphill, some downhill, and some switch between the two. The ants live, die, and reproduce. Now, if you give the ants more to eat when they live in a deeper valley, evolution will in time bring the population to live in the deepest valley:

And then you have what you want.

This is called a genetic algorithm. It is used in many areas of engineering. The processor of the computer or smartphone you use to read this has likely been optimized using such algorithms.

The bachelor thesis is now to apply the same idea to find better estimates for the masses of the particles in our simulations. This requires understanding what the equivalents of the depth of the valley and the food for the ants would be, and how long we should let evolution run its course. Then we only have to monitor the (virtual) ants to find our prize.
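
The scheme can be made concrete with a toy sketch. The following Python snippet (purely illustrative, not the thesis code; all numbers are made up) fits a survival probability of the assumed form A·exp(-m·t) to noisy fake data with a bare-bones genetic algorithm. The negative chi-square plays the role of the depth of the valley, i.e. the food:

```python
import numpy as np

rng = np.random.default_rng(42)

# Fake "simulation" data: survival probability P(t) = A * exp(-m * t) + noise.
true_A, true_m = 1.0, 0.5
t = np.linspace(0.0, 10.0, 50)
data = true_A * np.exp(-true_m * t) + rng.normal(0.0, 0.01, t.size)

def fitness(params):
    """The 'depth of the valley': negative chi-square of model vs. data."""
    A, m = params
    return -np.sum((A * np.exp(-m * t) - data) ** 2)

# The 'ants': a population of random (A, m) guesses.
pop = rng.uniform([0.1, 0.01], [2.0, 2.0], size=(100, 2))

for generation in range(200):
    scores = np.array([fitness(p) for p in pop])
    survivors = pop[np.argsort(scores)[-50:]]    # fitter half gets 'more food'
    children = survivors + rng.normal(0.0, 0.02, survivors.shape)  # mutation
    pop = np.vstack([survivors, children])       # next generation

best_A, best_m = max(pop, key=fitness)
print(best_A, best_m)   # should land close to the true values (1.0, 0.5)
```

Real applications use more sophisticated selection and crossover operators, but the ingredients are exactly these: a population, a fitness, and reproduction with mutation.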

## July 18, 2017

### Symmetrybreaking - Fermilab/SLAC

A theory about gravity challenges our understanding of the universe.

For millennia, humans held a beautiful belief. Our planet, Earth, was at the center of a vast universe, and all of the planets and stars and celestial bodies revolved around us. This geocentric model, though it had floated around since the 6th century BCE, was written in its most elegant form by Claudius Ptolemy around 140 AD.

When this model encountered problems, such as the retrograde motions of planets, scientists reworked the model to fit the data by coming up with phenomena such as epicycles (mini orbits within orbits).

It wasn’t until 1543, 1400 years later, that Nicolaus Copernicus set in motion a paradigm shift that would give way to centuries of new discoveries. According to Copernicus’ radical theory, Earth was not the center of the universe but simply one of a long line of planets orbiting around the sun.

But even as evidence that we lived in a heliocentric system piled up and scientists such as Galileo Galilei perfected the model, society held onto the belief that the entire universe orbited around Earth until the early 19th century.

To Erik Verlinde, a theoretical physicist at the University of Amsterdam, the idea of dark matter is the geocentric model of the 21st century.

“What people are doing now is allowing themselves free parameters to sort of fit the data,” Verlinde says. “You end up with a theory that has so many free parameters it's hard to disprove.”

Dark matter, an as-yet-undetected form of matter that scientists believe makes up more than a quarter of the mass and energy of the universe, was first theorized when scientists noticed that stars at the outer edges of galaxies and galaxy clusters were moving much faster than Newton’s theory of gravity said they should. Up until this point, scientists have assumed that the best explanation for this is that there must be missing mass in the universe holding those fast-moving stars in place in the form of dark matter.

But Verlinde has come up with a set of equations that explains these galactic rotation curves by viewing gravity as an emergent force — a result of the quantum structure of space.

The idea is related to dark energy, which scientists think is the cause for the accelerating expansion of our universe. Verlinde thinks that what we see as dark matter is actually just interactions between galaxies and the sea of dark energy in which they’re embedded.

“Before I started working on this I never had any doubts about dark matter,” Verlinde says. “But then I started thinking about this link with quantum information and I had the idea that dark energy is carrying more of the dynamics of reality than we realize.”

Verlinde is not the first theorist to come up with an alternative to dark matter. Many feel that his theory echoes the sentiment of physicist Mordehai Milgrom’s equations of “modified Newtonian dynamics,” or MOND. Just as Einstein modified Newton’s laws of gravity to fit to the scale of planets and solar systems, MOND modifies Einstein’s laws of gravity to fit to the scale of galaxies and galaxy clusters.

Verlinde, however, makes the distinction that he’s not deriving the equations of MOND, rather he’s deriving what he calls a “scaling relation,” or a volume effect of space-time that only becomes important at large distances.

Stacy McGaugh, an astrophysicist at Case Western Reserve University, says that while MOND is primarily the notion that the effective force of gravity changes with acceleration, Verlinde’s ideas are more of a ground-up theoretical work.

“He's trying to look at the structure of space-time and see if what we call gravity is a property that emerges from that quantum structure, hence the name emergent gravity,” McGaugh says. “In principle, it's a very different approach that doesn't necessarily know about MOND or have anything to do with it.”

One of the appealing things about Verlinde’s theory, McGaugh says, is that it naturally produces evidence of MOND in a way that “just happens.”

“That's the sort of thing that one looks for,” McGaugh says. “There needs to be some basis of why MOND happens, and this theory might provide it.”

Verlinde’s ideas have been greeted with a fair amount of skepticism in the scientific community, in part because, according to Kathryn Zurek, a theoretical physicist at the US Department of Energy’s Lawrence Berkeley National Laboratory, his theory leaves a lot unexplained.

“Theories of modified gravity only attempt to explain galactic rotation curves [those fast-moving stars],” Zurek says. “As evidence for dark matter, that's only one very small part of the puzzle. Dark matter explains a whole host of observations from the time of the cosmic microwave background when the universe was just a few hundred thousand years old through structure formation all the way until today.”

Illustration by Ana Kova

Zurek says that in order for scientists to start lending weight to his claims, Verlinde needs to build the case around his theory and show that it accommodates a wider range of observations. But, she says, this doesn’t mean that his ideas should be written off.

“One should always poke at the paradigm,” Zurek says, “even though the cold dark matter paradigm has been hugely successful, you always want to check your assumptions and make sure that you're not missing something that could be the tip of the iceberg.”

McGaugh had a similar crisis of faith in dark matter when he was working on an experiment wherein MOND’s predictions were the only ones that came true in his data. He had been making observations of low-surface-brightness galaxies, wherein stars are spread more thinly than in galaxies such as the Milky Way, where the stars are crowded relatively close together.

McGaugh says his results did not make sense to him in the standard dark matter context, and it turned out that the properties that were confusing to him had already been predicted by Milgrom’s MOND equations in 1983, before people had even begun to take seriously the idea of low-surface-brightness galaxies.

Although McGaugh’s experience caused him to question the existence of dark matter and instead argue for MOND, others have not been so quick to join the cause.

“We subscribe to a particular paradigm and most of our thinking is constrained within the boundaries of that paradigm, and so if we encounter a situation in which there is a need for a paradigm shift, it's really hard to think outside that box,” McGaugh says. “Even though we have rules for the game as to when you're supposed to change your mind and we all in principle try to follow that, in practice there are some changes of mind that are so big that we just can't overcome our human nature.”

McGaugh says that many of his colleagues believe that there’s so much evidence for dark matter that it’s a waste of time to consider any alternatives. But he believes that all of the evidence for dark matter might instead be an indication that there is something wrong with our theories of gravity.

“I kind of worry that we are headed into another thousand years of dark epicycles,” McGaugh says.

But according to Zurek, if MOND came up with anywhere near the evidence that has been amassed for the dark matter paradigm, people would be flocking to it. The problem, she says, is that at the moment MOND just does not come anywhere near to passing the number of tests that cold dark matter has. She adds that there are some physicists who argue that the cold dark matter paradigm can, in fact, explain those observations about low-surface-brightness galaxies.

Recently, Case Western held a workshop wherein they gathered together representatives from different communities, including those working on dark matter models, to discuss dwarf galaxies and the external field effect, which is the notion that very low-density objects will be affected by what’s around them. MOND predicts that the dynamics of a small satellite galaxy will depend on its proximity to its giant host in a way that doesn't happen with dark matter.

McGaugh says that in attendance at the workshop were a group of more philosophically inclined people who use a set of rules to judge theories, which they’ve put together by looking back at how theories have developed in the past.

“One of the interesting things that came out of that was that MOND is doing better on that score card,” he says. “It’s more progressive in the sense that it's making successful predictions for new phenomena whereas in the case of dark matter we've had to repeatedly invoke ad hoc fixes to patch things up.”

Verlinde’s ideas, however, didn’t come up much within the workshop. While McGaugh says that the two theories are closely enough related that he would hope the same people pursuing MOND would be interested in Verlinde’s theory, he added that not everyone shares that attitude. Many are waiting for more theoretical development and further observational tests.

“The theory needs to make a clear prediction so that we can then devise a program to go out and test it,” he says. “It needs to be further worked out to get beyond where we are now.”

Verlinde says he realizes that he still needs to develop his ideas further and extend them to explain things such as the formation of galaxies and galaxy clusters. Although he has mostly been working on this theory on his own, he recognizes the importance of building a community around his ideas.

Over the past few months, he has been giving presentations at different universities, including Princeton, Harvard, Berkeley, Stanford, and Caltech. There is currently a large community of people working on ideas of quantum information and gravity, he says, and his main goal is to get more people, in particular string theorists, to start thinking about his ideas to help him improve them.

“I think that when we understand gravity better and we use those equations to describe the evolution of the universe, we may be able to answer questions more precisely about how the universe started,” Verlinde says. “I really think that the current description is only part of the story and there's a much deeper way of understanding it—maybe an even more beautiful way.”

## July 16, 2017

### Matt Strassler - Of Particular Significance

Ongoing Chance of Northern (or Southern) Lights

As forecast, the cloud of particles from Friday’s solar flare (the “coronal mass ejection”, or “CME”) arrived at our planet a few hours after my last post, early in the morning New York time. If you’d like to know how I knew that it had reached Earth, and how I know what’s going on now, scroll down to the end of this post and I’ll show you the data I was following, which is publicly available at all times.

So far the resulting auroras have stayed fairly far north, and so I haven’t seen any — though they were apparently seen last night in Washington and Wyoming, and presumably easily seen in Canada and Alaska. [Caution: sometimes when people say they’ve been “seen”, they don’t quite mean that; I often see lovely photos of aurora that were only visible to a medium-exposure camera shot, not to the naked eye.]  Or rather, I should say that the auroras have stayed fairly close to the Earth’s poles; they were also seen in New Zealand.

Russia and Europe have a good opportunity this evening. As for the U.S.? The storm in the Earth’s magnetic field is still going on, so tonight is still a definite possibility for northern states. Keep an eye out! Look for what is usually a white or green-hued glow, often in swathes or in stripes pointing up from the northern horizon, or even overhead if you’re lucky.  The stripes can move around quite rapidly.

Now, here’s how I knew all this.  I’m no expert on auroras; that’s not my scientific field at all.   But the U.S. Space Weather Prediction Center at the National Oceanic and Atmospheric Administration, which needs to monitor conditions in space in case they should threaten civilian and military satellites or even installations on the ground, provides a wonderful website with lots of relevant data.

The first image on the site provides the space weather overview; a screenshot from the present is shown below, with my annotations.  The upper graph indicates a blast of x-rays (a form of light not visible to the human eye) which is generated when the solar flare, the magnetically-driven explosion on the sun, first occurs.  Then the slower cloud of particles (protons, electrons, and other atomic nuclei, all of which have mass and therefore can’t travel at light’s speed) takes a couple of days to reach Earth.  Its arrival is shown by the sudden jump in the middle graph.  Finally, the lower graph measures how active the Earth’s magnetic field is.  The only problem with that plot is that it tends to be three hours out of date, so beware of that! A “Kp index” of 5 shows significant activity; 6 means that auroras are likely to be moving away from the poles; and 7 or 8 mean that the chances in a place like the northern half of the United States are pretty good.  So far, 6 has been the maximum generated by the current flare, but things can fluctuate a little, so 6 or 7 might occur tonight.  Keep an eye on that lower plot; if it drops back down to 4, forget it, but if it’s up at 7, take a look for sure!
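
For quick reference, the thresholds quoted above can be condensed into a small helper. This is my paraphrase of the post’s rules of thumb, not an official NOAA scale:

```python
# Rough aurora outlook by Kp index, paraphrasing the thresholds in the post.
def aurora_outlook(kp: int) -> str:
    if kp >= 7:
        return "good chance across the northern half of the United States"
    if kp == 6:
        return "auroras likely moving away from the poles"
    if kp == 5:
        return "significant geomagnetic activity; watch the northern horizon"
    return "probably too quiet; forget it"

print(aurora_outlook(6))
```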

Also on the site is data from the ACE satellite.  This satellite sits 950 thousand miles [1.5 million kilometers] from Earth, between Earth and the Sun, which is 93 million miles [150 million kilometers] away.  At that vantage point, it gives us (and our other satellites) a little early warning, of up to an hour, before the cloud of slow particles from a solar flare arrives.  That provides enough lead-time to turn off critical equipment that might otherwise be damaged.  And you can see, in the plot below, how at a certain time in the last twenty-four hours the readings from the satellite, which had been tepid before, suddenly started fluctuating wildly.  That was the signal that the particle cloud from the flare had struck the satellite, and would arrive shortly at our location.
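
That lead time is easy to check: divide ACE’s distance from Earth by a typical cloud speed. The speed below is an assumed round figure (the post only says “up to an hour”):

```python
# Back-of-the-envelope check of ACE's early-warning window.
ace_distance_km = 1.5e6       # ACE sits this far sunward of Earth
cloud_speed_km_s = 500.0      # assumed typical speed of the particle cloud
warning_minutes = ace_distance_km / cloud_speed_km_s / 60.0
print(f"{warning_minutes:.0f} minutes of warning")   # 50 minutes
```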

It’s a wonderful feature of the information revolution that you can get all this scientific data yourself, and not wait around hoping for a reporter or blogger to process it for you.  None of this was available when I was a child, and I missed many a sky show.  A big thank you to NOAA, and to the U.S. taxpayers who make their work possible.

Filed under: Astronomy Tagged: astronomy, auroras, space

## July 15, 2017

### Matt Strassler - Of Particular Significance

Lights in the Sky (maybe…)

The Sun is busy this summer. The upcoming eclipse on August 21 will turn day into deep twilight and transfix millions across the United States.  But before we get there, we may, if we’re lucky, see darkness transformed into color and light.

On Friday July 14th, a giant sunspot in our Sun’s upper regions, easily visible if you project the Sun’s image onto a wall, generated a powerful flare.  A solar flare is a sort of magnetically powered explosion; it produces powerful electromagnetic waves and often, as in this case, blows a large quantity of subatomic particles from the Sun’s corona. The latter is called a “coronal mass ejection.” It appears that the cloud of particles from Friday’s flare is large, and headed more or less straight for the Earth.

Light, visible and otherwise, is an electromagnetic wave, and so the electromagnetic waves generated in the flare — mostly ultraviolet light and X-rays — travel through space at the speed of light, arriving at the Earth in eight and a half minutes. They cause effects in the Earth’s upper atmosphere that can disrupt radio communications, or worse.  That’s another story.

But the cloud of subatomic particles from the coronal mass ejection travels a few hundred times slower than light, and takes about two or three days to reach the Earth.  The wait is on.
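
Those two figures are consistent, as a quick calculation shows (using an assumed rounded Sun-Earth distance of 150 million kilometers):

```python
# Rough check of "a few hundred times slower ... two or three days".
sun_earth_km = 150e6
c_km_s = 299792.458                                   # speed of light
speeds = {days: sun_earth_km / (days * 86400) for days in (2, 3)}
for days, speed in speeds.items():
    print(f"{days} days -> {speed:.0f} km/s, about "
          f"{c_km_s / speed:.0f}x slower than light")
```

A two-day trip requires roughly 870 km/s, a three-day trip roughly 580 km/s, both a few hundred times slower than light.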

Bottom line: a huge number of high-energy subatomic particles may arrive in the next 24 to 48 hours. If and when they do, the electrically charged particles among them will be trapped in, and shepherded by, the Earth’s magnetic field, which will drive them spiraling into the atmosphere close to the Earth’s polar regions. And when they hit the atmosphere, they’ll strike atoms of nitrogen and oxygen, which in turn will glow. Aurora Borealis, Northern Lights.

So if you live in the upper northern hemisphere, including Europe, Canada and much of the United States, keep your eyes turned to the north (and to the south if you’re in Australia or southern South America) over the next couple of nights. Dark skies may be crucial; the glow may be very faint.

You can also keep abreast of the situation, as I will, using NOAA data, available for instance at

http://www.swpc.noaa.gov/communities/space-weather-enthusiasts

The plot on the upper left of that website, an example of which is reproduced below, shows three types of data. The top graph shows the amount of X-rays impacting the atmosphere; the big jump on the 14th is Friday’s flare. And if and when the Earth’s magnetic field goes nuts and auroras begin, the bottom plot will show the so-called “Kp Index” climbing to 5, 6, or hopefully 7 or 8. When the index gets that high, there’s a much greater chance of seeing auroras much further away from the poles than usual.

Keep an eye also on the data from the ACE satellite, lower down on the website; it’s placed to give Earth an early warning, so when its data gets busy, you’ll know the cloud of particles is not far away.

Wishing you all a great sky show!

Filed under: LHC News

## July 13, 2017

### Clifford V. Johnson - Asymptotia

It Can be Done

For those interested in giving more people access to science, and especially those who act as gate-keepers, please pause to note that* a primetime drama featuring tons of real science in nearly every episode can get 10 Emmy nominations. Congratulations National Geographic’s Genius! (Full list here. See an earlier post … Click to continue reading this post

The post It Can be Done appeared first on Asymptotia.

### Symmetrybreaking - Fermilab/SLAC

SLAC accelerator plans appear in Smithsonian art exhibit

The late artist June Schwarcz found inspiration in some unusual wrapping paper her husband brought home from the lab.

Leroy Schwarcz, one of the first engineers hired to build SLAC National Accelerator Laboratory’s original 2-mile-long linear accelerator, thought his wife might like to use old mechanical drawings of the project as wrapping paper. So, he brought them home.

His wife, acclaimed enamelist June Schwarcz, had other ideas.

Today, works called SLAC Drawing III, VII and VIII, created in 1974 and 1975 from electroplated copper and enamel, form a unique part of a retrospective at the Smithsonian’s Renwick Gallery in Washington, D.C.

Among the richly formed and boldly textured and colored vessels that make up the majority of June’s oeuvre, the SLAC-inspired panels stand out for their fidelity to the mechanical design of their inspiration.

The description next to the display at the gallery describes the “SLAC Blueprints” as resembling “ancient pictographs drawn on walls of a cave or glyphs carved in stone.” The designs appear to depict accelerator components, such as electromagnets and radio frequency structures.

According to Harold B. Nelson, who curated the exhibit with Bernard N. Jazzar, “The panels are quite unusual in the subtle color palette she chose; in her use of predominantly opaque enamels; in her reliance on a rectilinear, geometric format for her compositions; and in her reference in the work to machines, plans, numbers, and mechanical parts.

“We included them because they are extremely beautiful and visually powerful. Together they form an important group within her body of work.”

### Making history

June and Leroy Schwarcz met in the late 1930s and were married in 1943. Two years later they moved to Chicago where Leroy would become chief mechanical engineer for the University of Chicago’s synchrocyclotron, which was at the time the highest-energy proton accelerator in the world.

Having studied art and design at the Pratt Institute in Brooklyn several years earlier, June found her way into a circle of notable artists in Chicago, including Bauhaus legend László Moholy-Nagy, founder of Chicago’s Institute of Design.

Around 1954, June was introduced to enameling and shortly thereafter began to exhibit her art. She and her husband had two children and relocated several times during the 1950s for Leroy’s work. In 1958 they settled in Sausalito, California, where June set up her studio in the lower level of their hillside home.

In 1961, Leroy became the first mechanical engineer hired by Stanford University to work on “Project M,” which would become the famous 2-mile-long linear accelerator at SLAC. He oversaw the engineers during early design and construction of the linac, which eventually enabled Nobel-winning particle physics research.

June and Leroy’s daughter, Kim Schwarcz, who made a living as a glass blower and textile artist until the mid 1980s and occasionally exhibited with her mother, remembers those early days at the future lab.

“Before SLAC was built, the offices were in Quonset huts, and my father used to bring me down, and I would bicycle all over the campus,” she recalled. “Pief was a family friend and so was Bob Mozley. Mom introduced Bob to his future wife…It was a small community and a really nice community.”

W.K.H. “Pief” Panofsky was the first director of SLAC; he and Mozley were renowned SLAC physicists and national arms control experts.

### Finding beauty

Kim was not surprised that her mother made art based on the SLAC drawings. She remembers June photographing the foggy view outside their home and getting inspiration from nature, ethnic art and Japanese clothing.

“She would take anything and make something out of it,” Kim said. “She did an enamel of an olive oil can once and a series called Adam’s Pants that were based on the droopy pants my son wore as a teen.”

But the fifteen SLAC-inspired compositions were unique and a family favorite; Kim and her brother Carl both own some of them, and others are at museums.

In a 2001 oral history interview with the Smithsonian Institution's Archives of American Art, June explained the detailed work involved in creating the SLAC drawings by varnishing, scribing, electroplating and enameling a copper sheet: “I'm primarily interested in having things that are beautiful, and of course, beauty is a complicated thing to devise, to find.”

### Engineering art

Besides providing inspiration in the form of technical drawings, Leroy was influential in June’s career in other ways.

Around 1962 he introduced her to Jimmy Pope at the SLAC machine shop, who showed June how to do electroplating, a signature technique of her work. Electroplating involves using an electric current to deposit a coating of metal onto another material. She used it to create raised surfaces and to transform thin sheets of copper—which she stitched together using copper wire—into substantial, free-standing vessel-like forms. She then embellished these sculptures with colored enamel.

Leroy built a 30-gallon plating bath and other tools for June’s art-making at their shared workshop.

“Mom was tiny, 5 feet tall, and she had these wobbly pieces on the end of a fork that she would put into a hot kiln. It was really heavy. Dad made a stand so she could rest her arm and slide the piece in,” Kim recalls.

“He was very inventive in that way, and very creative himself,” she said. “He did macramé in the 1960s, made wooden spoons and did scrimshaw carvings on bone that were really good.”

Kim remembers the lower-level workshop as a chaotic and inventive space. “For the longest time, there was a wooden beam in the middle of the workshop we would trip over. It was meant for a boat dad wanted to build—and eventually did build after he retired,” she said.

At SLAC Leroy’s work was driven by his “amazingly good intuition,” according to a tribute written by Mozley upon his colleague’s death in 1993. Even when he favored crude drawings to exact math, “his intuitive designs were almost invariably right,” he wrote.

After the accelerator was built, Leroy turned his attention to the design, construction and installation of a streamer chamber scientists at SLAC used as a particle detector. In 1971 he took a leave of absence from the California lab to go back to Chicago and move the synchrocyclotron’s 2000-ton magnet from the university to Fermi National Accelerator Laboratory.

“[Leroy] was the only person who could have done this because, although drawings existed, knowledge of the assembly procedures existed only in the minds of Leroy and those who had helped him put the cyclotron together,” Mozley wrote.

### Beauty on display

June continued making art at her Sausalito home studio up until two weeks before her death in 2015 at the age of 97. A 2007 video shows the artist at work there 10 years prior to her passing.

After Leroy died, her own art collection expanded on the shelves and walls of her home.

“As a kid, the art was just what mom did, and it never changed,” Kim remembers. “She couldn’t wait for us to go to school so she could get to work, and she worked through health challenges in later years.”

The Smithsonian exhibit is a unique collection of June’s celebrated work, with its traces of a shared history with SLAC and one of the lab’s first mechanical engineers.

“June had an exceptionally inquisitive mind, and we think you get a sense of the rich breadth of her vision in this wonderful body of work,” says curator Jazzar.

June Schwarcz: Invention and Variation is the first retrospective of the artist’s work in 15 years and includes almost 60 works. The exhibit runs through August 27 at the Smithsonian American Art Museum Renwick Gallery.

Editor's note: Some of the information from this article was derived from an essay written by Jazzar and Nelson that appears in a book based on the exhibition with the same title.

## July 12, 2017

### Marco Frasca - The Gauge Connection

Something to say but not yet…

Last week I was in Montpellier to attend the QCD 17 conference, hosted at the CNRS and organized mainly by Stephan Narison. A lot of people from CERN participated, presenting new results very close to the main summer conferences. This year, QCD 17 was held in conjunction with EPS-HEP 2017, where the new results from the LHC were first presented. This meant that the contents of the talks at the two conferences overlapped within a matter of a few hours.

On Friday, the last day of the conference, I posted the following tweet after attending the talk by Shunsuke Honda on behalf of ATLAS at QCD 17:

The title of the talk was “Cross sections and couplings of the Higgs Boson from ATLAS”. As you can read from it, there is a deviation of about 2 sigma from the Standard Model for the Higgs decaying to ZZ(4l) in VBF production. Indeed, they can still claim agreement, but it is interesting anyway (maybe we are missing something?). The previous day at EPS-HEP 2017, Ruchi Gupta, on behalf of ATLAS, presented an identical talk with the title “Measurement of the Higgs boson couplings and properties in the diphoton, ZZ and WW decay channels using the ATLAS detector”, and the slide was the following:

The result is still there, but with a somewhat more sober presentation. What does this mean? Presently, very little. We are still within the Standard Model, even if something seems to be peeping out. In order to claim a discovery, this effect would have to be seen with a smaller error, and at CMS too. The implication would be a more complex spectrum of the Higgs sector, with a possible new understanding of naturalness if such a spectrum had no formal upper bound. People at CERN have promised more data in the coming weeks. Let us see what happens to this small effect.

Filed under: Conference, Particle Physics, Physics Tagged: ATLAS, CERN, Higgs decay

## July 11, 2017

### Symmetrybreaking - Fermilab/SLAC

A new model for standards

In an upcoming refresh, particle physics will define units of measurement such as the meter, the kilogram and the second.

While America remains obstinate about using Imperial units such as miles, pounds and degrees Fahrenheit, most of the world has agreed that using units related by powers of 10 is a better idea. The metric system, also known as the International System of Units (SI), is the most comprehensive and precise system for measuring the universe that humans have developed.

In 2018, the 26th General Conference on Weights and Measures will convene and likely adopt revised definitions for the seven base metric system units for measuring: length, mass, time, temperature, electric current, luminosity and quantity.

The modern metric system owes its precision to particle physics, which has the tools to investigate the universe more precisely than any microscope. Measurements made by particle physicists can be used to refine the definitions of metric units. In May, a team of German physicists at the Physikalisch-Technische Bundesanstalt made the most precise measurements yet of the Boltzmann constant, which will be used to define units of temperature.

Since the metric system was established in the 1790s, scientists have attempted to give increasingly precise definitions to these units. The next update will define every base unit using fundamental constants of the universe that have been derived by particle physics.

### meter (distance):

Starting in 1799, the meter was defined by a prototype meter bar, which was just a platinum bar. Physicists eventually realized that distance could be defined by the speed of light, which has been measured with an accuracy to one part in a billion using an interferometer (interestingly, the same type of detector the LIGO collaboration used to discover gravitational waves). The meter is currently defined as the distance traveled by light (in a vacuum) for 1/299,792,458 of a second, and will remain effectively unchanged in 2018.
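As a small illustration (my own arithmetic, not from the article), fixing the speed of light turns length into a derived quantity of timekeeping:

```python
# Illustrative arithmetic: the meter definition fixes the speed of light
# exactly, so any distance corresponds to a definite light travel time.
C = 299_792_458  # speed of light in vacuum, m/s (exact by definition)

def light_travel_time(distance_m):
    """Time in seconds for light in vacuum to cross the given distance."""
    return distance_m / C

# One meter corresponds to roughly 3.336 nanoseconds of light travel.
print(f"{light_travel_time(1.0) * 1e9:.3f} ns")
```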

### kilogram (mass):

For over a century, the standard kilogram has been a small platinum-iridium cylinder housed at the International Bureau of Weights and Measures in France. But even its mass fluctuates due to factors such as the accumulation of microscopic dust. Scientists hope to redefine the kilogram in 2018 by setting the value of Planck’s constant to exactly 6.626070040×10⁻³⁴ kilogram meters squared per second. Planck’s constant relates a photon’s energy to its frequency and sets the scale at which quantum effects matter. This fundamental value, represented by the letter h, is integral to calculating energies in particle physics.
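A minimal sketch of how h enters energy calculations, using the exact value quoted above (the example frequency is my own choice, not from the article):

```python
# With Planck's constant fixed exactly, a photon's energy follows directly
# from its frequency via E = h * f.
H = 6.626070040e-34  # Planck's constant, J*s (proposed exact value)

def photon_energy(frequency_hz):
    """Energy in joules of a photon with the given frequency."""
    return H * frequency_hz

# Green light at 540 THz (the frequency used in the candela definition):
print(photon_energy(540e12))  # ~3.6e-19 J
```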

### second (time):

The earliest seconds were defined as divisions of the time between full moons. Later, seconds were defined by the solar day, and eventually by the time it takes Earth to orbit the sun. Today, seconds are defined by atomic time, which is precise to one part in 10 billion. Atomic time is based on the periods of radiation emitted by atoms, a measurement that relies heavily on particle physics techniques. One second is currently defined as 9,192,631,770 periods of the radiation from a transition between two hyperfine levels of the cesium-133 atom, and it will remain effectively unchanged.

### kelvin (temperature):

Kelvin is the temperature scale that starts at absolute zero, the coldest possible state of matter. Currently, the kelvin is defined by the triple point of water, where water can coexist as a solid, liquid and gas. The triple point is 273.16 kelvin, so a single kelvin is 1/273.16 of the triple point. But because water can never be completely pure, impurities can influence the triple point. In 2018 scientists hope to redefine the kelvin by setting the value of Boltzmann’s constant to exactly 1.38064852×10⁻²³ joules per kelvin. Boltzmann’s constant links the movement of particles in a gas (their average kinetic energy) to the temperature of the gas. Denoted by the symbol k, the Boltzmann constant is ubiquitous in physics calculations that involve temperature and entropy.
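The link between temperature and kinetic energy can be sketched with the value quoted above (a standard textbook relation, not taken from the article):

```python
# Boltzmann's constant converts temperature into the average translational
# kinetic energy of a gas particle: <E> = (3/2) k T.
K_B = 1.38064852e-23  # Boltzmann constant, J/K (proposed exact value)

def mean_kinetic_energy(temperature_k):
    """Average translational kinetic energy in joules at temperature T."""
    return 1.5 * K_B * temperature_k

# At the triple point of water (273.16 K):
print(mean_kinetic_energy(273.16))  # ~5.66e-21 J
```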

### ampere (electric current):

André-Marie Ampère, often considered the father of electrodynamics, has the honor of having the basic unit of electric current named after him. Right now, the ampere is defined by the amount of current required to produce a force of 2×10⁻⁷ newtons per meter of length between two parallel conductors of infinite length. Naturally, things of infinite length are hard to come by, so the proposed definition instead defines the ampere through the fundamental charge of a particle. The new definition relies on the charge of the electron, which will be set to exactly 1.6021766208×10⁻¹⁹ ampere-seconds.
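One way to picture the proposed definition (my own illustrative arithmetic): a current is just a count of elementary charges passing per second.

```python
# Under the proposed definition, one ampere is a fixed number of elementary
# charges flowing past a point each second: N = I / e.
E_CHARGE = 1.6021766208e-19  # elementary charge, coulombs (ampere-seconds)

def electrons_per_second(current_a):
    """Number of elementary charges passing per second at a given current."""
    return current_a / E_CHARGE

# One ampere corresponds to about 6.24e18 electrons per second.
print(f"{electrons_per_second(1.0):.3e}")
```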

### candela (luminosity):

The last of the base SI units to be established, the candela measures luminous intensity, what we typically refer to as brightness. Early standards for the candela used a phenomenon from quantum mechanics called “black body radiation,” the light that all objects radiate as a function of their heat. Currently, the candela is defined more fundamentally as 1/683 watt per steradian at a frequency of 540×10¹² hertz, a definition which will remain effectively unchanged. Hard to picture? A candle, conveniently, emits about one candela of luminous intensity.
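A quick aside on what one candela adds up to (my own arithmetic, not from the article): an idealized point source radiating one candela uniformly in every direction spreads that intensity over the full 4π steradians of solid angle.

```python
# Total luminous flux (lumens) from a hypothetical isotropic source:
# flux = intensity * solid angle, with the full sphere being 4*pi steradians.
import math

def luminous_flux(intensity_cd):
    """Total luminous flux in lumens from an isotropic source."""
    return 4 * math.pi * intensity_cd

# A candle-like 1-candela source emits about 12.6 lumens in total.
print(f"{luminous_flux(1.0):.1f} lm")
```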

### mole (quantity):

Different from all the other base units, the mole measures quantity alone. Over hundreds of years, scientists starting with Amedeo Avogadro worked out how the number of atoms relates to mass, leading to the current definition of the mole: the number of atoms in 12 grams of carbon-12. This number, known as Avogadro’s constant and used in many calculations of mass in particle physics, is about 6×10²³. To make the mole more precise, the new definition would set Avogadro’s constant to exactly 6.022140857×10²³ per mole, decoupling it from the kilogram.
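The current definition can be run backwards (an illustrative calculation of my own): dividing 12 grams of carbon-12 by Avogadro's number gives the mass of a single atom.

```python
# Avogadro's constant links molar mass to the mass of a single atom:
# m_atom = M_molar / N_A.
N_A = 6.022140857e23  # Avogadro's constant, per mole (proposed exact value)

def atom_mass_grams(molar_mass_g):
    """Mass in grams of one atom, given the molar mass in grams per mole."""
    return molar_mass_g / N_A

# One carbon-12 atom weighs roughly 2e-23 grams.
print(atom_mass_grams(12.0))
```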

## July 07, 2017

### Symmetrybreaking - Fermilab/SLAC

Quirks of the arXiv

Sometimes, physics papers turn funny.

Since it went up in 1991, the arXiv (pronounced like the word “archive”) has been a hub for scientific papers in quantitative fields such as physics, math and computer science. Many of its million-plus papers are serious products of intense academic work that are later published in peer-reviewed journals. Still, some manage to have a little more character than the rest. For your consideration, we’ve gathered seven of the quirkiest physics papers on the arXiv.

### Can apparent superluminal neutrino speeds be explained as a quantum weak measurement?

In 2011, an experiment appeared to find particles traveling faster than the speed of light. To spare readers uninterested in lengthy calculations demonstrating the unlikeliness of this probably impossible phenomenon, the abstract for this analysis cut to the chase.

### Quantum Tokens for Digital Signatures

Sometimes the best way to explain something is to think about how you might explain it to a child—for example, as a fairy tale.

### A dialog on quantum gravity

Unless you’re intimately familiar with string theory and loop quantum gravity, this Socratic dialogue is like Plato’s Republic: It’s all Greek to you.

### The Proof of Innocence

Pulled over after he was apparently observed failing to halt at a stop sign, the author of this paper, Dmitri Krioukov, was determined to prove his innocence—as only a scientist would.

Using math, he demonstrated that, to a police officer measuring the angular speed of Krioukov’s car, a brief obstruction from view could cause an illusion that the car did not stop. Krioukov submitted his proof to the arXiv; the judge ruled in his favor.

### Quantum weak coin flipping with arbitrarily small bias

Not many papers on the arXiv illustrate their point with a tale involving human sacrifice. There’s something about quantum information science that brings out the weird side of physicists.

### 10 = 6 + 4

A theorist calculated an alternative decomposition of 10 dimensions into 6 spacetime dimensions with local Conformal symmetry and 4-dimensional compact Internal Symmetry Space. For the title of his paper, he decided to go with something a little simpler.

### Would Bohr be born if Bohm were born before Born?

This tricky tongue-twisting treatise theorizes a tangential timeline to testify that taking up quantum theories turns on timeliness.

## July 03, 2017

### Symmetrybreaking - Fermilab/SLAC

When was the Higgs actually discovered?

The announcement on July 4 was just one part of the story. Take a peek behind the scenes of the discovery of the Higgs boson.

Joe Incandela sat in a conference room at CERN and watched with his arms folded as his colleagues presented the latest results on the hunt for the Higgs boson. It was December 2011, and they had begun to see the very thing they were looking for—an unexplained bump emerging from the data.

“I was far from convinced,” says Incandela, a professor at the University of California, Santa Barbara and the former spokesperson of the CMS experiment at the Large Hadron Collider.

For decades, scientists had searched for the elusive Higgs boson: the holy grail of modern physics and the only piece of the robust and time-tested Standard Model that had yet to be found.

The construction of the LHC was motivated in large part by the absence of this fundamental component from our picture of the universe. Without it, physicists couldn’t explain the origin of mass or the divergent strengths of the fundamental forces.

“Without the Higgs boson, the Standard Model falls apart,” says Matthew McCullough, a theorist at CERN. “The Standard Model was fitting the experimental data so well that most of the theory community was convinced that something playing the role of Higgs boson would be discovered by the LHC.”

The Standard Model predicted the existence of the Higgs but did not predict what the particle’s mass would be. Over the years, scientists had searched for it across a wide range of possible masses. By 2011, there was only a tiny region left to search; everything else had been excluded by previous generations of experimentation. If the predicted Higgs boson were anywhere, it had to be there, right where the LHC scientists were looking.

But Incandela says he was skeptical about these preliminary results. He knew that the Higgs could manifest itself in many different forms, and this particular channel was extremely delicate.

“A tiny mistake or an unfortunate distribution of the background events could make it look like a new particle is emerging from the data when in reality, it’s nothing,” Incandela says.

A common mantra in science is that extraordinary claims require extraordinary evidence. The challenge isn’t just collecting the data and performing the analysis; it’s deciding if every part of the analysis is trustworthy. If the analysis is bulletproof, the next question is whether the evidence is substantial enough to claim a discovery. And if a discovery can be claimed, the final question is what, exactly, has been discovered? Scientists can have complete confidence in their results but remain uncertain about how to interpret them.

In physics, it’s easy to say what something is not but nearly impossible to say what it is. A single piece of corroborated, contradictory evidence can discredit an entire theory and destroy an organization’s credibility.

“We’ll never be able to definitively say if something is exactly what we think it is, because there’s always something we don’t know and cannot test or measure,” Incandela says. “There could always be a very subtle new property or characteristic found in a high-precision experiment that revolutionizes our understanding.”

With all of that in mind, Incandela and his team made a decision: From that point on, everyone would refine their scientific analyses using special data samples and a patch of fake data generated by computer simulations covering the interesting areas of their analyses. Then, when they were sure about their methodology and had enough data to make a significant observation, they would remove the patch and use their algorithms on all the real data in a process called unblinding.

“This is a nice way of providing an unbiased view of the data and helps us build confidence in any unexpected signals that may be appearing, particularly if the same unexpected signal is seen in different types of analyses,” Incandela says.
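The blinding procedure Incandela describes can be sketched in a few lines (a toy of my own, not the actual CMS machinery; the mass window, event counts and background model here are invented for illustration):

```python
# A minimal "blind analysis" sketch: events in the sensitive mass window are
# replaced by simulated background until the methodology is frozen.
import random

def blind(events, window, simulate_background):
    """Return a copy of events with the signal window masked by fake data."""
    lo, hi = window
    return [simulate_background() if lo <= m <= hi else m for m in events]

def fake_background():
    # Stand-in background model; a real analysis would use full simulation.
    return random.uniform(120, 130)

random.seed(0)
real_data = [random.uniform(100, 160) for _ in range(1000)]  # masses in GeV
blinded = blind(real_data, (120, 130), fake_background)

# Analysts tune selections and fits on `blinded`; only once the methodology
# is fixed is the mask removed and the real data examined ("unblinding").
```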

A few weeks before July 4, all the different analysis groups met with Incandela to present a first look at their unblinded results. This time the bump was very significant and showing up at the same mass in two independent channels.

“At that point, I knew we had something,” Incandela says. “That afternoon we presented the results to the rest of the collaboration. The next few weeks were among the most intense I have ever experienced.”

Meanwhile, the other general-purpose experiment at the LHC, ATLAS, was hot on the trail of the same mysterious bump.

Andrew Hard was a graduate student at The University of Wisconsin, Madison working on the ATLAS Higgs analysis with his PhD thesis advisor Sau Lan Wu.

“Originally, my plan had been to return home to Tennessee and visit my parents over the winter holidays,” Hard says. “Instead, I came to CERN every day for five months—even on Christmas. There were a few days when I didn't see anyone else at CERN. One time I thought some colleagues had come into the office, but it turned out to be two stray cats fighting in the corridor.”

Hard was responsible for writing the code that selected and calibrated the particles of light the ATLAS detector recorded during the LHC’s high-energy collisions. According to predictions from the Standard Model, the Higgs can transform into two of these particles when it decays, so scientists on both experiments knew that this project would be key to the discovery process.

“We all worked harder than we thought we could,” Hard says. “People collaborated well and everyone was excited about what would come next. All in all, it was the most exciting time in my career. I think the best qualities of the community came out during the discovery.”

At the end of June, Hard and his colleagues synthesized all of their work into a single analysis to see what it revealed. And there it was again—that same bump, this time surpassing the statistical threshold the particle physics community generally requires to claim a discovery.
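The threshold in question is the community's conventional "five sigma" standard. As a numerical aside (the figure is not stated in the article), the corresponding one-sided Gaussian tail probability, about three in ten million, can be computed with the standard library:

```python
# The conventional "five sigma" discovery threshold, expressed as the
# one-sided probability that pure background fluctuates at least that high.
import math

def one_sided_p_value(n_sigma):
    """Gaussian tail probability beyond n_sigma standard deviations."""
    return 0.5 * math.erfc(n_sigma / math.sqrt(2))

print(f"{one_sided_p_value(5.0):.2e}")  # ~2.87e-07
```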

“Soon everyone in the group started running into the office to see the number for the first time,” Hard says. “The Wisconsin group took a bunch of photos with the discovery plot.”

Hard had no idea whether CMS scientists were looking at the same thing. At this point, the experiments were keeping their latest results secret—with the exception of Incandela, Fabiola Gianotti (then ATLAS spokesperson) and a handful of CERN’s senior management, who regularly met to discuss their progress and results.

“I told the collaboration that the most important thing was for each experiment to work independently and not worry about what the other experiment was seeing,” Incandela says. “I did not tell anyone what I knew about ATLAS. It was not relevant to the tasks at hand.”

Still, rumors were circulating around theoretical physics groups both at CERN and abroad. McCullough, then a postdoc at the Massachusetts Institute of Technology, was avidly following the progress of the two experiments.

“We had an update in December 2011 and then another one a few months later in March, so we knew that both experiments were seeing something,” he says. “When this big excess showed up in July 2012, we were all convinced that it was the guy responsible for curing the ails of the Standard Model, but not necessarily precisely that guy predicted by the Standard Model. It could have properties mostly consistent with the Higgs boson but still be not absolutely identical.”

The week before announcing what they’d found, Hard’s analysis group had daily meetings to discuss their results. He says they were excited but also nervous and stressed: Extraordinary claims require extraordinary confidence.

“One of our meetings lasted over 10 hours, not including the dinner break halfway through,” Hard says. “I remember getting in a heated exchange with a colleague who accused me of having a bug in my code.”

After both groups had independently and intensely scrutinized their Higgs-like bump through a series of checks, cross-checks and internal reviews, Incandela and Gianotti decided it was time to tell the world.

“Some people asked me if I was sure we should say something,” Incandela says. “I remember saying that this train has left the station. This is what we’ve been working for, and we need to stand behind our results.”

On July 4, 2012, Incandela and Gianotti stood before an expectant crowd and, one at a time, announced that decades of searching and generations of experiments had finally culminated in the discovery of a particle “compatible with the Higgs boson.”

Science journalists rejoiced and rushed to publish their stories. But was this new particle the long-awaited Higgs boson? Or not?

Discoveries in science rarely happen all at once; rather, they build slowly over time. And even when the evidence overwhelmingly points in a clear direction, scientists will rarely speak with superlatives or make definitive claims.

“There is always a risk of overlooking the details,” Incandela says, “and major revolutions in science are often born in the details.”

Immediately after the July 4 announcement, theorists from around the world issued a flurry of theoretical papers presenting alternative explanations and possible tests to see if this excess really was the Higgs boson predicted by the Standard Model or just something similar.

“A lot of theory papers explored exotic ideas,” McCullough says. “It’s all part of the exercise. These papers act as a straw man so that we can see just how well we understand the particle and what additional tests need to be run.”

For the next several months, scientists continued to examine the particle and its properties. The more data they collected and the more tests they ran, the more the discovery looked like the long-awaited Higgs boson. By March, both experiments had twice as much data and twice as much evidence.

“Amongst ourselves, we called it the Higgs,” Incandela says, “but to the public, we were more careful.”

It was increasingly difficult to keep qualifying their statements about it, though. “It was just getting too complicated,” Incandela says. “We didn’t want to always be in this position where we had to talk about this particle like we didn’t know what it was.”

On March 14, 2013—nine months and 10 days after the original announcement—CERN issued a press release quoting Incandela as saying, “to me, it is clear that we are dealing with a Higgs boson, though we still have a long way to go to know what kind of Higgs boson it is.”

To this day, scientists are open to the possibility that the Higgs they found is not exactly the Higgs they expected.

“We are definitely, 100 percent sure that this is a Standard-Model-like Higgs boson,” Incandela says. “But we’re hoping that there’s a chink in that armor somewhere. The Higgs is a signpost, and we’re hoping for a slight discrepancy which will point us in the direction of new physics.”

## June 30, 2017

### Symmetrybreaking - Fermilab/SLAC

What’s really happening during an LHC collision?

It’s less of a collision and more of a symphony.

The Large Hadron Collider is definitely large. With a 17-mile circumference, it is the biggest collider on the planet. But the second half of its name is a little misleading. That’s because what collides in the LHC are the tiny pieces inside the hadrons, not the hadrons themselves.

Hadrons are composite particles made up of quarks and gluons. The gluons carry the strong force, which enables the quarks to stick together and binds them into a single particle. The main fodder for the LHC is a type of hadron called the proton. Protons are made up of three valence quarks and an indeterminate number of gluons. (Protons in turn make up atoms, which are the building blocks of everything around us.)

If a proton were enlarged to the size of a basketball, it would look empty. Just like atoms, protons are mostly empty space. The individual quarks and gluons inside are known to be extremely small, less than 1/10,000th the size of the entire proton.

“The inside of a proton would look like the atmosphere around you,” says Richard Ruiz, a theorist at Durham University. “It’s a mixture of empty space and microscopic particles that, for all intents and purposes, have no physical volume.

“But if you put those particles inside a balloon, you’ll see the balloon expand. Even though the internal particles are microscopic, they interact with each other and exert a force on their surroundings, inevitably producing something which does have an observable volume.”

So how do you collide two objects that are effectively empty space? You can’t. But luckily, you don’t need a classical collision to unleash a particle’s full potential.

In particle physics, the term “collide” can mean that two protons glide through each other, and their fundamental components pass so close together that they can talk to each other. If their voices are loud enough and resonate in just the right way, they can pluck deep hidden fields that will sing their own tune in response—by producing new particles.

“It’s a lot like music,” Ruiz says. “The entire universe is a symphony of complex harmonies which call and respond to each other. We can easily produce the mid-range tones, which would be like photons and muons, but some of these notes are so high that they require a huge amount of energy and very precise conditions to resonate.”

Space is permeated with dormant fields that can briefly pop a particle into existence when vibrated with the right amount of energy. These fields play important roles but almost always work behind the scenes. The Higgs field, for instance, is always interacting with other particles to help them gain mass. But a Higgs particle will only appear if the field is plucked with the right resonance.

When protons meet during an LHC collision, they break apart and the quarks and gluons come spilling out. They interact and pull more quarks and gluons out of space, eventually forming a shower of fast-moving hadrons.

This subatomic symbiosis is facilitated by the LHC and recorded by the experiment, but it’s not restricted to the laboratory environment; particles are also accelerated by cosmic sources such as supernova remnants. “This happens everywhere in the universe,” Ruiz says. “The LHC and its experiments are not special in that sense. They’re more like a big concert hall that provides the energy to pop open and record the symphony inside each proton.”

## June 29, 2017

### Georg von Hippel - Life on the lattice

Lattice 2017, Day Six
On the last day of the 2017 lattice conference, there were only plenary sessions. The first plenary session opened with a talk by Antonio Rago, who gave a "community review" of lattice QCD on new chips. New chips in the case of lattice QCD mostly means Intel's new Knights Landing architecture, to whose efficient use the community is devoting significant effort. Different groups pursue very different approaches, from purely OpenMP-based C codes to mixed MPI/OpenMP codes that maximize the efficiency of the SIMD pieces using assembler code. The new NVIDIA Tesla Volta and Intel's OmniPath fabric also featured in the review.

The next speaker was Zohreh Davoudi, who reviewed lattice inputs for nuclear physics. While simulating heavier nuclei directly on the lattice is still infeasible, nuclear phenomenologists appear to be very excited about the first-principles lattice QCD simulations of multi-baryon systems now reaching maturity, because these can be used to tune and validate nuclear models and effective field theories, from which predictions for heavier nuclei can then be derived so that they are ultimately grounded in QCD. The biggest controversy in the multi-baryon sector at the moment is HALQCD's claim that the multi-baryon mass plateaux seen by everyone except HALQCD (who use their own method based on Bethe-Salpeter amplitudes) are probably fakes or "mirages", and that using the Lüscher method to determine multi-baryon binding would require totally unrealistic source-sink separations of over 10 fm. The volume independence of the bound-state energies determined from the allegedly fake plateaux, as contrasted with the volume dependence of the scattering-state energies so extracted, provides a fairly strong defence against this claim, however. There are also new methods to improve the signal-to-noise ratio for multi-baryon correlation functions, such as phase reweighting.

This was followed by a talk on the tetraquark candidate Zc(3900) by Yoichi Ikeda, who spent a large part of his talk reiterating the HALQCD claim that the Lüscher method requires unrealistically large time separations. During the questions, William Detmold raised the important point that there would be no excited-state contamination at all if the interpolating operator created an eigenstate of the QCD Hamiltonian, and that for improved interpolating operators (such as those generated by the variational method) one can get rather close to this situation, so the HALQCD criticism seems hardly applicable. As for the Zc(3900), HALQCD find it to be not a resonance but a kinematic cusp, although this conclusion is based on simulations at rather heavy pion masses (mπ > 400 MeV).

The final plenary session was devoted to the anomalous magnetic moment of the muon, which is perhaps the most pressing topic for the lattice community, since the new (g-2) experiment is now running, and theoretical predictions matching the improved experimental precision will be needed soon. The first speaker was Christoph Lehner, who presented RBC/UKQCD's efforts to determine the hadronic vacuum polarization contribution to aμ with high precision. The strategy for this consists of two main ingredients: one is to minimize the statistical and systematic errors of the lattice calculation by using a full-volume low-mode average via a multigrid Lanczos method, explicitly including the leading effects of strong isospin breaking and QED, and the contribution from disconnected diagrams, and the other is to combine lattice and phenomenology to take maximum advantage of their respective strengths. This is achieved by using the time-momentum representation with a continuum correlator reconstructed from the R-ratio, which turns out to be quite precise at large times, but more uncertain at shorter times, which is exactly the opposite of the situation for the lattice correlator. Using a window which continuously switches over from the lattice to the continuum at time separations around 1.2 fm then minimizes the overall error on aμ.

The last plenary talk was given by Gilberto Colangelo, who discussed the new dispersive approach to the hadronic light-by-light scattering contribution to aμ. Up to now the theory results for this small, but important, contribution have been based on models, which will always have an a priori unknown and irreducible systematic error, although lattice efforts are beginning to catch up. For a dispersive approach based on general principles such as analyticity and unitarity, the hadronic light-by-light tensor first needs to be Lorentz decomposed, which gives 138 tensors, of which 136 are independent, and of which gauge invariance permits only 54, of which 7 are distinct, with the rest related by crossing symmetry; care has to be taken to choose the tensor basis such that there are no kinematic singularities. A master formula in terms of 12 linear combinations of these components has been derived by Gilberto and collaborators, and using one- and two-pion intermediate states (and neglecting the rest) in a systematic fashion, they have been able to produce a model-independent theory result with small uncertainties based on experimental data for pion form factors and scattering amplitudes.

The closing remarks were delivered by Elvira Gamiz, who advised participants that the proceedings deadline of 18 October will be strict, because this year's proceedings will not be published in PoS, but in EPJ Web of Conferences, who operate a much stricter deadline policy. Many thanks to Elvira for organizing such a splendid lattice conference! (I can appreciate how much work that is, and I think you should have received far more applause.)

Huey-Wen Lin invited the community to East Lansing, Michigan, USA, for the Lattice 2018 conference, which will take place 22-28 July 2018 on the campus of Michigan State University.

The IAC announced that Lattice 2019 will take place in Wuhan, China.

And with that the conference ended. I stayed in Granada for a couple more days of sightseeing and relaxation, but the details thereof will be of legitimate interest only to a very small subset of my readership (whom I keep updated via different channels), and I therefore conclude my coverage and return the blog to its accustomed semi-hiatus state.