Particle Physics Planet
August 17, 2017
Emily Lakdawalla - The Planetary Society Blog
Christian P. Robert - xi'an's og
A rather bland article by Gareth Stedman Jones in Nature reminded me that the first volume of Karl Marx’s Das Kapital is 150 years old this year. Which makes it appear quite close in historical terms [just before the Franco-German war of 1870] and rather remote in scientific terms. I remember going painstakingly through the books in 1982 and 1983, mostly during weekly train trips between Paris and Caen, and not getting much out of them! Even with the help of a cartoon introduction I had received as a 1982 Xmas gift! I had no difficulty in reading the text per se, as opposed to my attempt at Kant’s Critique of Pure Reason the previous summer [along with the other attempt to windsurf!], as the discourse was definitely grounded in economics and not in philosophy. But the heavy prose did not deliver a convincing theory of the evolution of capitalism [and of its ineluctable demise]. While the fundamental argument of workers’ labour being an essential balance to investors’ capital for profitable production was clearly if extensively stated, the extrapolations on diminishing profits associated with decreasing labour input [and the resulting collapse] were murkier and sounded more ideological than scientific. Not that I claim any competence in the matter: my attempts at getting the concepts behind Marxist economics stopped at this point and I have not been seriously thinking about it since! But it still seems to me that the theory did not age very well, missing the increasing power of financial agents in running companies. And of course [unsurprisingly] the numerical revolution and its impact on the (des)organisation of work and the disintegration of the proletariat as Marx envisioned it. For instance turning former workers into forced and poor entrepreneurs (Uber, anyone?!).
Not that the working conditions are particularly rosy for many, from a scarcity of low-skill jobs, to a nurtured competition between workers for existing jobs (leading to extremes like the scandalous zero-hour contracts!), to minimum wages made useless by the fragmentation of the working space and the explosion of housing costs in major cities, to the helplessness of social democracies trying to get back some leverage on international companies…
Filed under: Statistics Tagged: book reviews, comics, Das Kapital, economics, Immanuel Kant, Karl Marx, London, Marxism, Nature, Paris, philosophy, political economics
Peter Coles - In the Dark
Today’s the day! This year’s A-level results are out today, Thursday 17th August, with the consequent scramble as students across the country try to confirm places at university. Good luck to all students everywhere waiting for your results. I hope they are what you expected!
For those of you who didn’t get the grades you needed, I have one piece of very clear advice:
The clearing system is very efficient and effective, as well as being quite straightforward to use, and there’s still every chance that you will find a place somewhere good. So keep a cool head and follow the instructions. You won’t have to make a decision straight away, and there’s plenty of time to explore all the options.
As a matter of fact there are a few places still left for various courses in the School of Physics & Astronomy at Cardiff University. Why should you choose Cardiff? Well, obviously I have a vested interest since I work here, but here’s a video of some students talking about the School.
For further information check here!
Follow @telescoper
John Baez - Azimuth
It’s been a long time since I’ve blogged about the Complex Adaptive System Composition and Design Environment or CASCADE project run by John Paschkewitz. For a reminder, read these:
• Complex adaptive system design (part 1), Azimuth, 2 October 2016.
• Complex adaptive system design (part 2), Azimuth, 18 October 2016.
A lot has happened since then, and I want to explain it.
I’m working with Metron Scientific Solutions to develop new techniques for designing complex networks.
The particular problem we began cutting our teeth on is a search and rescue mission where a bunch of boats, planes and drones have to locate and save people who fall overboard during a boat race in the Caribbean Sea. Subsequently the Metron team expanded the scope to other search and rescue tasks. But the real goal is to develop very generally applicable new ideas on designing and ‘tasking’ networks of mobile agents—that is, designing these networks and telling the agents what to do.
We’re using the mathematics of ‘operads’, in part because Spivak’s work on operads has drawn a lot of attention and raised a lot of hopes:
• David Spivak, The operad of wiring diagrams: formalizing a graphical language for databases, recursion, and plug-and-play circuits.
An operad is a bunch of operations for sticking together smaller things to create bigger ones—I’ll explain this in detail later, but that’s the core idea. Spivak described some specific operads called ‘operads of wiring diagrams’ and illustrated some of their potential applications. But when we got going on our project, we wound up using a different class of operads, which I’ll call ‘network operads’.
Here’s our dream, which we’re still trying to make into a reality:
Network operads should make it easy to build a big network from smaller ones and have every agent know what to do. You should be able to ‘slap together’ a network, throwing in more agents and more links between them, and automatically have it do something reasonable. This should be more flexible than an approach where you need to know ahead of time exactly how many agents you have, and how they’re connected, before you can tell them what to do.
You don’t want a network to malfunction horribly because you forgot to hook it up correctly. You want to focus your attention on optimizing the network, not getting it to work at all. And you want everything to work so smoothly that it’s easy for the network to adapt to changing conditions.
To achieve this we’re using network operads, which are certain special ‘typed operads’. So before getting into the details of our approach, I should say a bit about typed operads. And I think that will be enough for today’s post: I don’t want to overwhelm you with too much information at once.
In general, a ‘typed operad’ describes ways of sticking together things of various types to get new things of various types. An ‘algebra’ of the operad gives a particular specification of these things and the results of sticking them together. For now I’ll skip the full definition of a typed operad and only highlight the most important features. A typed operad \(O\) has:
• a set \(T\) of types.
• collections of operations \(O(t_1,\dots,t_n;t)\) where \(t_1,\dots,t_n,t \in T\). Here \(t_1,\dots,t_n\) are the types of the inputs, while \(t\) is the type of the output.
• ways to compose operations. Given an operation
\[ f \in O(t_1,\dots,t_n;t) \]
and operations
\[ g_1 \in O(t_{11},\dots,t_{1k_1};t_1), \quad \dots, \quad g_n \in O(t_{n1},\dots,t_{nk_n};t_n) \]
we can compose them to get
\[ f \circ (g_1,\dots,g_n) \in O(t_{11},\dots,t_{1k_1},\dots,t_{n1},\dots,t_{nk_n};t). \]
These must obey some rules.
But if you haven’t seen operads before, you’re probably reeling in horror—so I need to rush in and save you by showing you the all-important pictures that help explain what’s going on!
First of all, you should visualize an operation as a little gizmo like this:
It has inputs at top and one output at bottom. Each input, and the output, has a ‘type’ taken from the set \(T\). So, for example, if your operation takes two real numbers, adds them and spits out the closest integer, both input types would be ‘real’, while the output type would be ‘integer’.
The main thing we do with operations is compose them. Given an operation \(f \in O(t_1,\dots,t_n;t)\), we can compose it with operations \(g_1 \in O(t_{11},\dots,t_{1k_1};t_1), \dots, g_n \in O(t_{n1},\dots,t_{nk_n};t_n)\)
by feeding their outputs into the inputs of \(f\), like this:
The result is an operation we call \(f \circ (g_1,\dots,g_n)\).
Note that the input types of \(f\) have to match the output types of the operations \(g_i\) for this to work! This is the whole point of types: they forbid us from composing operations in ways that don’t make sense.
This avoids certain stupid mistakes. For example, you can take the square root of a positive number, but you may not want to take the square root of a negative number, and you definitely don’t want to take the square root of a hamburger. While you can land a plane on an airstrip, you probably don’t want to land a plane on a person.
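To make the type-checking idea concrete, here is a toy sketch of my own (not part of the CASCADE work): operations tagged with input and output types, with a composition that refuses mismatched plugging. The names `Operation` and `compose` are illustrative, not from any library.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Operation:
    name: str
    input_types: List[str]
    output_type: str

def compose(f: Operation, gs: List[Operation]) -> Operation:
    """Feed the output of each g_i into the i-th input of f,
    refusing any composition whose types don't match."""
    if len(gs) != len(f.input_types):
        raise TypeError("wrong number of operations to plug in")
    for expected, g in zip(f.input_types, gs):
        if g.output_type != expected:
            raise TypeError(f"cannot feed a {g.output_type} into a {expected} input")
    # The composite's inputs are all the inputs of the g_i, in order.
    inputs = [t for g in gs for t in g.input_types]
    name = f"{f.name}({', '.join(g.name for g in gs)})"
    return Operation(name, inputs, f.output_type)

# Add two reals and round to the closest integer, as in the example above.
round_sum = Operation("round_sum", ["real", "real"], "integer")
sqrt = Operation("sqrt", ["positive"], "real")   # square root of a positive number
double = Operation("double", ["real"], "real")

h = compose(round_sum, [sqrt, double])
print(h.input_types, "->", h.output_type)  # ['positive', 'real'] -> integer
```

Trying `compose(round_sum, [sqrt])` raises a `TypeError`: the types forbid the nonsense composition, which is exactly their job.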
The operations in an operad are quite abstract: they aren’t really operating on anything. To render them concrete, we need another idea: operads have ‘algebras’.
An algebra \(A\) of the operad \(O\) specifies a set of things of each type such that the operations of \(O\) act on these sets. A bit more precisely, an algebra consists of:
• for each type \(t \in T\), a set \(A(t)\) of things of type \(t\)
• an action of \(O\) on \(A\): that is, a collection of maps
\[ \alpha \colon O(t_1,\dots,t_n;t) \times A(t_1) \times \cdots \times A(t_n) \to A(t) \]
obeying some rules.
In other words, an algebra turns each operation \(f \in O(t_1,\dots,t_n;t)\) into a function that eats things of types \(t_1,\dots,t_n\) and spits out a thing of type \(t\).
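As a toy illustration of my own (not from the post): an algebra assigns each abstract type a carrier set of actual things, and each abstract operation an actual function between those sets.

```python
# Carrier sets for two types, given here as membership tests.
carriers = {
    "real": lambda x: isinstance(x, float),
    "integer": lambda x: isinstance(x, int),
}

# The action of an abstract operation with input types (real, real) and
# output type integer: add two reals and round to the closest integer.
def round_sum(x: float, y: float) -> int:
    return round(x + y)

result = round_sum(1.2, 2.9)
assert carriers["integer"](result)  # the output lands in the integer carrier set
print(result)  # 4
```

A different algebra could interpret the same abstract operation entirely differently, which is the flexibility exploited below: one fine-grained implementation, one coarse one, and a homomorphism forgetting the details.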
When we get to designing systems with operads, the fact that the same operad can have many algebras will be useful. Our operad will have operations describing abstractly how to hook up networks to form larger networks. An algebra will give a specific implementation of these operations. We can use one algebra that’s fairly fine-grained and detailed about what the operations actually do, and another that’s less detailed. There will then be a map from the first algebra to the second, called an ‘algebra homomorphism’, that forgets some fine-grained details.
There’s a lot more to say—all this is just the mathematical equivalent of clearing my throat before a speech—but I’ll stop here for now.
And as I do—since it also takes me time to stop talking—I should make it clear yet again that I haven’t even given the full definition of typed operads and their algebras! Besides the laws I didn’t write down, there’s other stuff I omitted. Most notably, there’s a way to permute the inputs of an operation in an operad, and operads have identity operations, one for each type.
To see the full definition of an ‘untyped’ operad, which is really an operad with just one type, go here:
• Wikipedia, Operad theory.
They just call it an ‘operad’. Note that they first explain ‘non-symmetric operads’, where you can’t permute the inputs of operations, and then explain operads, where you can.
If you’re mathematically sophisticated, you can easily guess the laws obeyed by a typed operad just by looking at this article and inserting the missing types. You can also see the laws written down in Spivak’s paper, but with some different terminology: he calls types ‘objects’, he calls operations ‘morphisms’, and he calls typed operads ‘symmetric colored operads’—or once he gets going, just ‘operads’.
You can also see the definition of a typed operad in Section 2.1 here:
• Donald Yau, Operads of wiring diagrams.
What I would call a typed operad with \(S\) as its set of types, he calls an ‘\(S\)-colored operad’.
I guess it’s already evident, but I’ll warn you that the terminology in this subject varies quite a lot from author to author: for example, a certain community calls typed operads ‘symmetric multicategories’. This is annoying at first but once you understand the subject it’s as ignorable as the fact that mathematicians have many different accents. The main thing to remember is that operads come in four main flavors, since they can either be typed or untyped, and they can either let you permute inputs or not. I’ll always be working with typed operads where you can permute inputs.
Finally, I’ll say that while the definition of operad looks lengthy and cumbersome at first, it becomes lean and elegant if you use more category theory.
Next time I’ll give you an example of an operad: the simplest ‘network
operad’.
August 16, 2017
Christian P. Robert - xi'an's og
While in Cambridge last month, I picked a few books from a local bookstore as fodder for my incoming vacations. Including this omnibus volume made of the first three books by Philip Kerr featuring Bernie Gunther, a private and Reich detective in Nazi Germany, namely March Violets (1989), The Pale Criminal (1990), and A German Requiem (1991). (A book that I actually read before the vacations!) The stories take place before the war, in 1938, and right after, in 1946, in Berlin and Vienna. The books centre on a German version of Philip Marlowe, wise cracks included, with various degrees of success. (There actually is a silly comparison with Chandler on the back of the book! And I found somewhere else a similarly inappropriate comparison with Graham Greene‘s The Third Man…) Although I read all three books in a single week, which clearly shows some undeniable addictive quality in the plots, I find those plots somewhat shallow and contrived, especially the second one revolving around a serial killer of young girls that aims at blaming Jews for those crimes and at justifying further Nazi persecutions. Or the time spent in Dachau by Bernie Gunther as an undercover agent for Heydrich. If anything, the third volume, taking place in post-war Berlin and Wien, is much better at recreating the murky atmosphere of those cities under Allied occupation. But overall there are far too many info-dump passages in those novels to make them a good read. The author has clearly done his documentation job correctly, from the early homosexual persecutions to Kristallnacht, to the fights for control between the occupying forces, but the information about the historical context is not always delivered in the most fluent way. And having the main character work under Heydrich, then join the SS, does make relating to him rather difficult, to say the least.
It is hence unclear to me why those books are so popular, apart from the easy marketing line that stories involving Nazis are more likely to sell… Nothing to be compared with the fantastic Alone in Berlin, depicting the somewhat senseless resistance of a Berliner during the Nazi years, dropping hand-written messages against the regime under strangers’ doors.
Filed under: Statistics Tagged: Alone in Berlin, Berlin, Berlin noir, book reviews, Dachau, Graham Greene, Nazi State, Raymond Chandler, Reinhart Heydrich, Wien, WW II
Symmetrybreaking - Fermilab/SLAC
High school students nationwide will study the effects of the solar eclipse on cosmic rays.
While most people are marveling at Monday’s eclipse, a group of researchers will be measuring its effects on cosmic rays—particles from space that collide with the earth’s atmosphere to produce muons, heavy cousins of the electron. But these researchers aren’t the usual PhD-holding suspects: They’re still in high school.
More than 25 groups of high school students and teachers nationwide will use small-scale detectors to find out whether the number of cosmic rays raining down on Earth changes during an eclipse. Although the eclipse event will last only three hours, this student experiment has been a months-long collaboration.
The cosmic ray detectors used for this experiment were provided as kits by QuarkNet, an outreach program that gives teachers and students opportunities to try their hands at high-energy physics research. Through QuarkNet, high school classrooms can participate in a whole range of physics activities, such as analyzing real data from the CMS experiment at CERN and creating their own experiments with detectors.
“Really active QuarkNet groups run detectors all year and measure all sorts of things that would sound crazy to a physicist,” says Mark Adams, QuarkNet’s cosmic ray studies coordinator. “It doesn’t really matter what the question is as long as it allows them to do science.”
And this year’s solar eclipse will give students a rare chance to answer a cosmic question: Is the sun a major producer of the cosmic rays that bombard Earth, or do they come from somewhere else?
“We wanted to show that, if the rate of cosmic rays changes a lot during the eclipse, then the sun is a big source of cosmic rays,” Adams says. “We sort of know that the sun is not the main source, but it’s a really neat experiment. As far as we know, no one has ever done this with cosmic ray muons at the surface.”
Adams and QuarkNet teacher Nate Unterman will be leading a group of nine students and five adults to Missouri to the heart of the path of totality—where the moon will completely cover the sun—to take measurements of the event. Other QuarkNet groups will stay put, measuring what effect a partial eclipse might have on cosmic rays in their area.
Most cosmic rays are likely high-energy particles from exploding stars deep in space, which are picked up via muons in QuarkNet detectors. But the likely result of the experiment—that cosmic rays don’t change their rate when the moon moves in front of the sun—doesn’t eclipse the excitement for the students in the collaboration.
“They’ve been working for months and months to develop the design for the measurements and the detectors,” Adams says. “That’s the great part—they’re not focused on what the answer is but the best way to find it.”
Peter Coles - In the Dark
The other day I was surprised to see this tweet announcing the impending formation of a new council under the umbrella of the new organisation UK Research & Innovation (UKRI):
Welcome to the official Twitter account of Research England – a new council within @UKRI_news. Launching in April 2018!
— Research England (@ResEngland) August 7, 2017
These changes are consequences of the Higher Education and Research Act (2017) which was passed at the end of the last Parliament before the Prime Minister decided to reduce the Government’s majority by calling a General Election.
It seems to me that it’s very strange indeed to have a new council called Research England sitting inside an organisation that purports to be a UK-wide outfit without having a corresponding Research Wales, Research Scotland and Research Northern Ireland. The seven existing research councils which will henceforth sit alongside Research England within UKRI are all UK-wide.
This anomaly stems from the fact that Higher Education policy is ostensibly a devolved matter, meaning that England, Wales, Scotland and Northern Ireland each have separate bodies to oversee their universities. Included in the functions of these bodies is the so-called QR funding which is allocated on the basis of the Research Excellence Framework. This used to be administered by the Higher Education Funding Council for England (HEFCE), but each devolved council distributed its own funds in its own way. The new Higher Education and Research Act however abolishes HEFCE and replaces some of its functions into an organisation called the Office for Students, but not those connected with research. Hence the creation of the new `Research England’. This will not only distribute QR funding among English universities but also administer a number of interdisciplinary research programmes.
The dual support system of government funding consists of block grants of QR funding allocated as above, alongside funding targeted at specific projects by the Research Councils (such as the Science and Technology Facilities Council, which is responsible for astronomy, particle physics and nuclear physics research). There is nervousness in England that the new structure will put both elements of the dual support system inside the same organisation, but my greatest concern is that by excluding Wales, Scotland and Northern Ireland, English universities will be given an unfair advantage when it comes to interdisciplinary research. Surely there should be representation within UKRI for Wales, Scotland and Northern Ireland too?
Incidentally, the Science and Technology Facilities Council (STFC) has started the process of recruiting a new Executive Chair. If you’re interested in this position you can find the advertisement here. Ominously, the only thing mentioned under `Skills Required’ is `Change Management’.
Follow @telescoper
Emily Lakdawalla - The Planetary Society Blog
August 15, 2017
Christian P. Robert - xi'an's og
After last year’s failed attempt at climbing a summit in the Monte Rosa group, we came back for hiking on the “other side” of Monte Rosa, at the Italian foot of Dufourspitze, with less ambitious plans of hiking around this fantastic range. Alas the weather set itself against us and thunderstorms followed one another, making altitude hikes unreasonable because of the fresh snow and frequent fog, and lower altitude walks definitely unpleasant, turning the trails into creeks.
While we had many nice chats with local people in a mixture of French and Italian, and sampled local cheese and beers, this still felt like a bit of a wasted vacation week, especially when remembering how reasonable the weather had been the week before in Scotland. However, all things considered, that very week a deadly heat wave crossed the south of Europe, so I should not be complaining about rain and snow! And running a whole week checking the altimeter instead of the chronometer is a nice experience.
Filed under: Mountains, pictures, Travel, Wines Tagged: climbing, Dufourspitze, hiking, Italia, Macugnana, Monte Rosa, rain, running, snow, Switzerland
Symmetrybreaking - Fermilab/SLAC
A video from SLAC National Accelerator Laboratory explains how the upcoming LZ experiment will search for the missing 85 percent of the matter in the universe.
What exactly is dark matter, the invisible substance that accounts for 85 percent of all the matter in the universe but can’t be seen even with our most advanced scientific instruments?
Most scientists believe it’s made of ghostly particles that rarely bump into their surroundings. That’s why billions of dark matter particles might zip right through our bodies every second without us even noticing. Leading candidates for dark matter particles are WIMPs, or weakly interacting massive particles.
Scientists at SLAC National Accelerator Laboratory are helping to build and test one of the biggest and most sensitive detectors ever designed to catch a WIMP: the LUX-ZEPLIN or LZ detector. The following video explains how it works.
Peter Coles - In the Dark
Well, I made it back to Cardiff on schedule last night, although that did involve getting home at 2am. I was pretty much exhausted by then so had a bit of a lie-in this morning. I think I’m getting too old for all this gallivanting about. I crashed out soon after getting home and had to spend an hour or so this morning sorting through the stack of mail that arrived while I was away (including some book tokens courtesy of another crossword prize).
I usually try to get to the airport plenty of time in advance when I’m flying somewhere, so got to Copenhagen airport yesterday a good three hours before my scheduled departure. I had checked in online before setting out so I could have left it later, but I’m obviously a creature of habit. As it happened I was able to leave my luggage at the bag drop immediately and it took no longer than 5 minutes to clear the security checks, which meant that I was left with time to kill but I had my iPod and plenty to read so it was all fine.
I was a little disturbed when I got to the departure gate to hear the announcement that `Tonight’s British Airways flight to London Heathrow is operated by Qatar Airways’, but at least it explained why it wasn’t a BA plane standing outside on the tarmac. As it happened the flight went smoothly and Qatar Airways do free food and drink for economy class passengers (unlike BA who nowadays sell expensive snacks and beverages supplied by Marks and Spencer). The only downside when we arrived at Heathrow was that we parked at a remote stand and had to wait 20 minutes or so for a bus to take us to Terminal 5. I could hear the ground crew unloading luggage while we waited, however, so that meant less time waiting at the carousels…
On previous occasions I’ve been greeted at Heathrow by a packed passport control area, but this time it was virtually deserted. In fact I’ve never seen it so empty. My bag was waiting for me when I got to the reclaim area so I got to the Heathrow Express terminal and thence to Paddington in time for the 10.45pm train to Cardiff.
When I got back to the Data Innovation Research Institute office around lunchtime I discovered that our big screen TV has been installed.
This will of course be used exclusively for skype calls and video conferences and in no way for watching cricket or football or any other inappropriate activity.
Well, I’d better get on. Marking resit exams is the order of the day.
Follow @telescoper
Emily Lakdawalla - The Planetary Society Blog
Tommaso Dorigo - Scientificblogging
John Baez - Azimuth
There’s a new paper on the arXiv that claims to solve a hard problem:
• Norbert Blum, A solution of the P versus NP problem.
Most papers that claim to solve hard math problems are wrong: that’s why these problems are considered hard. But these papers can still be fun to look at, at least if they’re not obviously wrong. It’s fun to hope that maybe today humanity has found another beautiful grain of truth.
I’m not an expert on the P versus NP problem, so I have no opinion on this paper. So don’t get excited: wait calmly by your radio until you hear from someone who actually works on this stuff.
I found the first paragraph interesting, though. Here it is, together with some highly non-expert commentary. Beware: everything I say could be wrong!
Understanding the power of negations is one of the most challenging problems in complexity theory. With respect to monotone Boolean functions, Razborov [12] was the first who could shown that the gain, if using negations, can be super-polynomial in comparision to monotone Boolean networks. Tardos [16] has improved this to exponential.
I guess a ‘Boolean network’ is like a machine where you feed in a string of bits and it computes new bits using the logical operations ‘and’, ‘or’ and ‘not’. If you leave out ‘not’ the Boolean network is monotone, since then making more inputs equal to 1, or ‘true’, is bound to make more of the output bits 1 as well. Blum is saying that including ‘not’ makes some computations vastly more efficient… but that this stuff is hard to understand.
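To illustrate the monotonicity claim with a sketch of my own (not from Blum's paper): a circuit built only from ‘and’ and ‘or’ can never flip its output from 1 to 0 when an input flips from 0 to 1, while a single ‘not’ gate breaks this.

```python
from itertools import product

def monotone_net(x1, x2, x3):
    # (x1 AND x2) OR x3 — no 'not' gates anywhere
    return bool((x1 and x2) or x3)

def non_monotone_net(x1, x2, x3):
    # one 'not' gate is enough to break monotonicity
    return bool((x1 and not x2) or x3)

def is_monotone(f, n):
    """Check that flipping any input 0 -> 1 never flips the output 1 -> 0."""
    for xs in product([0, 1], repeat=n):
        for i in range(n):
            if xs[i] == 0:
                ys = list(xs)
                ys[i] = 1
                if f(*xs) and not f(*ys):
                    return False
    return True

print(is_monotone(monotone_net, 3))      # True
print(is_monotone(non_monotone_net, 3))  # False
```

The brute-force check above is exponential in the number of inputs, which is fine for a demonstration but is, of course, nothing like the lower-bound arguments the paper is about.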
For the characteristic function of an NP-complete problem like the clique function, it is widely believed that negations cannot help enough to improve the Boolean complexity from exponential to polynomial.
A bunch of nodes in a graph are a clique if each of these nodes is connected by an edge to every other. Determining whether a graph with \(n\) vertices has a clique with more than \(k\) nodes is a famous problem: the clique decision problem.
For example, here’s a brute-force search for a clique with at least 4 nodes:
The clique decision problem is NP-complete. This means that if you can solve it with a Boolean network whose complexity grows like some polynomial in n, then P = NP. But if you can’t, then P ≠ NP.
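Since the picture of the brute-force search doesn't reproduce here, here is a sketch of mine of that search: try every k-element subset of vertices and check that all pairs are connected. The number of subsets grows exponentially in general, which is why brute force is hopeless for large graphs.

```python
from itertools import combinations

def has_clique(edges, n, k):
    """Brute-force test: does the graph on vertices 0..n-1 contain a k-clique?"""
    adjacent = set(frozenset(e) for e in edges)
    for subset in combinations(range(n), k):
        if all(frozenset((u, v)) in adjacent
               for u, v in combinations(subset, 2)):
            return True
    return False

# A 4-cycle with one diagonal: it contains a triangle but no 4-clique.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
print(has_clique(edges, 4, 3))  # True  (e.g. {0, 1, 2})
print(has_clique(edges, 4, 4))  # False (the edge 1-3 is missing)
```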
(Don’t ask me what the complexity of a Boolean network is; I can guess but I could get it wrong.)
I guess Blum is hinting that the best monotone Boolean network for solving the clique decision problem has a complexity that’s exponential in \(n\). And then he’s saying it’s widely believed that ‘not’ gates can’t reduce the complexity to a polynomial.
Since the computation of an one-tape Turing machine can be simulated by a non-monotone Boolean network of size at most the square of the number of steps [15, Ch. 3.9], a superpolynomial lower bound for the non-monotone network complexity of such a function would imply P ≠ NP.
Now he’s saying what I said earlier: if you show it’s impossible to solve the clique decision problem with any Boolean network whose complexity grows like some polynomial in n, then you’ve shown P ≠ NP. This is how Blum intends to prove P ≠ NP.
For the monotone complexity of such a function, exponential lower bounds are known [11, 3, 1, 10, 6, 8, 4, 2, 7].
Should you trust someone who claims they’ve proved P ≠ NP, but can’t manage to get their references listed in increasing order?
But until now, no one could prove a non-linear lower bound for the nonmonotone complexity of any Boolean function in NP.
That’s a great example of how helpless we are: we’ve got all these problems whose complexity should grow faster than any polynomial, and we can’t even prove their complexity grows faster than linear. Sad!
An obvious attempt to get a super-polynomial lower bound for the non-monotone complexity of the clique function could be the extension of the method which has led to the proof of an exponential lower bound of its monotone complexity. This is the so-called “method of approximation” developed by Razborov [11].
I don’t know about this. All I know is that Razborov and Rudich proved a whole bunch of strategies for proving P ≠ NP can’t possibly work. These strategies are called ‘natural proofs’. Here are some friendly blog articles on their result:
• Timothy Gowers, How not to prove that P is not equal to NP, 3 October 2013.
• Timothy Gowers, Razborov and Rudich’s natural proofs argument, 7 October 2013.
From these I get the impression that what Blum calls ‘Boolean networks’ may be what other people call ‘Boolean circuits’. But I could be wrong!
Continuing:
Razborov [13] has shown that his approximation method cannot be used to prove better than quadratic lower bounds for the non-monotone complexity of a Boolean function.
So, this method is unable to prove some NP problem can’t be solved in polynomial time and thus prove P ≠ NP. Bummer!
But Razborov uses a very strong distance measure in his proof for the inability of the approximation method. As elaborated in [5], one can use the approximation method with a weaker distance measure to prove a super-polynomial lower bound for the non-monotone complexity of a Boolean function.
This reference [5] is to another paper by Blum. And in the end, he claims to use similar methods to prove that the complexity of any Boolean network that solves the clique decision problem must grow faster than a polynomial.
So, if you’re trying to check his proof that P ≠ NP, you should probably start by checking that other paper!
The picture below, by Behnam Esfahbod on Wikicommons, shows the two possible scenarios. The one at left is the one Norbert Blum claims to have shown we’re in.
August 14, 2017
Clifford V. Johnson - Asymptotia
I finished that short story project for that anthology I told you about and submitted the final files to the editor on Sunday. Hurrah. It'll appear next year and I'll give you a warning about when it is to appear once they announce the book. It was fun to work on this story. The sample above is a couple of process shots of me working (on my iPad) on an imagining of the LA skyline as it might look some decades from now. I've added several buildings among the ones that might be familiar. It is for the opening establishing shot of the whole book. There's one of San Francisco later on, by the way. (I learned more about the SF skyline and the Bay Bridge than I care to admit now...)
I will admit that I went a bit overboard with the art for this project! I intended to use a much rougher and looser style in both pencil work and colour, and of course ended up with far too much obsessing over precision and detail in the end (as you can also see here, here and here). As an interesting technical landmark [...] Click to continue reading this post
The post A Skyline to Come? appeared first on Asymptotia.
Peter Coles - In the Dark
I just had my last lunch in the canteen in the Niels Bohr Institute and will shortly be heading off to the airport to begin the journey back to Blighty. It’s been a pretty intense couple of weeks but I’ve enjoyed it enormously and have learnt a lot, even though I’ve done hardly any of the things I originally planned to do!
I haven’t been staying in the building shown in the picture, but in one of the adjacent buildings not shown. In fact my office is directly above the canteen. I took this picture on the way home on Sunday, as I noticed that the main entrance has the date `1920′ written on it. I do hope they’re planning a 100th anniversary!
Anyway, farewell to everyone at the Niels Bohr Institute and elsewhere. I hope to return before too long.
Follow @telescoper
Emily Lakdawalla - The Planetary Society Blog
Peter Coles - In the Dark
The angry chap on the right (appropriately enough) of this image, taken at the violent demonstration in Charlottesville, VA, at the weekend, is a white nationalist, alt-right, white supremacist Nazi by the name of Peter Cvjetanovic.
Apparently Peter is unhappy that his picture is being shared so widely on the internet. Life is tough sometimes.
And, yes, I mean Nazi.
Follow @telescoper
August 12, 2017
Lubos Motl - string vacua and pheno
Froggatt's and Nielsen's and Donald Bennett's multiple point criticality principle says that the parameters of quantum field theory are chosen on the boundaries of a maximum number of phases – i.e. so that something maximally special seems to happen over there.
This principle is supported by a reasonably impressive prediction of the fine-structure constant, the top quark mass, the Higgs boson mass, and perhaps the neutrino masses and/or the cosmological constant related to them.
In some sense, the principle modifies the naive "uniform measure" on the parameter space that is postulated by naturalness. We may say that the multiple point criticality principle not only modifies naturalness. It almost exactly negates it. The places with \(\theta=0\) where \(\theta\) is the distance from some phase transition are of measure zero, and therefore infinitely unlikely, according to naturalness. But the multiple point criticality principle says that they're really preferred. In fact, if there are several phase transitions and \(\theta_i\) measure the distances from several domain walls in the moduli space, the multiple point criticality principle wants to set all the parameters \(\theta_i\) equal to zero.
Is there an everyday life analogy for that? I think so. Look at the picture at the top and ignore the boat with the German tourist in it. What you see is the Arctic Ocean – with lots of water and ice over there. What is the temperature of the ice and the water? Well, it's about 0 °C, the melting point of water. In reality, the melting point is a bit different due to the salinity.
But in this case, there exists a very good reason to conclude that we're near the melting point. It's because we can see that the water and the ice co-exist. And the water may only exist above the melting point; and the ice may only exist beneath the melting point. The intersection of these two intervals is a narrow interval – basically the set containing the melting point only. If the water were much warmer than the melting point, it would have to cool quickly enough because the ice underneath is colder – it can't really be above the melting point.
(The heat needed for the ice to melt is equal to the heat needed to warm the same amount of water by some 80 °C if I remember well.)
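The "80 °C" figure in the parenthetical above checks out. A quick sanity check, using standard textbook values for water (the numbers are my assumptions, not from the post):

```python
# Back-of-the-envelope check of the latent-heat claim above.
# Standard textbook values (my assumptions, not from the post):
latent_heat_fusion = 334.0   # J per gram, to melt ice at 0 degrees C
specific_heat_water = 4.19   # J per gram per kelvin, liquid water

# Temperature rise of liquid water that consumes the same heat
# as melting an equal mass of ice:
equivalent_rise = latent_heat_fusion / specific_heat_water
print(round(equivalent_rise))  # prints 80
```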
How is it possible that the temperature 0 °C, although it's a special value of measure zero, is so popular in the Arctic Ocean? It's easy. If you study what's happening when you warm the ice – start with a body of ice only – you will ultimately get to the melting point and a part of ice will melt. You will obtain a mixture of the ice and water. Now, if you are adding additional heat, the ice no longer heats up. Instead, the extra heat will be used to transform an increasing fraction of the ice to the water – i.e. to melt the ice.
So the growth of the temperature stops at the melting point. Instead of the temperature, what the additional incoming heat increases is the fraction of the \(\rm H_2O\) molecules that have already adopted the liquid state. Only when the fraction reaches 100% do you get pure liquid water, and the additional heating may increase the temperature above 0 °C.
In theoretical physics, we want things like the top quark mass \(m_t\) to be analogous to the temperature \(T\) of the Arctic water. Can we find a similar mechanism in physics that would just explain why the multiple point criticality principle is right?
The easiest way is to take the analogy literally and consider the multiverse. The multiverse may be just like the Arctic Ocean. And parts of it may be analogous to the floating ice, parts of it may be analogous to the water underneath. There could be some analogy of the "heat transfer" that forces something like \(m_t\) to be nearly the same in the nearby parts of the multiverse. But the special values of \(m_t\) that allow several phases may occupy a finite fraction of the multiverse and what is varying in this region isn't \(m_t\) but rather the percentage of the multiverse occupied by the individual phases.
There may be regions of the multiverse where several phases co-exist and several parameters analogous to \(m_t\) appear to be fine-tuned to special values.
I am not sure whether an analysis of this sort may be quantified and embedded into a proper full-blown cosmological model. It would be nice. But maybe the multiverse isn't really needed. It seems to me that at these special values of the parameters where several phases co-exist, the vacuum states could naturally be superpositions of quantum states built on several classically very different configurations. Such a law would make it more likely that the cosmological constant is described by a seesaw mechanism, too.
If it's true and if the multiple-phase special points are favored, it's because of some "attraction of the eigenvalues". If you know random matrix theory, i.e. the statistical theory of many energy levels in the nuclei, you know that the energy levels tend to repel each other. It's because some Jacobian factor is very small in the regions where the energy eigenvalues approach each other. Here, we need the opposite effect. We need the values of parameters such as \(m_t\) to be attracted to the special values where phases may be degenerate.
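The eigenvalue repulsion mentioned above can be seen in the simplest toy model: a random 2×2 real symmetric matrix, whose eigenvalue gap is \(\sqrt{(a-c)^2+4b^2}\). A pure-Python sketch (the Gaussian entries and sample size are my choices, purely for illustration):

```python
import math
import random

random.seed(42)

# Toy model of level repulsion: eigenvalues of a random 2x2 real
# symmetric matrix [[a, b], [b, c]] are separated by
# sqrt((a - c)^2 + 4 b^2).  The probability density of this gap
# vanishes linearly at zero -- the Jacobian suppression that makes
# near-degenerate spectra rare.
gaps = []
for _ in range(20000):
    a, b, c = (random.gauss(0, 1) for _ in range(3))
    gaps.append(math.sqrt((a - c) ** 2 + 4 * b ** 2))

mean_gap = sum(gaps) / len(gaps)
# Fraction of gaps below a tenth of the mean gap: tiny, showing repulsion.
small = sum(g < 0.1 * mean_gap for g in gaps) / len(gaps)
print(f"mean gap {mean_gap:.2f}, near-degenerate fraction {small:.4f}")
```

The multiple point criticality principle asks for the opposite behaviour: a measure that concentrates on, rather than avoids, the degenerate points.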
So maybe even if you avoid any assumption about the existence of any multiverse, you may invent a derivation at the level of the landscape only. We normally assume that the parameter spaces of the low-energy effective field theory (or their parts allowed in the landscape, i.e. those draining the swamp) are covered more or less uniformly by the actual string vacua. We know that this can't quite be right. Sometimes we can't even say what the "uniform distribution" is supposed to look like.
But this assumption of uniformity could be flawed in very specific and extremely interesting ways. It could be that the actual string vacua actually love to be degenerate – "almost equal" superpositions of vacua that look classically very different from each other. In general, there should be some tunneling in between the vacua and the tunneling gives you off-diagonal matrix elements (between different phases) to many parameters describing the low-energy physics of the vacua (coupling constants, cosmological constant).
And because of the off-diagonal elements, the actual vacua we should find when we're careful aren't actually "straightforward quantum coherent states" built around some classical configurations. But very often, they may like to be superpositions – with non-negligible coefficients – of many phases. If that's so, even the single vacuum – in our visible Universe – could be analogous to the Arctic Ocean in my metaphor and an explanation of the multiple point criticality principle could exist.
If it were right qualitatively, it could be wonderful. One could try to look for a refinement of this Arctic landscape theory – a theory that tries to predict more realistic probability distributions on the low-energy effective field theories' parameter spaces, distributions that are non-uniform and at least morally compatible with the multiple point criticality principle. This kind of reasoning could even lead us to a calculation of some values of the parameters that are much more likely than others – and it could be the right ones which are compatible with our measurements.
A theory of the vacuum selection could exist. I tend to think that this kind of research hasn't been sufficiently pursued partly because of the left-wing bias of the research community. They may be impartial in many ways but the biases often do show up even in faraway contexts. Leftists may instinctively think that non-uniform distributions are politically incorrect so they prefer the uniformity of naturalness or the "typical vacua" in the landscape. I have always felt that these Ansätze are naive and on the wrong track – and the truth is much closer to their negations. The apparent numerically empirical success of the multiple point criticality principle is another reason to think so.
Note that while we're trying to calculate some non-uniform distributions, the multiple point criticality principle is a manifestation of egalitarianism and multiculturalism from another perspective – because several phases co-exist as almost equal ones. ;-)
by Luboš Motl (noreply@blogger.com) at August 12, 2017 04:49 PM
August 11, 2017
The n-Category Cafe
John and I are currently enjoying Applied Algebraic Topology 2017 in the city of Sapporo, on the northern Japanese island of Hokkaido.
I spoke about magnitude homology of metric spaces. A central concept in applied topology is persistent homology, which is also a homology theory of metric spaces. But magnitude homology is different.
It was brought into being one year ago on this very blog, principally by Mike Shulman, though Richard Hepworth and Simon Willerton had worked out a special case before. You can read a long post of mine about it from a year ago, which in turn refers back to a very long comments thread of an earlier post.
But for a short account, try my talk slides. They introduce both magnitude itself (including some exciting new developments) and magnitude homology. Both are defined in the wide generality of enriched categories, but I concentrated on the case of metric spaces.
Of course, John’s favourite slide was the one shown.
by leinster (Tom.Leinster@gmx.com) at August 11, 2017 08:23 PM
The n-Category Cafe
guest post by David Myers
Proarrow equipments (which also go by the names “fibrant double categories” and “framed bicategories”) are wonderful and fundamental category-like objects. If categories are the abstract algebras of functions, then equipments are the abstract algebras of functions and relations. They are a fantastic setting to do formal category theory, which you can learn about in Mike’s post about them on this blog!
For my undergraduate thesis, I came up with a graphical calculus for working with equipments. I wasn’t the first to come up with it (if you’re familiar with both string diagrams and equipments, it’s basically the only sort of thing that you’d try), but I did prove it sound using a proof similar to Joyal and Street’s proof of the soundness of the graphical calculus for monoidal categories. You can see the full paper on the arXiv, or see the slides from a talk I gave about it at CT2017 here. Below the fold, I’ll show you the diagrams and a bit of what you can do with them.
What is a Double Category?
A double category is a category internal to the category of categories. Now, this is fun to say, but takes a bit of unpacking. Here is a more elementary definition together with the string diagrams:
Definition: A double category has
- Objects $A$, $B$, $C$, $\dots$, which will be written as bounded plane regions of different colors.
- Vertical arrows $f : A \to B$, $\dots$, which we will just call arrows and write as vertical lines, directed downwards, dividing one plane region from another.
- Horizontal arrows $J$, $K$, $H$, $\dots$, which we will just call proarrows and write as horizontal lines dividing one plane region from another.
- 2-cells, $\dots$, which are represented as beads between the arrows and proarrows.
The usual square notation is on the left, and the string diagrams are on the right.
There are two ways to compose 2-cells: horizontally and vertically. These satisfy an interchange law saying that composing horizontally and then vertically is the same as composing vertically and then horizontally.
Note that when we compose 2-cells horizontally, we must compose the vertical arrows. Therefore, the vertical arrows will form a category. Similarly, when we compose 2-cells vertically, we must compose the horizontal proarrows. Therefore, the horizontal proarrows will form a category. Except, this is not quite true; in most of our examples in the wild, the composition of proarrows will only be associative up to isomorphism, so they will form a bicategory. I’ll just hand wave this away for the rest of the post.
This is about all there is to the graphical calculus for double categories. Any deformation of a double diagram that keeps the vertical arrows vertical and the horizontal proarrows horizontal will describe an equal composite in any double category.
Here are some examples of double categories:
In many double categories that we meet “in the wild”, the arrows will be function-like and the proarrows relation-like. These double categories are called equipments. In these cases, we can turn functions into relations by taking their graphs. This can be realized in the graphical calculus by bending vertical arrows horizontal.
Companions, Conjoints, and Equipments
An arrow has a companion if there is a proarrow together with two 2-cells such that
[two string-diagram equations: the kink identities]
I call these the “kink identities”, because they are reminiscent of the “zig-zag identities” for adjunctions in string diagrams. We can think of the companion as the graph of the arrow, viewed as a subset of its domain times its codomain.
Similarly, an arrow is said to have a conjoint if there is a proarrow together with two 2-cells such that
[two string-diagram equations, dual to the kink identities]
Definition: A proarrow equipment is a double category where every arrow has a conjoint and a companion.
The prototypical example of a proarrow equipment, and also the reason for the name, is the equipment of categories, functors, profunctors, and profunctor morphisms. In this equipment, companions are the restriction of the hom of the codomain by the functor on the left, and conjoints are the restriction of the hom of the codomain by the functor on the right.
In the equipment with objects sets, arrows functions, and proarrows relations, the companion and conjoint are the graph of a function as a relation from the domain to codomain or from the codomain to domain respectively.
The following lemma is a central elementary result of the theory of equipments:
Lemma (Spider Lemma): In an equipment, we can bend arrows. More formally, there is a bijective correspondence between diagrams of the form on the left and diagrams of the form on the right:
[the corresponding string diagrams, related by bending]
Proof. The correspondence is given by composing the outermost vertical or horizontal arrows by their companion or conjoint (co)units, as suggested by the slight bends in the arrows above. The kink identities then ensure that these two processes are inverse to each other, giving the desired bijection.
In his post, Mike calls this the “fundamental lemma”. This is the engine humming under the graphical calculus; in short, the Spider Lemma says that we can bend vertical wires horizontal. We can use this bending to prove a classical result of category theory in a very general setting.
Hom-Set and Zig-Zag Adjunctions
It is a classical fact of category theory that an adjunction $f \dashv g : A \rightleftarrows B$ may be defined using natural transformations $\eta : \mathrm{id} \to fg$ and $\epsilon : gf \to \mathrm{id}$ (which we will call a zig-zag adjunction, after the coherence conditions they have to satisfy – also called the triangle equations), or by giving a natural isomorphism $\psi : B(f, 1) \cong A(1, g)$. This equivalence holds in any proarrow equipment, which we can now show quickly and intuitively with string diagrams.
Suppose we have an adjunction $f \dashv g$, given by the vertical cells $\eta$ and $\epsilon$, satisfying the zig-zag identities
[the two zig-zag identities, drawn as string diagrams]
By bending the unit and counit, we get two horizontal cells. Bending the zig-zag identities shows that these maps are inverse to each other:
[a chain of string-diagram equalities]
and they are therefore the natural isomorphism $B(f, 1) \cong A(1, g)$ we wanted.
Going the other way, suppose $\psi$ is a natural isomorphism with an inverse. That is,
[two string-diagram equations (1) expressing that the maps are mutually inverse]
Then we can define a unit and counit by bending. These satisfy the zig-zag identities by pulling straight and using (1):
[chains of string-diagram equalities]
Though this proof can be discovered graphically, it specializes to the usual argument in the case that the equipment is an equipment of enriched categories!
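A concrete instance of the hom-set/zig-zag equivalence, in the familiar setting of sets and functions, is the product–exponential adjunction, where the isomorphism $\hom(A \times B, C) \cong \hom(A, C^B)$ is currying. This small sketch (my illustrative example; the post works at the general string-diagram level) spells out $\psi$, its inverse, and the unit and counit, and checks them on sample inputs:

```python
# Currying as an adjunction: (- x B) left adjoint to (-)^B in Set.
# psi : hom(A x B, C) ~ hom(A, C^B) is curry; its inverse is uncurry.

def curry(f):
    """One direction of psi: hom(A x B, C) -> hom(A, C^B)."""
    return lambda a: lambda b: f(a, b)

def uncurry(g):
    """The inverse direction: hom(A, C^B) -> hom(A x B, C)."""
    return lambda a, b: g(a)(b)

def unit(a):
    """eta_A : A -> (A x B)^B, the unit of the adjunction."""
    return lambda b: (a, b)

def counit(h, b):
    """eps_C : C^B x B -> C, the counit (evaluation)."""
    return h(b)

# psi and its inverse really are mutually inverse on a sample function:
f = lambda a, b: a + 2 * b
assert uncurry(curry(f))(3, 4) == f(3, 4)

# One zig-zag identity, checked pointwise:
# eps_{AxB} composed with (eta_A x id_B) is the identity on A x B.
assert counit(unit(3), 4) == (3, 4)
```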
And Much, Much More!
In the paper, you’ll find that every deformation of an equipment diagram gives the same composite – the graphical calculus is sound. But you’ll also find an application of the calculus: a “Yoneda-style” embedding of every equipment into the equipment of categories enriched in it. The paper still definitely needs some work, so I welcome any feedback in the comments!
I hope these string diagrams make using equipments easier and more fun.
August 10, 2017
Symmetrybreaking - Fermilab/SLAC
The new Fermilab Accelerator Science and Technology facility at Fermilab looks to the future of accelerator science.
Unlike most particle physics facilities, the new Fermilab Accelerator Science and Technology facility (FAST) wasn’t constructed to find new particles or explain basic physical phenomena. Instead, FAST is a kind of workshop—a space for testing novel ideas that can lead to improved accelerator, beamline and laser technologies.
Historically, accelerator research has taken place on machines that were already in use for experiments, making it difficult to try out new ideas. Tinkering with a physicist’s tools mid-search for the secrets of the universe usually isn’t a great idea. By contrast, FAST enables researchers to study pieces of future high-intensity and high-energy accelerator technology with ease.
“FAST is specifically aiming to create flexible machines that are easily reconfigurable and that can be accessed on very short notice,” says Alexander Valishev, head of the department that manages FAST. “You can roll in one experiment and roll the other out in a matter of days, maybe months, without expensive construction and operation costs.”
This flexibility is part of what makes FAST a useful place for training up new accelerator scientists. If a student has an idea, or something they want to study, there’s plenty of room for experimentation.
“We want students to come and do their thesis research at FAST, and we already have a number of students working,” Valishev says. “We have already had a PhD awarded on the basis of work done at FAST, but we want more of that.”
Small ring, bright beam
FAST will eventually include three parts: an electron injector, a proton injector and a particle storage ring called the Integrable Optics Test Accelerator, or IOTA. Although it will be small compared to other rings—only 40 meters long, while Fermilab’s Main Injector has a circumference of 3 kilometers—IOTA will be the centerpiece of FAST after its completion in 2019. And it will have a unique feature: the ability to switch from being an electron accelerator to a proton accelerator and back again.
“The sole purpose of this synchrotron is to test accelerator technology and develop that tech to test ideas and theories to improve accelerators everywhere,” says Dan Broemmelsiek, a scientist in the IOTA/FAST department.
One aspect of accelerator technology FAST focuses on is creating higher-intensity or “brighter” particle beams.
Brighter beams pack a bigger particle punch. A high-intensity beam could send a detector twice as many particles as is usually possible. Such an experiment could be completed in half the time, shortening the data collection period by several years.
IOTA will test a new concept for accelerators called integrable optics, which is intended to create a more concentrated, stable beam, possibly producing higher intensity beams than ever before.
“If this IOTA thing works, I think it could be revolutionary,” says Jamie Santucci, an engineering physicist working on FAST. “It’s going to allow all kinds of existing accelerators to pack in way more beam. More beam, more data.”
Maximum energy milestone
Although the completion of IOTA is still a few years away, the electron injector will reach a milestone this summer: producing an electron beam with an energy of 300 million electronvolts (300 MeV).
“The electron injector for IOTA is a research vehicle in its own right,” Valishev says. It provides scientists a chance to test superconducting accelerators, a key piece of technology for future physics machines that can produce intense acceleration at relatively low power.
“At this point, we can measure things about the beam, chop it up or focus it,” Broemmelsiek says. “We can use cameras to do beam diagnostics, and there’s space here in the beamline to put experiments to test novel instrumentation concepts.”
The electron beam’s previous maximum energy of 50 MeV was achieved by passing the beam through two superconducting accelerator cavities and has already provided opportunities for research. The arrival of the 300 MeV beam this summer—achieved by sending the beam through another eight superconducting cavities—will open up new possibilities for accelerator research, with some experiments already planned to start as soon as the beam is online.
FAST forward
The third phase of FAST, once IOTA is complete, will be the construction of the proton injector.
“FAST is unique because we will specifically target creating high-intensity proton beams,” Valishev says.
This high-intensity proton beam research will directly translate to improving research into elusive particles called neutrinos, Fermilab’s current focus.
“In five to 10 years, you’ll be talking to a neutrino guy and they’ll go, ‘I don’t know what the accelerator guys did, but it’s fabulous. We’re getting more neutrinos per hour than we ever thought we would,’” Broemmelsiek says.
Creating new accelerator technology is often an overlooked area in particle physics, but the freedom to try out new ideas and discover how to build better machines for research is inherently rewarding for people who work at FAST.
“Our business is science, and we’re supposed to make science, and we work really hard to do that,” Broemmelsiek says. “But it’s also just plain ol’ fun.”
August 09, 2017
Tommaso Dorigo - Scientificblogging
August 08, 2017
ZapperZ - Physics and Physicists
Now, they have used microwaves to flip the spin of the positron. This resulted not only in the first precise determination of the antihydrogen hyperfine splitting, but also the first antimatter transition line shape, a plot of the spin flip probability versus the microwave frequency.
“The data reveal clear and distinct signatures of two allowed transitions, from which we obtain a direct, magnetic-field-independent measurement of the hyperfine splitting,” the researchers said.
“From a set of trials involving 194 detected atoms, we determine a splitting of 1,420.4 ± 0.5 MHz, consistent with expectations for atomic hydrogen at the level of four parts in 10,000.”
I am expecting a lot more studies on these antihydrogen atoms, especially now that there is a very reliable way of sustaining them.
The paper is open access in Nature, so you should be able to read the entire thing for free.
Zz.
by ZapperZ (noreply@blogger.com) at August 08, 2017 03:20 PM
Symmetrybreaking - Fermilab/SLAC
Prototype tests of the future SuperCDMS SNOLAB experiment are in full swing.
When an extraordinarily sensitive dark matter experiment goes online at one of the world’s deepest underground research labs, the chances are better than ever that it will find evidence for particles of dark matter—a substance that makes up 85 percent of all matter in the universe but whose constituents have never been detected.
The heart of the experiment, called SuperCDMS SNOLAB, will be one of the most sensitive detectors for hypothetical dark matter particles called WIMPs, short for “weakly interacting massive particles.” SuperCDMS SNOLAB is one of two next-generation experiments (the other one being an experiment called LZ) selected by the US Department of Energy and the National Science Foundation to take the search for WIMPs to the next level, beginning in the early 2020s.
“The experiment will allow us to enter completely unexplored territory,” says Richard Partridge, head of the SuperCDMS SNOLAB group at the Kavli Institute for Particle Astrophysics and Cosmology, a joint institute of Stanford University and SLAC National Accelerator Laboratory. “It’ll be the world’s most sensitive detector for WIMPs with relatively low mass, complementing LZ, which will look for heavier WIMPs.”
The experiment will operate deep underground at Canadian laboratory SNOLAB inside a nickel mine near the city of Sudbury, where 6800 feet of rock provide a natural shield from high-energy particles from space, called cosmic rays. This radiation would not only cause unwanted background in the detector; it would also create radioactive isotopes in the experiment’s silicon and germanium sensors, making them useless for the WIMP search. That’s also why the experiment will be assembled from major parts at its underground location.
A detector prototype is currently being tested at SLAC, which oversees the efforts of the SuperCDMS SNOLAB project.
Colder than the universe
The only reason we know dark matter exists is that its gravity pulls on regular matter, affecting how galaxies rotate and light propagates. But researchers believe that if WIMPs exist, they could occasionally bump into normal matter, and these collisions could be picked up by modern detectors.
SuperCDMS SNOLAB will use germanium and silicon crystals in the shape of oversized hockey pucks as sensors for these sporadic interactions. If a WIMP hits a germanium or silicon atom inside these crystals, two things will happen: The WIMP will deposit a small amount of energy, causing the crystal lattice to vibrate, and it’ll create pairs of electrons and electron deficiencies that move through the crystal and alter its electrical conductivity. The experiment will measure both responses.
“Detecting the vibrations is very challenging,” says KIPAC’s Paul Brink, who oversees the detector fabrication at Stanford. “Even the smallest amounts of heat cause lattice vibrations that would make it impossible to detect a WIMP signal. Therefore, we’ll cool the sensors to about one hundredth of a Kelvin, which is much colder than the average temperature of the universe.”
These chilly temperatures give the experiment its name: CDMS stands for “Cryogenic Dark Matter Search.” (The prefix “Super” indicates that the experiment is more sensitive than previous detector generations.)
The use of extremely cold temperatures will be paired with sophisticated electronics, such as transition-edge sensors that switch from a superconducting state of zero electrical resistance to a normal-conducting state when a small amount of energy is deposited in the crystal, as well as superconducting quantum interference devices, or SQUIDs, that measure these tiny changes in resistance.
The experiment will initially have four detector towers, each holding six crystals. For each crystal material—silicon and germanium—there will be two different detector types, called high-voltage (HV) and interleaved Z-sensitive ionization phonon (iZIP) detectors. Future upgrades can further boost the experiment’s sensitivity by increasing the number of towers to 31, corresponding to a total of 186 sensors.
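The detector counts quoted above are internally consistent, as a quick check shows (a trivial sketch; the tower and crystal numbers are taken directly from the article):

```python
# Consistency check of the SuperCDMS SNOLAB detector counts above.
crystals_per_tower = 6
initial_towers = 4
upgraded_towers = 31

print(initial_towers * crystals_per_tower)   # 24 sensors initially
print(upgraded_towers * crystals_per_tower)  # 186 sensors after the upgrade
```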
Working hand in hand
The work under way at SLAC serves as a system test for the future SuperCDMS SNOLAB experiment. Researchers are testing the four different detector types, the way they are integrated into towers, their superconducting electrical connectors and the refrigerator unit that cools them down to a temperature of almost absolute zero.
“These tests are absolutely crucial to verify the design of these new detectors before they are integrated in the experiment underground at SNOLAB,” says Ken Fouts, project manager for SuperCDMS SNOLAB at SLAC. “They will prepare us for a critical DOE review next year, which will determine whether the project can move forward as planned.” DOE is expected to cover about half of the project costs, with the other half coming from NSF and a contribution from the Canadian Foundation for Innovation.
Important work is progressing at all partner labs of the SuperCDMS SNOLAB project. Fermi National Accelerator Laboratory is responsible for the cryogenics infrastructure and the detector shielding—both will enable searching for faint WIMP signals in an environment dominated by much stronger unwanted background signals. Pacific Northwest National Laboratory will lend its expertise in understanding background noise in highly sensitive precision experiments. A number of US universities are involved in various aspects of the project, including detector fabrication, tests, data analysis and simulation.
The project also benefits from international partnerships with institutions in Canada, France, the UK and India. The Canadian partners are leading the development of the experiment’s data acquisition and will provide the infrastructure at SNOLAB.
“Strong partnerships create a lot of synergy and make sure that we’ll get the best scientific value out of the project,” says Fermilab’s Dan Bauer, spokesperson of the SuperCDMS collaboration, which consists of 109 scientists from 22 institutions, including numerous universities. “Universities have lots of creative students and principal investigators, and their talents are combined with the expertise of scientists and engineers at the national labs, who are used to successfully manage and build large projects.”
SuperCDMS SNOLAB will be the fourth generation of experiments, following CDMS-I at Stanford, CDMS-II at the Soudan mine in Minnesota, and a first version of SuperCDMS at Soudan, which completed operations in 2015.
“Over the past 20 years we’ve been pushing the limits of our detectors to make them more and more sensitive for our search for dark matter particles,” says KIPAC’s Blas Cabrera, project director of SuperCDMS SNOLAB. “Understanding what constitutes dark matter is as fundamental and important today as it was when we started, because without dark matter none of the known structures in the universe would exist—no galaxies, no solar systems, no planets and no life itself.”
John Baez - Azimuth
In the comments on this blog post I’m taking some notes on this conference:
• Applied Algebraic Topology 2017, August 8-12, 2017, Hokkaido University, Sapporo, Japan.
Unfortunately these notes will not give you a good summary of the talks—and almost nothing about applications of algebraic topology. Instead, I seem to be jotting down random cool math facts that I’m learning and don’t want to forget.
August 07, 2017
CERN Bulletin
Cooperative open to international civil servants. We welcome you to discover the advantages and discounts negotiated with our suppliers either on our website www.interfon.fr or at our information office located at CERN, on the ground floor of bldg. 504, open Monday through Friday from 12.30 to 15.30.
CERN Bulletin
Yoga club activities resume on 1 September
Yoga, Sophrology, Tai Chi, Meditation
Are you looking for well-being, serenity,
physical fitness, flexibility of body and mind?
Do you want to reduce your stress?
Join the yoga club!
Classes every day of the week,
10 different teachers
CERN Bulletin
Wednesday 9 August 2017 at 20.00
CERN Council Chamber
The Fifth Element
Directed by Luc Besson
France, 1997, 126 min
Two hundred and fifty years in the future, life as we know it is threatened by the arrival of Evil. Only the Fifth Element can stop the Evil from extinguishing life, as it tries to do every five thousand years. She is assisted by a former elite commando turned cab driver, Korben Dallas, who is, in turn, helped by Prince/Arsenio clone, Ruby Rhod. Unfortunately, Evil is being assisted by Mr. Zorg, who seeks to profit from the chaos that Evil will bring, and his alien mercenaries.
Original version English; French subtitles
* * * * * * * *
Wednesday 16 August 2017 at 20.00
CERN Council Chamber
Mad Max 2 - The Road Warrior
Directed by George Miller
Australia, 1982, 94 min
A former Australian policeman now living in the post-apocalyptic Australian outback as a warrior agrees to help a community of survivors living in a gasoline refinery to defend them and their gasoline supplies from evil barbarian warriors.
Original version English; French subtitles
* * * * * * * *
Wednesday 23 August 2017 at 20.00
CERN Council Chamber
THX 1138
Directed by George Lucas
USA, 1971, 86 min
The human race has been relocated to an underground city. There, the population is entertained by holographic TV, which broadcasts sex and violence, and a robotic police force enforces the law. All citizens are drugged to control their emotions and behaviour, and sex is a crime. Factory worker THX 1138 stops taking the drugs, breaks the law when he finds himself falling in love with his room-mate LUH 3417, and is imprisoned when LUH 3417 becomes pregnant. Escaping from jail with illegal programmer SEN 5241 and a hologram named SRT, THX 1138 goes in search of LUH 3417 and escapes to the surface, whilst being pursued by robotic policemen.
Original version English; French subtitles
CERN Bulletin
Summer is here, enjoy our offers for the water parks!
Walibi:
Tickets "Zone terrestre": 24 € instead of 30 €.
Access to Aqualibi: 5 € instead of 6 € on presentation of your ticket purchased at the Staff Association.
Bonus! Free for children under 100 cm, with limited access to the attractions.
Free car park.
* * * * * * * *
Aquaparc:
Day ticket:
- Children: 33 CHF instead of 39 CHF
- Adults: 33 CHF instead of 49 CHF
Bonus! Free for children under 5 years old.
CERN Bulletin
Would you like to learn a new sport and meet new people?
The CERN Golf Club organises golf lessons for beginners starting in August or September.
The lesson series consists of 6 weekly lessons of 1h30, in a group of 6 people, given by the instructor Cedric Steinmetz at the Jiva Hill golf course in Crozet: http://www.jivahillgolf.com
The cost for the golf lessons is 40 euros for CERN employees or family members plus the golf club membership fee of 30 CHF.
If you are interested in participating in these lessons or need more details, please contact us by email at: club-golf-committee@cern.ch
August 05, 2017
The n-Category Cafe
In June I went to the following conference.
This was held at the Będlewo Conference Centre which is run by the Polish Academy of Sciences’ Institute of Mathematics. Like Oberwolfach it is kind of in the middle of nowhere, being about half an hour’s bus ride from Poznan. (As our excursion guide told us, Poznan is 300km from anywhere: 300 km from Warsaw, 300 km from Berlin, 300 km from the sea and 300 km from the mountains.) You get to eat and drink in the palace pictured below; the seminar rooms and accommodation are in a somewhat less grand building out of shot of the photo.
I gave a 20-minute, magnitude-related talk. You can download the slides below. Do try the BuzzFeed-like quiz at the end. How many of the ten spaces can you identify just from their dimension profile?
To watch the animation I think that you will have to use Acrobat Reader. If you don’t want to use that, there’s a movie-free version.
Here’s the abstract.
Some spaces seem to have different dimensions at different scales. A long thin strip might appear one-dimensional at a distance, then two-dimensional when zoomed in on, but when zoomed in on even closer it is seen to be made of a finite array of points, so at that scale it seems zero-dimensional. I will present a way of quantifying this phenomenon.
The main idea is to think of dimension as corresponding to growth rate of size: when you double distances, a line will double in size and a square will quadruple in size. You then just need some good notions of size of metric spaces. One such notion is ‘magnitude’, which was introduced by Leinster using category-theoretic ideas, but was found to have links to many other areas of maths such as biodiversity and potential theory. There’s a closely related, but computationally more tractable, family of notions of size called ‘spreads’ which I introduced following connections with biodiversity.
Meckes showed that the asymptotic growth rate of the magnitude of a metric space is the Minkowski dimension (i.e. the usual dimension for squares and lines and the usual fractal dimension for things like Cantor sets). But this is zero for finite metric spaces. However, by considering the growth rate non-asymptotically you get interesting-looking results for finite metric spaces, such as the phenomenon described in the first paragraph.
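To make this concrete, here is a minimal numerical sketch (my own illustration, not code from the talk; the function names and the central-difference approximation are my choices). The magnitude of a finite metric space is the sum of the entries of the inverse of the similarity matrix \(Z_{ij} = e^{-t\,d(x_i,x_j)}\), and the instantaneous dimension is the growth rate of magnitude with respect to scale:

```python
import numpy as np

def magnitude(points, t=1.0):
    """Magnitude |tX| of a finite metric space X scaled by t.

    points: (n, k) array of coordinates, Euclidean metric assumed.
    The magnitude is the sum of the entries of the inverse of the
    similarity matrix Z_ij = exp(-t * d(x_i, x_j)).
    """
    diff = points[:, None, :] - points[None, :, :]
    d = np.sqrt((diff ** 2).sum(axis=-1))
    Z = np.exp(-t * d)
    return float(np.linalg.inv(Z).sum())

def instantaneous_dimension(points, t, eps=1e-4):
    """Growth rate d log|tX| / d log t, via a central difference.

    A line-like space should give roughly 1, a square-like one
    roughly 2, and any finite set at large enough scales roughly 0.
    """
    m_lo = magnitude(points, t * (1 - eps))
    m_hi = magnitude(points, t * (1 + eps))
    return (np.log(m_hi) - np.log(m_lo)) / (np.log(1 + eps) - np.log(1 - eps))

# A "long thin strip": a 100 x 2 grid of points, as in the abstract.
xs, ys = np.meshgrid(np.arange(100.0), np.arange(2.0))
strip = np.column_stack([xs.ravel(), ys.ravel()])
for t in (0.1, 1.0, 10.0):
    print(t, instantaneous_dimension(strip, t))
```

Scanning over several scales t for the strip shows the apparent dimension changing with scale, which is the phenomenon described above.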
I have blogged about instantaneous dimension before at this post. One connection with applied topology is that, as for persistent homology, one considers what happens to a metric space as you scale the metric.
The talk was in the smallest room of three parallel talks, so I had a reasonably small audience. However, it was very nice that almost everyone who was in the talk came up and spoke to me about it afterwards; some even told me how I could calculate magnitude of large metric spaces much faster! For instance Brad Nelson showed me how you can use iterative methods, such as the Krylov subspace method, for solving large linear systems numerically. This is much faster than just naively asking Maple to solve the linear system.
Anyway, do say below how well you did in the quiz!
by willerton (S.Willerton@sheffield.ac.uk) at August 05, 2017 05:31 PM
Clifford V. Johnson - Asymptotia
So here's some big USC news that you're probably not hearing about elsewhere. I think it's the best thing that's happened on campus for a long time, and it's well worth noting. As of today (4th August, when I wrote this), there's a Trader Joe's on campus!
It opened (relatively quietly) today and I stopped by on my way home to pick up a few things - something I've fantasized about doing for some time. It's a simple thing but it's also a major thing in my opinion. Leaving aside the fact that I can now sometimes get groceries on the way home (with a subway stop just a couple of blocks away) - and also now more easily stock up my office with long workday essentials like Scottish shortbread and sardines in olive oil - there's another reason this is big news. This part of the city (and points south) simply doesn't have as many good options for healthy food as other parts of the city. It is still big news when a grocery store like this opens south of the 10 freeway. In fact, aside from the West side (where the demographic changes significantly), there were *no* Trader Joe's stores south of the 10 until this one opened today**. (Yes, in 2017 - I can wait while you check your calendar.) I consider this at least as significant (if not more so) as the Whole Foods opening in downtown at [...] Click to continue reading this post
The post The Big USC News You Haven’t Heard… appeared first on Asymptotia.
The n-Category Cafe
People have been using algebraic topology in data analysis these days, so we’re starting to see conferences like this:
- Applied Algebraic Topology 2017, August 8-12, 2017, Hokkaido University, Sapporo, Japan.
I’m giving the first talk at this one. I’ve done a lot of work on applied category theory, but only a bit on applied algebraic topology. It was tempting to smuggle in some categories, operads and props under the guise of algebraic topology. But I decided it would be more useful, as a kind of prelude to the conference, to say a bit about the overall history of algebraic topology, and its inner logic: how it was inevitably driven to categories, and then 2-categories, and then ∞-categories.
This may be the least ‘applied’ of all the talks at this conference, but I’m hoping it will at least trigger some interesting thoughts. We don’t want the ‘applied’ folks to forget the grand view that algebraic topology has to offer!
Here are my talk slides:
Abstract. As algebraic topology becomes more important in applied mathematics it is worth looking back to see how this subject has changed our outlook on mathematics in general. When Noether moved from working with Betti numbers to homology groups, she forced a new outlook on topological invariants: namely, they are often functors, with two invariants counting as ‘the same’ if they are naturally isomorphic. To formalize this it was necessary to invent categories, and to formalize the analogy between natural isomorphisms between functors and homotopies between maps it was necessary to invent 2-categories. These are just the first steps in the ‘homotopification’ of mathematics, a trend in which algebra more and more comes to resemble topology, and ultimately abstract ‘spaces’ (for example, homotopy types) are considered as fundamental as sets. It is natural to wonder whether topological data analysis is a step in the spread of these ideas into applied mathematics, and how the importance of ‘robustness’ in applications will influence algebraic topology.
I thank Mike Shulman for some help on model categories and quasicategories. Any mistakes are, of course, my own fault.
August 04, 2017
Clifford V. Johnson - Asymptotia
Yeah, I still hate doing crowd scenes. (And the next panel is an even wider shot. Why do I do this to myself?)
Anyway, this is a glimpse of the work I'm doing on the final colour for a short science fiction story I wrote and drew for an anthology collection to appear soon. I mentioned it earlier. (Can't say more yet because it's all hush-hush still, involving lots of fancy writers I've really no business keeping company with.) I've [...] Click to continue reading this post
The post Future Crowds… appeared first on Asymptotia.
Lubos Motl - string vacua and pheno
T2K presents hint of CP violation by neutrinos
The strange acronym T2K stands for Tokai to Kamioka. So the T2K experiment is located in Japan but the collaboration is heavily multi-national. It works much like the older K2K, KEK to Kamioka. Indeed, it's no coincidence that Kamioka sounds like Kamiokande. Average Japanese people probably tend to know the former, average physicists tend to know the latter. ;-)
Dear physicists, Kamiokande was named after Kamioka, not vice versa! ;-)
Muon neutrinos are created at the source.
These muon neutrinos go under ground through 295 kilometers of rock and they have the opportunity to change themselves into electron neutrinos.
In 2011, T2K claimed evidence for neutrino oscillations powered by \(\theta_{13}\), the last and least "usual" real angle in the mixing matrix. In Summer 2017, we still believe that this angle is nonzero, like the other two, \(\theta_{12}\) and \(\theta_{23}\), and F-theory, a version of string theory, had predicted its approximate magnitude rather correctly.
In 2013, they found more than 7-sigma evidence for electron-muon neutrino oscillations and received a Breakthrough Prize for that.
By some physical and technical arrangements, they are able to look at the oscillations of antineutrinos as well and measure all the processes. The handedness (left-handed or right-handed) of the neutrinos we know is correlated with their being neutrinos or antineutrinos. But this correlation makes it possible to conserve the CP-symmetry. If you replace neutrinos with antineutrinos and reflect all the reality and images in the mirror, so that left-handed become right-handed, the allowed left-handed neutrinos become the allowed right-handed antineutrinos so everything is fine.
But we know that the CP-symmetry is also broken by elementary particles in Nature – even though the spectrum of known particles and their allowed polarizations doesn't make this breaking unavoidable. The only experimentally confirmed source of CP-violation we know is the complex phase in the CKM matrix describing the relationship between upper-type and lower-type quark mass eigenstates.
Well, T2K has done some measurement and they have found some two-sigma evidence – deviation from the CP-symmetric predictions – supporting the claim that a similar CP-violating phase \(\delta_{CP}\), or another CP-violating effect, is nonzero even in the neutrino sector. So if it's true, the neutrinos' masses are qualitatively analogous to the quark masses. They have all the twists and phases and violations of naive symmetries that are allowed by the basic consistency.
Needless to say, the two-sigma evidence is very weak. Most such "weak caricatures of a discovery" eventually turn out to be coincidences and flukes. If they managed to collect 10 times more data and the two-sigma deviation really followed from a real effect, a symmetry breaking, then it would likely be enough to discover the CP-violation in the neutrino sector at 5 sigma – which is considered sufficient evidence for experimental physicists to brag, get drunk, scream "discovery, discovery", accept a prize, and get drunk again (note that the 5-sigma process has 5 stages).
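For orientation, the tail probabilities behind these thresholds are easy to compute; this is just the standard one-sided Gaussian tail, nothing specific to T2K:

```python
from math import erfc, sqrt

def one_sided_p(n_sigma):
    """One-sided tail probability of an n-sigma Gaussian fluctuation:
    p = (1/2) * erfc(n / sqrt(2))."""
    return 0.5 * erfc(n_sigma / sqrt(2))

for n in (2, 3, 5):
    p = one_sided_p(n)
    print(f"{n} sigma: p ~ {p:.2e}, about 1 in {1 / p:,.0f}")
```

A 2-sigma excess happens by chance roughly once in 40 tries, while 5 sigma corresponds to roughly one in 3.5 million, which is why the community treats the two so differently.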
Ivan Mládek, Japanese [people] in [Czech town of] Jablonec, "Japonci v Jablonci". Japanese men are walking through a Jablonec bijou exhibition and buying corals for the government and the king. The girl sees that one of them has a crush on her. He gives her corals and she's immediately his. I don't understand it, you, my Japanese boy, even though you are not a man of Jablonec, I will bring you home. I will feed you nicely, to turn you into a man, and I won't let you leave to any Japan after that again. Visual arts by fifth-graders.
So while I think that most two-sigma claims ultimately fade away, this particular candidate for a discovery sounds mundane enough so that it could be true and 2 sigma could be enough for you to believe it is true. Theoretically speaking, there is no good reason to think that the complex phase should be absent in the neutrino sector. If quarks and leptons differ in such aspects, I think that neutrinos tend to have larger and more generic angles than the quarks, not vice versa.
by Luboš Motl (noreply@blogger.com) at August 04, 2017 05:34 PM
ZapperZ - Physics and Physicists
And it’s really difficult to detect these gentle interactions. Collar’s group bombarded their detector with trillions of neutrinos per second, but over 15 months, they only caught a neutrino bumping against an atomic nucleus 134 times. To block stray particles, they put 20 feet of steel and a hundred feet of concrete and gravel between the detector and the neutrino source. The odds that the signal was random noise is less than 1 in 3.5 million—surpassing particle physicists’ usual gold standard for announcing a discovery. For the first time, they saw a neutrino nudge an entire atomic nucleus.
Currently, the entire paper is available from the Science website.
Zz.
by ZapperZ (noreply@blogger.com) at August 04, 2017 12:58 AM
August 03, 2017
Lubos Motl - string vacua and pheno
Dark Energy Survey reveals most accurate measurement of dark matter structure in the universe
This post celebrates a new result by the Dark Energy Survey (DES), a multinational collaboration studying dark matter and dark energy using a telescope in Chile, at an altitude of 2,200 meters.
DES wants to produce results similar to those of Planck – the modern sibling of WMAP and COBE, a satellite that studies the cosmic microwave background temperature in various directions very finely – but its method is very different. The DES telescope looks at things in the infrared – but it is looking at "regular things" such as the number of galaxy clusters, weak gravitational lensing, type Ia supernovae, and baryon acoustic oscillations.
It sounds incredible to me but the DES transnational team is capable of detecting tiny distortions of the images of distant galaxies caused by gravitational lensing, and by measuring how much distortion there is in a given direction, they determine the density of dark matter in that direction.
In the end, they determine some of the same cosmological parameters as Planck, e.g. that dark energy makes up about 70 percent of the energy density of our Universe on average. And especially if you focus on a two-dimensional plane, you may see a slight disagreement between Planck's measurement based on the CMB and "pretty much all other methods" to measure the cosmological parameters.
Planck (the blue blob) implies a slightly higher fraction of the matter in the Universe, perhaps 30-40 percent, and a slightly higher clumpiness of matter than DES, whose fraction of the matter is between 24-30 percent. Meanwhile, all the measurements aside from the truly historical "pure CMB" Planck measurement – which includes DES and Planck's own analysis of clusters – seem to be in better agreement with each other.
So it's disappointing that cosmology still allows us to measure the fraction of matter just as "something between 25 and 40 percent or so" – the accuracy is lousier than we used to say. On the other hand, the disagreement is just 1.4-2.3 sigma, depending on what is exactly considered and how. This is a very low signal-to-noise ratio – the disagreement is very far from a discovery (we often like 5 sigma).
More importantly, even if the disagreement could be calculated to be 4 sigma or something like that, what's troubling is that such a disagreement gives us almost no clue about "how we should modify our standard cosmological model" to improve the fit. An extra sterile neutrino could be the thing we need. Or some cosmic strings added to the Universe. Or a modified profile for some galactic dark matter. But maybe some holographic MOND-like modification of gravity is desirable. Or a different model of dark energy – some variable cosmological constant. Or something totally different – if you weren't impressed by the fundamental diversity of the possible explanations I have mentioned.
The disagreement in one or two parameters is just way too little information to give us (by us, I mean the theorists) useful clues. So even if I can imagine that in some distant future, perhaps in the year 2200, people will already agree that our model of the cosmological constant was seriously flawed in some way I can't imagine now, the observations provide us with no guide telling us where we should go from here.
Aside from the DES telescope, Chile has similar compartments and colors on their national flag as Czechia and they also have nice protocol pens with pretty good jewels that every wise president simply has to credibly appreciate. When I say "credibly", it means not just by words and clichés but by acts, too.
So even if the disagreement were 4 sigma, I just wouldn't switch to a revolutionary mode – partly because the statistical significance isn't quite persuasive, partly because I don't know what kind of a revolution I should envision or participate in.
That's why I prefer to interpret the result of DES as something that isn't quite new or ground-breaking but that still shows how nontrivially we understand the life of the Universe that has been around for 13.800002017 ;-) billion years so far and how very different ways to interpret the fields in the Universe seem to yield (almost) the same outcome.
You may look for some interesting relevant tweets by cosmologist Shaun Hotchkiss.
by Luboš Motl (noreply@blogger.com) at August 03, 2017 04:03 PM
Symmetrybreaking - Fermilab/SLAC
The Dark Energy Survey reveals the most accurate measurement of dark matter structure in the universe.
Imagine planting a single seed and, with great precision, being able to predict the exact height of the tree that grows from it. Now imagine traveling to the future and snapping photographic proof that you were right.
If you think of the seed as the early universe, and the tree as the universe the way it looks now, you have an idea of what the Dark Energy Survey (DES) collaboration has just done. In a presentation today at the American Physical Society Division of Particles and Fields meeting at the US Department of Energy’s (DOE) Fermi National Accelerator Laboratory, DES scientists will unveil the most accurate measurement ever made of the present large-scale structure of the universe.
These measurements of the amount and “clumpiness” (or distribution) of dark matter in the present-day cosmos were made with a precision that, for the first time, rivals that of inferences from the early universe by the European Space Agency’s orbiting Planck observatory. The new DES result (the tree, in the above metaphor) is close to “forecasts” made from the Planck measurements of the distant past (the seed), allowing scientists to understand more about the ways the universe has evolved over 14 billion years.
“This result is beyond exciting,” says Scott Dodelson of Fermilab, one of the lead scientists on this result. “For the first time, we’re able to see the current structure of the universe with the same clarity that we can see its infancy, and we can follow the threads from one to the other, confirming many predictions along the way.”
Most notably, this result supports the theory that 26 percent of the universe is in the form of mysterious dark matter and that space is filled with an also-unseen dark energy, which makes up 70 percent and is causing the accelerating expansion of the universe.
Paradoxically, it is easier to measure the large-scale clumpiness of the universe in the distant past than it is to measure it today. In the first 400,000 years following the Big Bang, the universe was filled with a glowing gas, the light from which survives to this day. Planck’s map of this cosmic microwave background radiation gives us a snapshot of the universe at that very early time. Since then, the gravity of dark matter has pulled mass together and made the universe clumpier over time. But dark energy has been fighting back, pushing matter apart. Using the Planck map as a start, cosmologists can calculate precisely how this battle plays out over 14 billion years.
“The DES measurements, when compared with the Planck map, support the simplest version of the dark matter/dark energy theory,” says Joe Zuntz, of the University of Edinburgh, who worked on the analysis. “The moment we realized that our measurement matched the Planck result within 7 percent was thrilling for the entire collaboration.”
The primary instrument for DES is the 570-megapixel Dark Energy Camera, one of the most powerful in existence, able to capture digital images of light from galaxies eight billion light-years from Earth. The camera was built and tested at Fermilab, the lead laboratory on the Dark Energy Survey, and is mounted on the National Science Foundation’s 4-meter Blanco telescope, part of the Cerro Tololo Inter-American Observatory in Chile, a division of the National Optical Astronomy Observatory. The DES data are processed at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign.
Scientists on DES are using the camera to map an eighth of the sky in unprecedented detail over five years. The fifth year of observation will begin in August. The new results released today draw from data collected only during the survey’s first year, which covers 1/30th of the sky.
“It is amazing that the team has managed to achieve such precision from only the first year of their survey,” says National Science Foundation Program Director Nigel Sharp. “Now that their analysis techniques are developed and tested, we look forward with eager anticipation to breakthrough results as the survey continues.”
DES scientists used two methods to measure dark matter. First, they created maps of galaxy positions as tracers, and second, they precisely measured the shapes of 26 million galaxies to directly map the patterns of dark matter over billions of light-years using a technique called gravitational lensing.
To make these ultra-precise measurements, the DES team developed new ways to detect the tiny lensing distortions of galaxy images, an effect not even visible to the eye, enabling revolutionary advances in understanding these cosmic signals. In the process, they created the largest guide to spotting dark matter in the cosmos ever drawn (see image). The new dark matter map is 10 times the size of the one DES released in 2015 and will eventually be three times larger than it is now.
“It’s an enormous team effort and the culmination of years of focused work,” says Erin Sheldon, a physicist at the DOE’s Brookhaven National Laboratory, who co-developed the new method for detecting lensing distortions.
These results and others from the first year of the Dark Energy Survey will be released today online and announced during a talk by Daniel Gruen, NASA Einstein fellow at the Kavli Institute for Particle Astrophysics and Cosmology at DOE’s SLAC National Accelerator Laboratory, at 5 pm Central time. The talk is part of the APS Division of Particles and Fields meeting at Fermilab and will be streamed live.
The results will also be presented by Kavli fellow Elisabeth Krause of the Kavli Institute for Particle Astrophysics and Cosmology at SLAC at the TeV Particle Astrophysics Conference in Columbus, Ohio, on Aug. 9; and by Michael Troxel, postdoctoral fellow at the Center for Cosmology and AstroParticle Physics at Ohio State University, at the International Symposium on Lepton Photon Interactions at High Energies in Guangzhou, China, on Aug. 10. All three of these speakers are coordinators of DES science working groups and made key contributions to the analysis.
“The Dark Energy Survey has already delivered some remarkable discoveries and measurements, and they have barely scratched the surface of their data,” says Fermilab Director Nigel Lockyer. “Today’s world-leading results point forward to the great strides DES will make toward understanding dark energy in the coming years.”
A version of this article was published by Fermilab.
August 02, 2017
ZapperZ - Physics and Physicists
Collisions with heavy ions—typically gold or lead—put lots of protons and neutrons in a small volume with lots of energy. Under these conditions, the neat boundaries of those particles break down. For a brief instant, quarks and gluons mingle freely, creating a quark-gluon plasma. This state of matter has not been seen since an instant after the Big Bang, and it has plenty of unusual properties. "It has all sorts of superlatives," Ohio State physicist Mike Lisa told Ars. "It is the most easily flowing fluid in nature. It's highly explosive, much more than a supernova. It's hotter than any fluid that's known in nature."
[...]
We can now add another superlative to the quark-gluon plasma's list of "mosts": it can be the most rapidly spinning fluid we know of. Much of the study of the material has focused on the results of two heavy ions smacking each other head-on, since that puts the most energy into the resulting debris, and these collisions spit out the most particles. But in many collisions, the two ions don't hit each other head-on—they strike a more glancing blow.
It is a fascinating article, and you may read the significance of this study, especially in relation to how it informs us on certain aspect of QCD symmetry.
But if you know me, I never fail to try to point something out that is more general in nature, and something that the general public should take note of. I like this statement in the article very much, and I'd like to highlight it here:
But a logical "should" doesn't always equal a "does," so it's important to confirm that the resulting material is actually spinning. And that's a rather large technical challenge when you're talking about a glob of material roughly the same size as an atomic nucleus.
This is what truly distinguishes science from other aspects of our lives. There are many instances, especially in politics, social policies, etc., where certain assertions are made that appear to be "obvious" or "logical", and yet these are simply statements without any valid evidence to support them. I can think of many ("illegal immigrants take away jobs", "gay marriage undermines traditional marriage", etc...etc.). Yet, no matter how "logical" these may appear to be, they remain statements devoid of supporting evidence. Still, whenever they are uttered, many in the public accept them as FACTS or valid, without seeking or requiring evidence. One may believe that "A should cause B", but DOES IT REALLY?
Luckily, this is NOT how it is done in science. No matter how obvious something seems, or how well verified it is, there are always new boundaries to push and ideas to retest, even ones known to be true under certain conditions. And experimental evidence is the ONLY standard that will settle and verify any assertion.
This is why everyone should learn science, not just for the material, but to understand the methodology and technique. It is too bad they don't require politicians to have such skills.
Zz.
by ZapperZ (noreply@blogger.com) at August 02, 2017 10:45 PM
ZapperZ - Physics and Physicists
The worlds of chemistry and indistinguishable physics have long been thought of as entirely separate. Indistinguishability generally occurs at low temperatures while chemistry requires relatively high temperatures where objects tend to lose their quantum properties. As a result, chemists have long felt confident in ignoring the effects of quantum indistinguishability.
Today, Matthew Fisher and Leo Radzihovsky at the University of California, Santa Barbara, say that this confidence is misplaced. They show for the first time that quantum indistinguishability must play a significant role in some chemical processes even at ordinary temperatures. And they say this influence leads to entirely new chemical phenomena, such as isotope separation, and could also explain previously mysterious ones, such as the enhanced chemical activity of reactive oxygen species.
They have uploaded their paper to arXiv.
Of course, this is still preliminary, but it provides the motivation to really explore an aspect that had not been seriously considered before. And with this latest addition, it is just another example of how physics, especially QM, is being further explored in biology and chemistry.
Zz.
by ZapperZ (noreply@blogger.com) at August 02, 2017 10:28 PM
August 01, 2017
Symmetrybreaking - Fermilab/SLAC
The sprawling Square Kilometer Array radio telescope hunts signals from one of the quietest places on earth.
When you think of radios, you probably think of noise. But the primary requirement for building the world’s largest radio telescope is keeping things almost perfectly quiet.
Radio signals are constantly streaming to Earth from a variety of sources in outer space. Radio telescopes are powerful instruments that can peer into the cosmos—through clouds and dust—to identify those signals, picking them up like a signal from a radio station. To do it, they need to be relatively free from interference emitted by cell phones, TVs, radios and their kin.
That’s one reason the Square Kilometer Array is under construction in the Great Karoo, 400,000 square kilometers of arid, sparsely populated South African plain, along with a component in the Outback of Western Australia. The Great Karoo is also a prime location because of its high altitude—radio waves can be absorbed by atmospheric moisture at lower altitudes. SKA currently covers some 1320 square kilometers of the landscape.
Even in the Great Karoo, scientists need careful filtering of environmental noise. Effects from different levels of radio frequency interference (RFI) can range from “blinding” to actually damaging the instruments. Through South Africa’s Astronomy Geographic Advantage Act, SKA is working toward “radio protection,” which would dedicate segments of the bandwidth for radio astronomy while accommodating other private and commercial RF service requirements in the region.
“Interference affects observational data and makes it hard and expensive to remove or filter out the introduced noise,” says Bernard Duah Asabere, Chief Scientist of the Ghana team of the African Very Long Baseline Interferometry Network (African VLBI Network, or AVN), one of the SKA collaboration groups in eight other African nations participating in the project.
SKA “will tackle some of the fundamental questions of our time, ranging from the birth of the universe to the origins of life,” says SKA Director-General Philip Diamond. Among the targets: dark energy, Einstein’s theory of gravity and gravitational waves, and the prevalence of the molecular building blocks of life across the cosmos.
SKA-South Africa can detect radio spectrum frequencies from 350 megahertz to 14 gigahertz. Its partner Australian component will observe the lower-frequency scale, from 50 to 350 megahertz. Visible light, for comparison, has frequencies ranging from 400 to 800 million megahertz. SKA scientists will process radiofrequency waves to form a picture of their source.
A precursor instrument to SKA called MeerKAT (named for the squirrel-sized critters indigenous to the area) is under construction in the Karoo. With its first 16 dishes, the South African array achieved first light on June 19, 2016. MeerKAT focused on 0.01 percent of the sky for 7.5 hours and saw 1300 galaxies—nearly double the number previously known in that segment of the cosmos.
Since then, MeerKAT has met another milestone: 32 antennas are now fully integrated and are being commissioned for science operations. MeerKAT will reach its full array of 64 dishes early next year, making it one of the world’s premier radio telescopes. It will eventually be integrated into SKA Phase 1, where an additional 133 dishes will be built, bringing the total number of antennas for SKA Phase 1 in South Africa to 197 by 2023.
On completion of SKA Phase 2 by 2030, the detection area of the receiver dishes will exceed 1 square kilometer, or about 11,000,000 square feet. Its huge size will make it 50 times more sensitive than any other radio telescope. It is expected to operate for 50 years.
SKA is managed by a 10-nation consortium, including the UK, China, India and Australia as well as South Africa, and receives support from another 10 countries, including the US. The project is headquartered at Jodrell Bank Observatory in the UK.
The full SKA will use radio dishes across Africa and Australia, and collaboration members say it will have a farther reach and more detailed images than any existing radio telescope.
In preparation for the SKA, South Africa and its partner countries developed AVN to establish a network of radiotelescopes across the African continent. One of its projects is the refurbishing of redundant 30-meter-class antennas, or building new ones across the partner countries, to operate as networked radio telescopes.
The first project of its kind is the AVN Ghana project, where an idle 32-meter-diameter dish has been refurbished and revamped with a dual receiver system at 5 and 6.7 gigahertz central frequencies for use as a radio telescope. The dish was previously owned and operated by the government and the company Vodafone Ghana as a telecommunications facility. Now it will explore celestial objects such as extragalactic nebulae, pulsars and other RF sources in space, such as the masers found in molecular clouds.
Asabere’s group will be able to tap into areas of SKA’s enormous database (several supercomputers’ worth) over the Internet. So will groups in Botswana, Kenya, Madagascar, Mauritius, Mozambique, Namibia and Zambia. SKA is also offering extensive outreach in participating countries and has already awarded 931 scholarships, fellowships and grants.
Other efforts in Ghana include introducing astronomy in the school curricula, training students in astronomy and related technologies, doing outreach in schools and universities, receiving visiting students at the telescope site and hosting programs such as the West African International Summer School for Young Astronomers taking place this week.
Asabere, who earned his advanced degrees in Sweden (Chalmers University of Technology) and South Africa (University of Johannesburg), would like to see more students trained in Ghana, and would like to get more researchers on board. He also hopes for the construction of the needed infrastructure, more local and foreign partnerships and strong governmental backing.
“I would like the opportunity to practice my profession on my own soil,” he says.
That day might not be far beyond the horizon. The Leverhulme-Royal Society Trust and Newton Fund in the UK are co-funding extensive human capital development programs in the SKA-AVN partner countries. A seven-member Ghanaian team, for example, has undergone training in South Africa and has been instructed in all aspects of the project, including the operation of the telescope.
Several PhD students and one MSc student from Ghana have received SKA-SA grants to pursue further education in astronomy and engineering. The Royal Society has awarded funding in collaboration with Leeds University to train two PhDs and 60 young aspiring scientists in the field of astrophysics.
Based on the success of the Leverhulme-Royal Society program, a joint UK-South Africa Newton Fund intervention (DARA—the Development in Africa with Radio Astronomy) has since been initiated in other partner countries to grow high technology skills that could lead to broader economic development in Africa.
As SKA seeks answers to complex questions over the next five decades, there should be plenty of opportunities for science throughout the Southern Hemisphere. Though it lives in one of the quietest places, SKA hopes to be heard loud and clear.
July 31, 2017
Symmetrybreaking - Fermilab/SLAC
A physics project kicks off construction a mile underground.
For many government officials, groundbreaking ceremonies are probably old hat—or old hardhat. But how many can say they’ve been to a groundbreaking that’s nearly a mile underground?
A group of dignitaries, including a governor and four members of Congress, now have those bragging rights. On July 21, they joined scientists and engineers 4850 feet beneath the surface at the Sanford Underground Research Facility to break ground on the Long-Baseline Neutrino Facility (LBNF).
LBNF will house massive, four-story-high detectors for the Deep Underground Neutrino Experiment (DUNE) to learn more about neutrinos—invisible, almost massless particles that may hold the key to how the universe works and why matter exists. Fourteen shovels full of dirt marked the beginning of construction for a project that could be, well, groundbreaking.
The Sanford Underground Research Facility in Lead, South Dakota resides in what was once the deepest gold mine in North America, which has been repurposed as a place for discovery of a different kind.
“A hundred years ago, we mined gold out of this hole in the ground. Now we’re going to mine knowledge,” said US Representative Kristi Noem of South Dakota in an address at the groundbreaking.
Transforming an old mine into a lab is more than just a creative way to reuse space. On the surface, cosmic rays from the sun constantly bombard us, causing cosmic noise in the sensitive detectors scientists use to look for rare particle interactions. But underground, shielded by nearly a mile of rock, there’s cosmic quiet. Cosmic rays are rare, making it easier for scientists to see what’s going on in their detectors without being clouded by interference.
Going down?
It may be easier to analyze data collected underground, but entering the subterranean science facility can be a chore. Nearly 60 people took a trip underground to the groundbreaking site, requiring some careful elevator choreography.
Before venturing into the deep below, reporters and representatives alike donned safety glasses, hardhats and wearable flashlights. They received two brass tags engraved with their names—one to keep and another to hang on a corkboard—a process called “brassing in.” This helps keep track of who’s underground in case of emergency.
The first group piled into the open-top elevator, known as a cage, to begin the descent. As the cage glides through a mile of mountain, it’s easy to imagine what it must have been like to be a miner back when Sanford Lab was the Homestake Mine. What’s waiting below may have changed, but the method of getting there hasn’t: The winch lowering the cage at 500 feet per minute is 80 years old and still works perfectly.
The ride to the 4850-level takes about 10 minutes in the cramped cage—it fits 35, but even with 20 people it feels tight. Water drips in through the ceiling as the open elevator chugs along, occasionally passing open mouths in the rock face of drifts once mined for gold.
“When you go underground, you start to think ‘It has never rained in here. And there’s never been daylight,’” says Tim Meyer, Chief Operating Officer of Fermilab, who attended the groundbreaking. “When you start thinking about being a mile below the surface, it just seems weird, like you’re walking through a piece of Swiss cheese.”
Where the cage stops at the 4850-level would be the destination of most elevator occupants on a normal day, since the shaft ends near the entrance of clean research areas housing Sanford Lab experiments. But for the contingent traveling to the future site of LBNF/DUNE on the other end of the mine, the journey continued, this time in an open-car train. It’s almost like a theme-park ride as the motor (as it’s usually called by Sanford staff) clips along through a tunnel, but fortunately, no drops or loop-the-loops are involved.
“The same rails now used to transport visitors and scientists were once used by the Homestake miners to remove gold from the underground facility,” says Jim Siegrist, Associate Director of High Energy Physics at the Department of Energy. “During the ride, rock bolts and protective screens attached to the walls were visible by the light of the headlamp mounted on our hardhats.”
After a 15-minute ride, the motor reached its destination and it was business as usual for a groundbreaking ceremony: speeches, shovels and smiling for photos. A fresh coat of white paint (more than 100 gallons worth) covered the wall behind the officials, creating a scene that almost could have been on the surface.
“Celebrating the moment nearly a mile underground brought home the enormity of the task and the dedication required for such precise experiments,” says South Dakota Governor Dennis Daugaard. “I know construction will take some time, but it will be well worth the wait for the Sanford Underground Research Facility to play such a vital role in one of the most significant physics experiments of our time."
What’s the big deal?
The process to reach the groundbreaking site is much more arduous than reaching most symbolic ceremonies, so what would possess two senators, two representatives, a White House representative, a governor and delegates from three international science institutions (to mention a few of the VIPs) to make the trip? Only the beginning of something huge—literally.
“This milestone represents the start of construction of the largest mega-science project in the United States,” said Mike Headley, executive director of Sanford Lab.
The 14 shovelers at the groundbreaking made the first tiny dent in the excavation site for LBNF, which will require the extraction of more than 870,000 tons of rock to create huge caverns for the DUNE detectors. These detectors will catch neutrinos sent 800 miles through the earth from Fermi National Accelerator Laboratory in the hopes that they will tell us something more about these strange particles and the universe we live in.
“We have the opportunity to see truly world-changing discovery,” said US Representative Randy Hultgren of Illinois. “This is unique—this is the picture of incredible discovery and experimentation going into the future.”
The n-Category Cafe
For a long time Blake Pollard and I have been working on ‘open’ chemical reaction networks: that is, networks of chemical reactions where some chemicals can flow in from an outside source, or flow out. The picture to keep in mind is something like this:
where the yellow circles are different kinds of chemicals and the aqua boxes are different reactions. The purple dots in the sets X and Y are ‘inputs’ and ‘outputs’, where certain kinds of chemicals can flow in or out.
Our paper on this stuff just got accepted, and it should appear soon:
- John Baez and Blake Pollard, A compositional framework for reaction networks, to appear in Reviews in Mathematical Physics.
But thanks to the arXiv, you don’t have to wait: beat the rush, click and download now!
Or at least read the rest of this blog post….
Blake and I gave talks about this stuff in Luxembourg this June, at a nice conference called Dynamics, thermodynamics and information processing in chemical networks. So, if you’re the sort who prefers talk slides to big scary papers, you can look at those:
- John Baez, The mathematics of open reaction networks.
- Blake Pollard, Black-boxing open reaction networks.
But I want to say here what we do in our paper, because it’s pretty cool, and it took a few years to figure it out. To get things to work, we needed my student Brendan Fong to invent the right category-theoretic formalism: ‘decorated cospans’. But we also had to figure out the right way to think about open dynamical systems!
In the end, we figured out how to first ‘gray-box’ an open reaction network, converting it into an open dynamical system, and then ‘black-box’ it, obtaining the relation between input and output flows and concentrations that holds in steady state. The first step extracts the dynamical behavior of an open reaction network; the second extracts its static behavior. And both these steps are functors! So, we’re applying Lawvere’s ideas on functorial semantics to chemistry.
Now Blake has passed his thesis defense based on this work, and he just needs to polish up his thesis a little before submitting it. This summer he’s doing an internship at the Princeton branch of the engineering firm Siemens. He’s working with Arquimedes Canedo on ‘knowledge representation’.
But I’m still eager to dig deeper into open reaction networks. They’re a small but nontrivial step toward my dream of a mathematics of living systems. My working hypothesis is that living systems seem ‘messy’ to physicists because they operate at a higher level of abstraction. That’s what I’m trying to explore.
Here’s the idea of our paper.
The idea
Reaction networks are a very general framework for describing processes where entities interact and transform into other entities. While they first showed up in chemistry, and are often called ‘chemical reaction networks’, they have lots of other applications. For example, a basic model of infectious disease, the ‘SIRS model’, is described by this reaction network:
$$S + I \stackrel{\iota}{\longrightarrow} 2 I \qquad I \stackrel{\rho}{\longrightarrow} R \stackrel{\lambda}{\longrightarrow} S$$
We see here three types of entity, called species:
- $S$: susceptible,
- $I$: infected,
- $R$: resistant.
We also have three ‘reactions’:
- $\iota : S + I \to 2 I$: infection, in which a susceptible individual meets an infected one and becomes infected;
- $\rho : I \to R$: recovery, in which an infected individual gains resistance to the disease;
- $\lambda : R \to S$: loss of resistance, in which a resistant individual becomes susceptible.
In general, a reaction network involves a finite set of species, but reactions go between complexes, which are finite linear combinations of these species with natural number coefficients. The reaction network is a directed graph whose vertices are certain complexes and whose edges are called reactions.
If we attach a positive real number called a rate constant to each reaction, a reaction network determines a system of differential equations saying how the concentrations of the species change over time. This system of equations is usually called the rate equation. In the example I just gave, the rate equation is
$$\begin{array}{ccl} \displaystyle{\frac{dS}{dt}} &=& r_\lambda R - r_\iota S I \\ \\ \displaystyle{\frac{dI}{dt}} &=& r_\iota S I - r_\rho I \\ \\ \displaystyle{\frac{dR}{dt}} &=& r_\rho I - r_\lambda R \end{array}$$
Here $r_\iota, r_\rho$ and $r_\lambda$ are the rate constants for the three reactions, and $S, I, R$ now stand for the concentrations of the three species, which are treated in a continuum approximation as smooth functions of time:
$$S, I, R : \mathbb{R} \to [0,\infty)$$
The rate equation can be derived from the law of mass action, which says that any reaction occurs at a rate equal to its rate constant times the product of the concentrations of the species entering it as inputs.
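To see the law of mass action in action, here is a minimal numerical sketch (not from our paper) that integrates the SIRS rate equation above using plain forward-Euler steps; the rate constants and initial concentrations are made-up illustrative values:

```python
def sirs_step(s, i, r, r_iota, r_rho, r_lam, dt):
    """One forward-Euler step of the SIRS rate equation (law of mass action)."""
    ds = r_lam * r - r_iota * s * i
    di = r_iota * s * i - r_rho * i
    dr = r_rho * i - r_lam * r
    return s + dt * ds, i + dt * di, r + dt * dr

def simulate(s0, i0, r0, r_iota=0.5, r_rho=0.1, r_lam=0.05, dt=0.01, steps=1000):
    """Integrate for `steps` Euler steps and return the final concentrations."""
    s, i, r = s0, i0, r0
    for _ in range(steps):
        s, i, r = sirs_step(s, i, r, r_iota, r_rho, r_lam, dt)
    return s, i, r

# Illustrative (made-up) initial concentrations: mostly susceptible.
s, i, r = simulate(0.99, 0.01, 0.0)
```

Since every reaction in this network preserves the number of individuals, the three right-hand sides sum to zero, so the total concentration $S + I + R$ is conserved (up to floating-point error) along the simulation.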
But a reaction network is more than just a stepping-stone to its rate equation! Interesting qualitative properties of the rate equation, like the existence and uniqueness of steady state solutions, can often be determined just by looking at the reaction network, regardless of the rate constants. Results in this direction began with Feinberg and Horn’s work in the 1960’s, leading to the Deficiency Zero and Deficiency One Theorems, and more recently to Craciun’s proof of the Global Attractor Conjecture.
In our paper, Blake and I present a ‘compositional framework’ for reaction networks. In other words, we describe rules for building up reaction networks from smaller pieces, in such a way that the rate equation of the whole can be figured out knowing those of the pieces. But this framework requires that we view reaction networks in a somewhat different way, as ‘Petri nets’.
Petri nets were invented by Carl Petri in 1939, when he was just a teenager, for the purposes of chemistry. Much later, they became popular in theoretical computer science, biology and other fields. A Petri net is a bipartite directed graph: vertices of one kind represent species, vertices of the other kind represent reactions. The edges into a reaction specify which species are inputs to that reaction, while the edges out specify its outputs.
You can easily turn a reaction network into a Petri net and vice versa. For example, the reaction network above translates into this Petri net:
Beware: there are a lot of different names for the same thing, since the terminology comes from several communities. In the Petri net literature, species are called places and reactions are called transitions. In fact, Petri nets are sometimes called ‘place-transition nets’ or ‘P/T nets’. On the other hand, chemists call them ‘species-reaction graphs’ or ‘SR-graphs’. And when each reaction of a Petri net has a rate constant attached to it, it is often called a ‘stochastic Petri net’.
While some qualitative properties of a rate equation can be read off from a reaction network, others are more easily read from the corresponding Petri net. For example, properties of a Petri net can be used to determine whether its rate equation can have multiple steady states.
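Here is one way (my own encoding for illustration, not the paper's formalism) to represent a Petri net with rate constants in code: store each reaction as an input complex and an output complex, i.e. multisets of species with natural-number coefficients, plus a rate constant, and read off the mass-action vector field from the stoichiometry:

```python
from collections import Counter

def mass_action_rhs(species, reactions, conc):
    """Right-hand side of the rate equation at concentrations `conc`.

    `reactions` is a list of (inputs, outputs, rate) triples, where
    inputs and outputs are Counters mapping species to coefficients.
    """
    rhs = {s: 0.0 for s in species}
    for inputs, outputs, rate in reactions:
        # Law of mass action: rate constant times the product of the
        # concentrations of the input species, with multiplicity.
        flux = rate
        for s, n in inputs.items():
            flux *= conc[s] ** n
        for s, n in inputs.items():
            rhs[s] -= n * flux   # inputs are consumed
        for s, n in outputs.items():
            rhs[s] += n * flux   # outputs are produced
    return rhs

# The SIRS network above as a Petri net, with made-up rate constants.
sirs = [
    (Counter({'S': 1, 'I': 1}), Counter({'I': 2}), 0.5),   # iota: infection
    (Counter({'I': 1}),         Counter({'R': 1}), 0.1),   # rho: recovery
    (Counter({'R': 1}),         Counter({'S': 1}), 0.05),  # lambda: loss of resistance
]
rhs = mass_action_rhs(['S', 'I', 'R'], sirs, {'S': 0.9, 'I': 0.1, 'R': 0.0})
```

At the chosen point this reproduces the SIRS rate equation term by term: $dS/dt = r_\lambda R - r_\iota S I$, and so on.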
Petri nets are also better suited to a compositional framework. The key new concept is an ‘open’ Petri net. Here’s an example:
The box at left is a set $X$ of ‘inputs’ (which happens to be empty), while the box at right is a set $Y$ of ‘outputs’. Both inputs and outputs are points at which entities of various species can flow in or out of the Petri net. We say the open Petri net goes from $X$ to $Y$. In our paper, we show how to treat it as a morphism $f : X \to Y$ in a category we call $\mathrm{RxNet}$.
Given an open Petri net with rate constants assigned to each reaction, our paper explains how to get its ‘open rate equation’. It’s just the usual rate equation with extra terms describing inflows and outflows. The above example has this open rate equation:
$$\begin{array}{ccr} \displaystyle{\frac{dS}{dt}} &=& - r_\iota S I - o_1 \\ \\ \displaystyle{\frac{dI}{dt}} &=& r_\iota S I - o_2 \end{array}$$
Here $o_1, o_2 : \mathbb{R} \to \mathbb{R}$ are arbitrary smooth functions describing outflows as a function of time.
Given another open Petri net $g : Y \to Z$, for example this:
it will have its own open rate equation, in this case
$$\begin{array}{ccc} \displaystyle{\frac{dS}{dt}} &=& r_\lambda R + i_2 \\ \\ \displaystyle{\frac{dI}{dt}} &=& - r_\rho I + i_1 \\ \\ \displaystyle{\frac{dR}{dt}} &=& r_\rho I - r_\lambda R \end{array}$$
Here $i_1, i_2 : \mathbb{R} \to \mathbb{R}$ are arbitrary smooth functions describing inflows as a function of time. Now for the first bit of category theory: we can compose $f$ and $g$ by gluing the outputs of $f$ to the inputs of $g$. This gives a new open Petri net $gf : X \to Z$, as follows:
But this open Petri net $gf$ has an empty set of inputs, and an empty set of outputs! So it amounts to an ordinary Petri net, and its open rate equation is a rate equation of the usual kind. Indeed, this is the Petri net we have already seen.
As it turns out, there’s a systematic procedure for combining the open rate equations for two open Petri nets to obtain that of their composite. In the example we’re looking at, we just identify the outflows of $f$ with the inflows of $g$ (setting $i_1 = o_1$ and $i_2 = o_2$) and then add the right hand sides of their open rate equations.
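The gluing procedure described above can be sketched in code. In the snippet below I simplify the port bookkeeping by labeling each internal flow directly by the species it attaches to (an assumption for illustration, not the paper's indexing): each flow then appears with a minus sign in $f$'s open rate equation and a plus sign in $g$'s, and cancels when the right-hand sides are added:

```python
R_IOTA, R_RHO, R_LAM = 0.5, 0.1, 0.05  # made-up rate constants

def f_rhs(conc, outflow):
    """Open rate equation of f (just the infection reaction), with outflows."""
    S, I = conc['S'], conc['I']
    return {'S': -R_IOTA * S * I - outflow['S'],
            'I':  R_IOTA * S * I - outflow['I']}

def g_rhs(conc, inflow):
    """Open rate equation of g (recovery and loss of resistance), with inflows."""
    S, I, R = conc['S'], conc['I'], conc['R']
    return {'S': R_LAM * R + inflow['S'],
            'I': -R_RHO * I + inflow['I'],
            'R': R_RHO * I - R_LAM * R}

def composite_rhs(conc, internal_flow):
    """Glue f to g: f's outflows become g's inflows, then add the RHS."""
    fr = f_rhs(conc, outflow=internal_flow)
    gr = g_rhs(conc, inflow=internal_flow)
    return {s: fr.get(s, 0.0) + gr.get(s, 0.0) for s in conc}

# Whatever the internal flow is, it cancels in the sum.
conc = {'S': 0.9, 'I': 0.1, 'R': 0.0}
rhs = composite_rhs(conc, internal_flow={'S': 123.0, 'I': -7.0})
```

No matter what internal flow we plug in, the composite right-hand side is the closed SIRS rate equation, which is the point of the construction: the glued ports become internal and drop out.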
The first goal of our paper is to precisely describe this procedure, and to prove that it defines a functor
$$\diamond : \mathrm{RxNet} \to \mathrm{Dynam}$$
from $\mathrm{RxNet}$ to a category $\mathrm{Dynam}$ where the morphisms are ‘open dynamical systems’. By a dynamical system, we essentially mean a vector field on $\mathbb{R}^n$, which can be used to define a system of first-order ordinary differential equations in $n$ variables. An example is the rate equation of a Petri net. An open dynamical system allows for the possibility of extra terms that are arbitrary functions of time, such as the inflows and outflows in an open rate equation.
In fact, we prove that $\mathrm{RxNet}$ and $\mathrm{Dynam}$ are symmetric monoidal categories and that $\diamond$ is a symmetric monoidal functor. To do this, we use Brendan Fong’s theory of ‘decorated cospans’.
Decorated cospans are a powerful general tool for describing open systems. A cospan in any category is just a diagram like this:
We are mostly interested in cospans in $<semantics>\mathrm{FinSet},<annotation\; encoding="application/x-tex">\{FinSet\},</annotation></semantics>$ the category of finite sets and functions between these. The set $<semantics>S<annotation\; encoding="application/x-tex">S</annotation></semantics>$, the so-called apex of the cospan, is the set of states of an open system. The sets $<semantics>X<annotation\; encoding="application/x-tex">X</annotation></semantics>$ and $<semantics>Y<annotation\; encoding="application/x-tex">Y</annotation></semantics>$ are the inputs and outputs of this system. The legs of the cospan, meaning the morphisms $<semantics>i:X\to S<annotation\; encoding="application/x-tex">i:\; X\; \backslash to\; S</annotation></semantics>$ and $<semantics>o:Y\to S,<annotation\; encoding="application/x-tex">o:\; Y\; \backslash to\; S,</annotation></semantics>$ describe how these inputs and outputs are included in the system. In our application, $<semantics>S<annotation\; encoding="application/x-tex">S</annotation></semantics>$ is the set of species of a Petri net.
For example, we may take this reaction network:
$<semantics>A+B\stackrel{\alpha}{\u27f6}2C\phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}C\stackrel{\beta}{\u27f6}D<annotation\; encoding="application/x-tex">A+B\; \backslash stackrel\{\backslash alpha\}\{\backslash longrightarrow\}\; 2C\; \backslash quad\; \backslash quad\; C\; \backslash stackrel\{\backslash beta\}\{\backslash longrightarrow\}\; D\; </annotation></semantics>$
treat it as a Petri net with $<semantics>S=\{A,B,C,D\}<annotation\; encoding="application/x-tex">S\; =\; \backslash \{A,B,C,D\backslash \}</annotation></semantics>$:
and then turn that into an open Petri net by choosing any finite sets $<semantics>X,Y<annotation\; encoding="application/x-tex">X,Y</annotation></semantics>$ and maps $<semantics>i:X\to S<annotation\; encoding="application/x-tex">i:\; X\; \backslash to\; S</annotation></semantics>$, $<semantics>o:Y\to S<annotation\; encoding="application/x-tex">o:\; Y\; \backslash to\; S</annotation></semantics>$, for example like this:
(Notice that the maps including the inputs and outputs into the states of the system need not be one-to-one. This is technically useful, but it introduces some subtleties that I don’t feel like explaining right now.)
An open Petri net can thus be seen as a cospan of finite sets whose apex $<semantics>S<annotation\; encoding="application/x-tex">S</annotation></semantics>$ is ‘decorated’ with some extra information, namely a Petri net with $<semantics>S<annotation\; encoding="application/x-tex">S</annotation></semantics>$ as its set of species. Fong’s theory of decorated cospans lets us define a category with open Petri nets as morphisms, with composition given by gluing the outputs of one open Petri net to the inputs of another.
We call the functor
$$<semantics>\diamond :\mathrm{RxNet}\to \mathrm{Dynam}<annotation\; encoding="application/x-tex">\backslash diamond:\; \{RxNet\}\; \backslash to\; \{Dynam\}</annotation></semantics>$$
gray-boxing because it hides some but not all the internal details of an open Petri net. (In the paper we draw it as a gray box, but that’s too hard here!)
We can go further and black-box an open dynamical system. This amounts to recording only the relation between input and output variables that must hold in steady state. We prove that black-boxing gives a functor
$$<semantics>\blacksquare :\mathrm{Dynam}\to \mathrm{SemiAlgRel}<annotation\; encoding="application/x-tex">\; \backslash blacksquare:\; \{Dynam\}\; \backslash to\; \{SemiAlgRel\}\; </annotation></semantics>$$
Here $<semantics>\mathrm{SemiAlgRel}<annotation\; encoding="application/x-tex">\{SemiAlgRel\}</annotation></semantics>$ is a category where the morphisms are semi-algebraic relations between real vector spaces, meaning relations defined by polynomials and inequalities. This relies on the fact that our dynamical systems involve algebraic vector fields, meaning those whose components are polynomials; more general dynamical systems would give more general relations.
That semi-algebraic relations are closed under composition is a nontrivial fact, a spinoff of the Tarski–Seidenberg theorem. This says that a subset of $<semantics>{\mathbb{R}}^{n+1}<annotation\; encoding="application/x-tex">\backslash mathbb\{R\}^\{n+1\}</annotation></semantics>$ defined by polynomial equations and inequalities can be projected down onto $<semantics>{\mathbb{R}}^{n}<annotation\; encoding="application/x-tex">\backslash mathbb\{R\}^n</annotation></semantics>$, and the resulting set is still definable in terms of polynomial identities and inequalities. This wouldn’t be true if we didn’t allow inequalities. It’s neat to see this theorem, important in mathematical logic, showing up in chemistry!
Structure of the paper
Okay, now you’re ready to read our paper! Here’s how it goes:
In Section 2 we review and compare reaction networks and Petri nets. In Section 3 we construct a symmetric monoidal category $<semantics>\mathrm{RNet}<annotation\; encoding="application/x-tex">\{RNet\}</annotation></semantics>$ where an object is a finite set and a morphism is an open reaction network (or more precisely, an isomorphism class of open reaction networks). In Section 4 we enhance this construction to define a symmetric monoidal category $<semantics>\mathrm{RxNet}<annotation\; encoding="application/x-tex">\{RxNet\}</annotation></semantics>$ where the transitions of the open reaction networks are equipped with rate constants. In Section 5 we explain the open dynamical system associated to an open reaction network, and in Section 6 we construct a symmetric monoidal category $<semantics>\mathrm{Dynam}<annotation\; encoding="application/x-tex">\{Dynam\}</annotation></semantics>$ of open dynamical systems. In Section 7 we construct the gray-boxing functor
$$<semantics>\diamond :\mathrm{RxNet}\to \mathrm{Dynam}<annotation\; encoding="application/x-tex">\backslash diamond:\; \{RxNet\}\; \backslash to\; \{Dynam\}</annotation></semantics>$$
In Section 8 we construct the black-boxing functor
$$<semantics>\blacksquare :\mathrm{Dynam}\to \mathrm{SemiAlgRel}<annotation\; encoding="application/x-tex">\backslash blacksquare:\; \{Dynam\}\; \backslash to\; \{SemiAlgRel\}</annotation></semantics>$$
We show both of these are symmetric monoidal functors.
Finally, in Section 9 we fit our results into a larger ‘network of network theories’. This is where various results in various papers I’ve been writing in the last few years start assembling to form a big picture! But this picture needs to grow….
ZapperZ - Physics and Physicists
Don Lincoln tackles this issue regarding "radiation". If you know little about the topic, this is the video to watch.
Zz.
July 30, 2017
John Baez - Azimuth
For a long time Blake Pollard and I have been working on ‘open’ chemical reaction networks: that is, networks of chemical reactions where some chemicals can flow in from an outside source, or flow out. The picture to keep in mind is something like this:
where the yellow circles are different kinds of chemicals and the aqua boxes are different reactions. The purple dots in the sets X and Y are ‘inputs’ and ‘outputs’, where certain kinds of chemicals can flow in or out.
Our paper on this stuff just got accepted, and it should appear soon:
• John Baez and Blake Pollard, A compositional framework for reaction networks, to appear in Reviews in Mathematical Physics.
But thanks to the arXiv, you don’t have to wait: beat the rush, click and download now!
Blake and I gave talks about this stuff in Luxembourg this June, at a nice conference called Dynamics, thermodynamics and information processing in chemical networks. So, if you’re the sort who prefers talk slides to big scary papers, you can look at those:
• John Baez, The mathematics of open reaction networks.
• Blake Pollard, Black-boxing open reaction networks.
But I want to say here what we do in our paper, because it’s pretty cool, and it took a few years to figure it out. To get things to work, we needed my student Brendan Fong to invent the right category-theoretic formalism: ‘decorated cospans’. But we also had to figure out the right way to think about open dynamical systems!
In the end, we figured out how to first ‘gray-box’ an open reaction network, converting it into an open dynamical system, and then ‘black-box’ it, obtaining the relation between input and output flows and concentrations that holds in steady state. The first step extracts the dynamical behavior of an open reaction network; the second extracts its static behavior. And both these steps are functors!
Lawvere had the idea that the process of assigning ‘meaning’ to expressions could be seen as a functor. This idea has caught on in theoretical computer science: it’s called ‘functorial semantics’. So, what we’re doing here is applying functorial semantics to chemistry.
Now Blake has passed his thesis defense based on this work, and he just needs to polish up his thesis a little before submitting it. This summer he’s doing an internship at the Princeton branch of the engineering firm Siemens. He’s working with Arquimedes Canedo on ‘knowledge representation’.
But I’m still eager to dig deeper into open reaction networks. They’re a small but nontrivial step toward my dream of a mathematics of living systems. My working hypothesis is that living systems seem ‘messy’ to physicists because they operate at a higher level of abstraction. That’s what I’m trying to explore.
Here’s the idea of our paper.
The idea
Reaction networks are a very general framework for describing processes where entities interact and transform into other entities. While they first showed up in chemistry, and are often called ‘chemical reaction networks’, they have lots of other applications. For example, a basic model of infectious disease, the ‘SIRS model’, is described by this reaction network:
We see here three types of entity, called species:
• $S$: susceptible,
• $I$: infected,
• $R$: resistant.
We also have three ‘reactions’:
• infection, $S + I \to 2I$, in which a susceptible individual meets an infected one and becomes infected;
• recovery, $I \to R$, in which an infected individual gains resistance to the disease;
• loss of resistance, $R \to S$, in which a resistant individual becomes susceptible.
In general, a reaction network involves a finite set of species, but reactions go between complexes, which are finite linear combinations of these species with natural number coefficients. The reaction network is a directed graph whose vertices are certain complexes and whose edges are called reactions.
If we attach a positive real number called a rate constant to each reaction, a reaction network determines a system of differential equations saying how the concentrations of the species change over time. This system of equations is usually called the rate equation. In the example I just gave, the rate equation is

$$\frac{dS}{dt} = -\alpha S I + \gamma R$$
$$\frac{dI}{dt} = \alpha S I - \beta I$$
$$\frac{dR}{dt} = \beta I - \gamma R$$

Here $\alpha$, $\beta$ and $\gamma$ are the rate constants for infection, recovery and loss of resistance, and $S$, $I$ and $R$ now stand for the concentrations of the three species, which are treated in a continuum approximation as smooth functions of time:

$$S, I, R \colon \mathbb{R} \to [0,\infty)$$
The rate equation can be derived from the law of mass action, which says that any reaction occurs at a rate equal to its rate constant times the product of the concentrations of the species entering it as inputs.
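To make the law of mass action concrete, here is a minimal sketch in Python (my illustration, not code from the paper; the rate-constant values are made up) that builds the right-hand side of the rate equation for the SIRS network:

```python
# Each reaction: (input species with multiplicities, output species, rate constant).
# The rate-constant values here are illustrative, not from any real disease model.
reactions = [
    ({"S": 1, "I": 1}, {"I": 2}, 0.3),   # infection:          S + I -> 2I
    ({"I": 1},         {"R": 1}, 0.1),   # recovery:           I -> R
    ({"R": 1},         {"S": 1}, 0.05),  # loss of resistance: R -> S
]

def rate_equation(concentrations, reactions):
    """Law of mass action: each reaction proceeds at a rate equal to its rate
    constant times the product of the concentrations of its input species."""
    ddt = {species: 0.0 for species in concentrations}
    for inputs, outputs, k in reactions:
        rate = k
        for species, mult in inputs.items():
            rate *= concentrations[species] ** mult
        for species, mult in inputs.items():
            ddt[species] -= mult * rate   # inputs are consumed
        for species, mult in outputs.items():
            ddt[species] += mult * rate   # outputs are produced
    return ddt
```

Feeding this right-hand side to any ODE integrator then simulates the rate equation.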
But a reaction network is more than just a stepping-stone to its rate equation! Interesting qualitative properties of the rate equation, like the existence and uniqueness of steady state solutions, can often be determined just by looking at the reaction network, regardless of the rate constants. Results in this direction began with Feinberg and Horn’s work in the 1970s, leading to the Deficiency Zero and Deficiency One Theorems, and more recently to Craciun’s proof of the Global Attractor Conjecture.
In our paper, Blake and I present a ‘compositional framework’ for reaction networks. In other words, we describe rules for building up reaction networks from smaller pieces, in such a way that the rate equation of the whole can be figured out from those of the pieces. But this framework requires that we view reaction networks in a somewhat different way, as ‘Petri nets’.
Petri nets were invented by Carl Petri in 1939, when he was just a teenager, for the purposes of chemistry. Much later, they became popular in theoretical computer science, biology and other fields. A Petri net is a bipartite directed graph: vertices of one kind represent species, vertices of the other kind represent reactions. The edges into a reaction specify which species are inputs to that reaction, while the edges out specify its outputs.
You can easily turn a reaction network into a Petri net and vice versa. For example, the reaction network above translates into this Petri net:
Beware: there are a lot of different names for the same thing, since the terminology comes from several communities. In the Petri net literature, species are called places and reactions are called transitions. In fact, Petri nets are sometimes called ‘place-transition nets’ or ‘P/T nets’. On the other hand, chemists call them ‘species-reaction graphs’ or ‘SR-graphs’. And when each reaction of a Petri net has a rate constant attached to it, it is often called a ‘stochastic Petri net’.
While some qualitative properties of a rate equation can be read off from a reaction network, others are more easily read from the corresponding Petri net. For example, properties of a Petri net can be used to determine whether its rate equation can have multiple steady states.
Petri nets are also better suited to a compositional framework. The key new concept is an ‘open’ Petri net. Here’s an example:
The box at left is a set X of ‘inputs’ (which happens to be empty), while the box at right is a set Y of ‘outputs’. Both inputs and outputs are points at which entities of various species can flow in or out of the Petri net. We say the open Petri net goes from X to Y. In our paper, we show how to treat it as a morphism in a category we call $\mathrm{RxNet}$.
Given an open Petri net with rate constants assigned to each reaction, our paper explains how to get its ‘open rate equation’. It’s just the usual rate equation with extra terms describing inflows and outflows. The above example has this open rate equation:
Here $o_1$ and $o_2$ are arbitrary smooth functions describing outflows as a function of time.
Given another open Petri net, for example this:
it will have its own open rate equation, in this case
Here $i_1$ and $i_2$ are arbitrary smooth functions describing inflows as a function of time. Now for a tiny bit of category theory: we can compose our two open Petri nets, say $f$ and $g$, by gluing the outputs of $f$ to the inputs of $g$. This gives a new open Petri net, as follows:
But this open Petri net has an empty set of inputs, and an empty set of outputs! So it amounts to an ordinary Petri net, and its open rate equation is a rate equation of the usual kind. Indeed, this is the Petri net we have already seen.
As it turns out, there’s a systematic procedure for combining the open rate equations for two open Petri nets to obtain that of their composite. In the example we’re looking at, we just identify the outflows of $f$ with the inflows of $g$ (setting $i_1 = o_1$ and $i_2 = o_2$) and then add the right hand sides of their open rate equations.
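A minimal sketch of this procedure (my illustration, not the paper’s code): once the glued species are given common names, each species’ time derivative in the composite is just the sum of the two nets’ contributions. Here the two open Petri nets are taken to be $A + B \to 2C$ and $C \to D$, with their right-hand sides evaluated at fixed concentrations:

```python
def compose_rhs(rhs_f, rhs_g):
    """Right-hand side of the composite open rate equation: after identifying
    the glued species, sum the two nets' contributions to each species."""
    return {s: rhs_f.get(s, 0.0) + rhs_g.get(s, 0.0)
            for s in set(rhs_f) | set(rhs_g)}

# Illustrative values: A + B -> 2C and C -> D with unit rate constants,
# evaluated at concentrations A = B = C = 1.
rhs_f = {"A": -1.0, "B": -1.0, "C": 2.0}   # from A + B -> 2C
rhs_g = {"C": -1.0, "D": 1.0}              # from C -> D
composite = compose_rhs(rhs_f, rhs_g)      # C's two contributions add up
```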
The first goal of our paper is to precisely describe this procedure, and to prove that it defines a functor
$$\diamond \colon \mathrm{RxNet} \to \mathrm{Dynam}$$

from $\mathrm{RxNet}$ to a category $\mathrm{Dynam}$ where the morphisms are ‘open dynamical systems’. By a dynamical system, we essentially mean a vector field on $\mathbb{R}^n$, which can be used to define a system of first-order ordinary differential equations in $n$ variables. An example is the rate equation of a Petri net. An open dynamical system allows for the possibility of extra terms that are arbitrary functions of time, such as the inflows and outflows in an open rate equation.
In fact, we prove that $\mathrm{RxNet}$ and $\mathrm{Dynam}$ are symmetric monoidal categories and that $\diamond$ is a symmetric monoidal functor. To do this, we use Brendan Fong’s theory of ‘decorated cospans’.
Decorated cospans are a powerful general tool for describing open systems. A cospan in any category is just a diagram like this:
We are mostly interested in cospans in $\mathrm{FinSet}$, the category of finite sets and functions between these. The set $S$, the so-called apex of the cospan, is the set of states of an open system. The sets $X$ and $Y$ are the inputs and outputs of this system. The legs of the cospan, meaning the morphisms $i \colon X \to S$ and $o \colon Y \to S$, describe how these inputs and outputs are included in the system. In our application, $S$ is the set of species of a Petri net.
For example, we may take this reaction network:

$$A + B \stackrel{\alpha}{\longrightarrow} 2C \qquad C \stackrel{\beta}{\longrightarrow} D$$

treat it as a Petri net with $S = \{A, B, C, D\}$:

and then turn that into an open Petri net by choosing any finite sets $X, Y$ and maps $i \colon X \to S$, $o \colon Y \to S$, for example like this:
(Notice that the maps including the inputs and outputs into the states of the system need not be one-to-one. This is technically useful, but it introduces some subtleties that I don’t feel like explaining right now.)
An open Petri net can thus be seen as a cospan of finite sets whose apex $S$ is ‘decorated’ with some extra information, namely a Petri net with $S$ as its set of species. Fong’s theory of decorated cospans lets us define a category with open Petri nets as morphisms, with composition given by gluing the outputs of one open Petri net to the inputs of another.
We call the functor

$$\diamond \colon \mathrm{RxNet} \to \mathrm{Dynam}$$

gray-boxing because it hides some but not all the internal details of an open Petri net. (In the paper we draw it as a gray box, but that’s too hard here!)
We can go further and black-box an open dynamical system. This amounts to recording only the relation between input and output variables that must hold in steady state. We prove that black-boxing gives a functor

$$\blacksquare \colon \mathrm{Dynam} \to \mathrm{SemiAlgRel}$$

(yeah, the box here should be black, and in our paper it is). Here $\mathrm{SemiAlgRel}$ is a category where the morphisms are semi-algebraic relations between real vector spaces, meaning relations defined by polynomials and inequalities. This relies on the fact that our dynamical systems involve algebraic vector fields, meaning those whose components are polynomials; more general dynamical systems would give more general relations.
That semi-algebraic relations are closed under composition is a nontrivial fact, a spinoff of the Tarski–Seidenberg theorem. This says that a subset of $\mathbb{R}^{n+1}$ defined by polynomial equations and inequalities can be projected down onto $\mathbb{R}^n$, and the resulting set is still definable in terms of polynomial identities and inequalities. This wouldn’t be true if we didn’t allow inequalities. It’s neat to see this theorem, important in mathematical logic, showing up in chemistry!
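For the simplest possible illustration of black-boxing (a hypothetical single reaction $C \to D$ with rate constant $\beta$, one inflow into $C$ and one outflow from $D$; this toy example is mine, not one from the paper), setting both derivatives to zero eliminates the internal variable and leaves a polynomial relation between the external flows:

```python
def steady_state(beta, inflow):
    """Steady state of dC/dt = inflow - beta*C and dD/dt = beta*C - outflow:
    both derivatives vanish, forcing C = inflow/beta and outflow = inflow."""
    C = inflow / beta
    outflow = beta * C
    return C, outflow
```

Black-boxing records only the resulting relation between the external variables, here outflow = inflow with the internal concentration $C$ eliminated: exactly the kind of semi-algebraic relation that becomes a morphism in $\mathrm{SemiAlgRel}$.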
Structure of the paper
Okay, now you’re ready to read our paper! Here’s how it goes:
In Section 2 we review and compare reaction networks and Petri nets. In Section 3 we construct a symmetric monoidal category $\mathrm{RNet}$ where an object is a finite set and a morphism is an open reaction network (or more precisely, an isomorphism class of open reaction networks). In Section 4 we enhance this construction to define a symmetric monoidal category $\mathrm{RxNet}$ where the transitions of the open reaction networks are equipped with rate constants. In Section 5 we explain the open dynamical system associated to an open reaction network, and in Section 6 we construct a symmetric monoidal category $\mathrm{Dynam}$ of open dynamical systems. In Section 7 we construct the gray-boxing functor

$$\diamond \colon \mathrm{RxNet} \to \mathrm{Dynam}$$
In Section 8 we construct the black-boxing functor

$$\blacksquare \colon \mathrm{Dynam} \to \mathrm{SemiAlgRel}$$
We show both of these are symmetric monoidal functors.
Finally, in Section 9 we fit our results into a larger ‘network of network theories’. This is where various results in various papers I’ve been writing in the last few years start assembling to form a big picture! But this picture needs to grow….
July 28, 2017
Clifford V. Johnson - Asymptotia
Well, that was nice. Was out for a walk with my son and ran into Walter Isaacson. (The Aspen Center for Physics, which I'm currently visiting, is next door to the Aspen Institute. He's the president and CEO of it.) He wrote the excellent Einstein biography that was the official book of the Genius series I worked on as science advisor. We chatted, and it turns out we have mutual friends and acquaintances.
He was pleased to hear that they got a science advisor on board and that the writers (etc) did such a good job with the science. I also learned that he has a book on Leonardo da Vinci coming out [...]
The post I Went Walking, and… appeared first on Asymptotia.
July 27, 2017
Tommaso Dorigo - Scientificblogging
July 26, 2017
Symmetrybreaking - Fermilab/SLAC
This experimental physicist has followed the ICARUS neutrino detector from Gran Sasso to Geneva to Chicago.
Physicist Angela Fava has been at the enormous ICARUS detector’s side for over a decade. As an undergraduate student in Italy in 2006, she worked on basic hardware for the neutrino hunting experiment: tightening bolts and screws, connecting and reconnecting cables, learning how the detector worked inside and out.
ICARUS (short for Imaging Cosmic And Rare Underground Signals) first began operating for research in 2010, studying a beam of neutrinos created at European laboratory CERN and launched straight through the earth hundreds of miles to the detector’s underground home at INFN Gran Sasso National Laboratory.
In 2014, the detector moved to CERN for refurbishing, and Fava relocated with it. In June ICARUS began a journey across the ocean to the US Department of Energy’s Fermi National Accelerator Laboratory to take part in a new neutrino experiment. When it arrives today, Fava will be waiting.
Fava will go through the installation process she helped with as a student, this time as an expert.
Journey to ICARUS
As a child growing up between Venice and the Alps, Fava always thought she would pursue a career in math. But during a one-week summer workshop before her final year of high school in 2000, she was drawn to experimental physics.
At the workshop, she realized she had more in common with physicists. Around the same time, she read about new discoveries related to neutral, rarely interacting particles called neutrinos. Scientists had recently been surprised to find that the extremely light particles actually had mass and that different types of neutrinos could change into one another. And there was still much more to learn about the ghostlike particles.
At the start of college in 2001, Fava immediately joined the University of Padua neutrino group. For her undergraduate thesis research, she focused on the production of hadrons, making measurements essential to studying the production of neutrinos. In 2004, her research advisor Alberto Guglielmi and his group joined the ICARUS collaboration, and she’s been a part of it ever since.
Fava jests that the relationship actually started much earlier: “ICARUS was proposed for the first time in 1983, which is the year I was born. So we are linked from birth.”
Fava remained at the University of Padua in the same research group for her graduate work. During those years, she spent about half of her time at the ICARUS detector, helping bring it to life at Gran Sasso.
Once all the bolts were tightened and the cables were attached, ICARUS scientists began to pursue their goal of using the detector to study how neutrinos change from one type to another.
During operation, Fava switched gears to create databases to store and log the data. She wrote code to automate the data acquisition system and triggering, which differentiates between neutrino events and background such as passing cosmic rays. “I was trying to take part in whatever activity was going on just to learn as much as possible,” she says.
That flexibility is a trait that Claudio Silverio Montanari, the technical director of ICARUS, praises. “She has a very good capability to adapt,” he says. “Our job, as physicists, is putting together the pieces and making the detector work.”
Changing it up
Adapting to changing circumstances is a skill both Fava and ICARUS have in common. When scientists proposed giving the detector an update at CERN and then using it in a suite of neutrino experiments at Fermilab, Fava volunteered to come along for the ride.
Once installed and operating at Fermilab, ICARUS will be used to study neutrinos from a source a few hundred meters away from the detector. In its new iteration, ICARUS will search for sterile neutrinos, a hypothetical kind of neutrino that would interact even more rarely than standard neutrinos. While hints of these low-mass particles have cropped up in some experiments, they have not yet been detected.
At Fermilab, ICARUS also won’t be buried below more than half a mile of rock, a feature of the INFN setup that shielded it from cosmic radiation from space. That means the triggering system will play an even bigger role in this new experiment, Fava says.
“We have a great challenge ahead of us.” She’s up to the task.