Particle Physics Planet


June 22, 2017

Christian P. Robert - xi'an's og

Le Monde puzzle [#1013]

A purely arithmetic Le Monde mathematical puzzle:

An operation þ applies to all pairs of natural numbers with the properties

0 þ (a+1) = (0 þ a)+1, (a+1) þ (b+1)=(a þ b)+1, 271 þ 287 = 77777, 2018 þ 39 = 2018×39

Find the smallest integer d>287 such that there exists c<d leading to c þ d = c×d, the smallest integer f>2017 such that 2017 þ f = 2017×40. Is there any known integer f such that f þ 2017 = 40×2017?

The major appeal in this puzzle (where no R programming seems to help!) is that the “data” does not completely define the operation þ! Indeed, when a<b, it is straightforward to deduce that a þ b = (0 þ 0)+b, hence solving the first two questions by deriving (0 þ 0)=270×287 [with d=2×287 and f=2017×40-270×287], but the opposite quantity b þ a is not defined, apart from (2018-39) þ 0. This however brings a resolution since

(2018-39) þ 0 = 2017×39 and (2018-39+2017) þ 2017 = 2017×39+2017 = 2017×40

leading to f=2018-39+2017=3996.
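
For what it's worth, here is a quick R check (mine, not part of the original post) of these values, using the partial definition of þ that the data implies; it only verifies that the stated solutions work, not that they are minimal.

op <- function(a, b){               # partial reconstruction of þ from the puzzle data
  zero <- 270 * 287                 # (0 þ 0) = 77777 - 287
  if (a < b) return(zero + b)       # a þ b = (0 þ 0) + b when a < b
  if (a - b == 2018 - 39) return(2017 * 39 + b)  # shifts of (2018-39) þ 0 = 2017*39
  NA                                # otherwise the data leaves þ undefined
}
op(271, 287) == 77777                            # TRUE, recovers the first datum
op(2018, 39) == 2018 * 39                        # TRUE, recovers the second datum
op(136, 2 * 287) == 136 * (2 * 287)              # TRUE, so d = 2*287 works with c = 136
op(2017, 2017 * 40 - 270 * 287) == 2017 * 40     # TRUE, f = 3190 for the second question
op(3996, 2017) == 40 * 2017                      # TRUE, f = 2018 - 39 + 2017 = 3996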


Filed under: Books, Kids Tagged: competition, Le Monde, mathematical puzzle, number theory, R, simulation

by xi'an at June 22, 2017 10:17 PM

Peter Coles - In the Dark

That Martin Rowson Cartoon..

I heard that the proprietors of The Sun and Daily Mail are getting very upset about this savage but brilliant cartoon by Martin Rowson of the Guardian in the wake of the recent terrorist attack on worshippers outside a mosque in Finsbury Park, so naturally I decided to post it here:


by telescoper at June 22, 2017 09:53 PM

Peter Coles - In the Dark

Is it hotter than normal?

I remember the summer of 1976 very well indeed, both for the heat and for the West Indies’ victory over England in the test series, including bowling England out for 71 at Old Trafford.

Protons for Breakfast Blog

This map shows how the 1981-2010 average of the maximum daily temperature in June varies across the UK.

It was hot last night. And hot today. But is this hotter than normal? Is this global warming?

Human beings have a remarkably poor perspective on such questions for two reasons.

  • Firstly we only experience the weather in a single place which may not be representative of a country or region. And certainly not the entire Earth!
  • And secondly, our memory of previous weather is poor. Can you remember whether last winter was warmer or colder than average?

Personally I thought last winter was cold. But it was not.

Another reason to love the Met Office.

The Met Office have created carefully written digests of past weather, with month-by-month summaries.

You can see their summaries here and use links from that page to chase historical month-by-month data for the UK…

View original post 748 more words


by telescoper at June 22, 2017 04:07 PM

Symmetrybreaking - Fermilab/SLAC

African School works to develop local expertise

Universities in sub-Saharan Africa are teaming up to offer free training to students interested in fundamental physics.


Last Feremenga was born in a small town in Zimbabwe. As a high school student in a specialized school in the capital, Harare, he was drawn to the study of physics.

“Physics was at the top of my list of potential academic fields to pursue,” he says.

But with limited opportunities nearby, that was going to require a lot of travel.

With help from the US Education Assistance Center at the American Embassy in Harare, Feremenga was accepted at the University of Chicago in 2007. As an undergraduate, he conducted research for a year at the nearby US Department of Energy’s Fermi National Accelerator Laboratory.

Then, through the University of Texas at Arlington, he became one of just a handful of African nationals to conduct research as a user at European research center CERN. Feremenga joined the ATLAS experiment at the Large Hadron Collider. He spent his grad-school years traveling between CERN and Argonne National Laboratory near Chicago, analyzing hundreds of terabytes of ATLAS data.

“I became interested in solving problems across diverse disciplines, not just physics,” he says.

“At CERN and Argonne, I assisted in developing a system that filters interesting events from large data-sets. I also analyzed these large datasets to find interesting physics patterns.”

Group photo of African students wearing coats and standing in front of a sign indicating the distance to different cities
The African School of Fundamental Physics and Applications

In December 2016, he received his PhD. In February 2017, he accepted a job at technology firm Digital Reasoning in Nashville, Tennessee.

To pursue particle physics, Feremenga needed to spend the entirety of his higher education outside Zimbabwe. Only one activity brought him even within the same continent as his home: the African School of Fundamental Physics and Applications. Feremenga attended the school in the program’s inaugural year at South Africa’s Stellenbosch University.

The ASP received funding for a year from France’s Centre National de la Recherche Scientifique (CNRS) in 2008. Since then, major supporters among 20 funding institutions have included the International Center for Theoretical Physics (ICTP) in Trieste, Italy; the South African National Research Foundation and Department of Science and Technology; and the South African Institute of Physics. Other major supporters have included CERN, the US National Science Foundation and the University of Rwanda.

The free, three-week ASP has been held every two years since 2010. Targeting students in sub-Saharan Africa, the school has been held in South Africa, Ghana, Senegal and Rwanda. The 2018 School is slated to take place in Namibia. Thanks to outreach efforts, applications have risen from 125 in 2010 to 439 in 2016.


The 50 to 80 students selected for the school must have a minimum of a 3-year university education in math, physics, engineering and/or computer science. The first week of the school focuses on theoretical physics; the second week, experimental physics; the third week, physics applications and high-performance computing.

School organizers stay in touch to support alumni in pursuing higher education, says organizer Ketevi Assamagan. “We maintain contact with the students and help them as much as we can,” Assamagan says. “ASP alumni are pursuing higher education in Africa, Asia, Europe and the US.”

Assamagan, originally from Togo but now a US citizen, worked on the Higgs hunt with the ATLAS experiment. He is currently at Brookhaven National Lab in New York, which supports him devoting 10 percent of his time to the ASP.

While sub-Saharan countries are just beginning to close the gap in physics, there is one well-established accelerator complex in South Africa, operated by the iThemba LABS of Cape Town and Johannesburg. The 30-year-old Separated-Sector Cyclotron, which primarily produces particle beams for nuclear research and for training at the postdoc level, is the largest accelerator of its kind in the southern hemisphere.

Jonathan Dorfan, former Director of SLAC National Accelerator Laboratory and a native of South Africa, attended the University of Cape Town. Dorfan recalls that after his Bachelor’s and Master’s degrees, the best PhD opportunities were in the US or Britain. He says he’s hopeful that that outlook could one day change.

Organizers of the African School of Fundamental Physics and Applications continue reaching out to students on the continent in the hopes that one day, someone like Feremenga won’t have to travel across the world to pursue particle physics.

by Mike Perricone at June 22, 2017 02:40 PM

Peter Coles - In the Dark

MSc Opportunities in Data-Intensive Physics and Astrophysics

Back to the office after external examining duties, I received an email this morning to say that the results have now been posted in Cambridge. I also had an email from Miss Lemon at Sussex that told me that their finalists’ results went up last Friday. We did ours in Cardiff last week. This provides me with a timely opportunity to congratulate all students at all three of these institutions – and indeed everywhere else – on their success!

It also occurred to me that, now that most students know how well they’ve done in their undergraduate degree, some may be thinking about further study at postgraduate level. It seems a good opportunity to remind potential applicants about our two brand new Masters (MSc) courses, Data-Intensive Physics and Data-Intensive Astrophysics, both taught jointly by staff in the School of Physics and Astronomy and the School of Computer Science and Informatics in a kind of major/minor combination.

The aim of these courses is twofold.

One is to provide specialist postgraduate training for students wishing to go into academic research in a ‘data-intensive’ area of physics or astrophysics, by which I mean a field which involves the analysis and manipulation of very large or complex data sets and/or the use of high-performance computing for, e.g., simulation work. There is a shortage of postgraduates with the necessary combination of skills to undertake academic research in such areas, and we plan to try to fill the gap with these courses.

The other aim is to cater for students who may not have made up their mind whether to go into academic research, but wish to keep their options open while pursuing a postgraduate course. The unique combination of physics/astrophysics and computer science will give those with these qualifications the option of either continuing into academic research or going into another sphere of data-intensive work in the wider world of Big Data.

The motivation for these courses has been further strengthened recently by the announcement earlier this year of extra funding for PhD research in Data-Intensive Physics. We’ve been selecting students for this programme and making other preparations for the arrival of the first cohort in September. We’ve had many more applicants than we can accommodate this time, but this looks set to be a growth area for the future so anyone thinking of putting themselves in a good position for a PhD in Data-Intensive Physics or Astrophysics in the future might think about preparing by taking a Masters in Data-Intensive Physics or Astrophysics now!

I just checked on our admissions system and saw, as expected, conditional offers turning into firm acceptances now that the finals exam results are being published across the country, but we still have plenty of room on these courses, so if you’re thinking about applying, please be assured that we’re still accepting new applications!

 


by telescoper at June 22, 2017 02:34 PM

Emily Lakdawalla - The Planetary Society Blog

Planetary Society volunteers host SpaceUp London 2017
Earlier this month, The Planetary Society brought together space enthusiasts at Queen Mary University of London for “SpaceUp London 2017”—the first large-scale event organized by Planetary Society volunteers in Europe.

June 22, 2017 11:00 AM

Lubos Motl - string vacua and pheno

Dwarf galaxies: gravity really, really is not entropic
Verlinde has already joined the community of fraudulent pseudoscientists who keep on "working" on something they must know to be complete rubbish

In the text Researchers Check Space-Time to See if It’s Made of Quantum Bits, the Quanta Magazine describes a fresh paper by Kris Pardo (Princeton U.)
Testing Emergent Gravity with Isolated Dwarf Galaxies
which tested some 2016 dark matter "application" of Erik Verlinde's completely wrong "entropic gravity" meme. Verlinde has irrationally linked his "entropic gravity" meme with some phenomenological, parameter-free fit for the behavior of galaxies. What a surprise, when this formula is compared to dwarf galaxies which are, you know, a bit smaller, it doesn't seem to work.

The maximum circular velocities are observed to reach up to 280 km/s but the predicted ones are at most 165 km/s. So it doesn't work, the model is falsified. This moment of the death of the model is where the discussion of the model should end and this is indeed where my discussion of the model ends.




But what I want to discuss is how much this branch of physics has been filled with garbage over the past decade or two. I don't actually believe that Erik Verlinde believes that his formulae have any reason to produce nontrivially good predictions.




What he did was just to find some approximate fit for some data about a class of galaxies – which is only good up to a factor of a few (perhaps two, perhaps ten). It's not so shocking that such a rough fit may exist because by construction, all the galaxies in his class were qualitatively similar to each other. That's why only one or a small number of parameters describes the important enough characteristics of each galaxy and everything important we observe must be roughly a function of it. When you think about it, the functional dependence simply has to be close enough to a linear or power law function for such a limited class of similar objects.

And this fit was "justified" by some extremely sloppy arguments as being connected with his older "entropic gravity" meme. It claims that gravity is an entropic force – resulting from the desire of physical systems to increase their entropy. This is obviously wrong because the gravitational motion would be unavoidably irreversible; and because all the quantum interference phenomena (e.g. with neutrons in the gravitational field) would be rendered impossible if there were a high entropy underlying the gravitational potential even in the absence of the event horizons.

So his original meme is wrong and it contradicts absolutely basic observed facts about the gravitational force between the celestial bodies, such as the fact that planetary orbits are approximately elliptical. But the idea of this Verlinde-style of work is not to care and increase the ambitions. While his theory has no chance to correctly deduce even the Kepler laws, he just remains silent about it and starts to claim that it can do everything and it may even replace dark matter if not dark matter and dark energy.

An even stinkier package of bogus claims and would-be equations is offered to "justify" this ambitious claim in the eyes of the truly gullible people, if I avoid the scientific term "imbeciles" for a while. Astrophysicists at Princeton feel the urge to spend their time with this junk. Needless to say, the "theory" is based on wrong assumptions, stupidities, and deception about connections between all these wrong claims and the actual, observed, correct claims. It has no reason to predict anything else correctly and it doesn't.

Verlinde's statement after his newest theory was basically killed is truly Smolinesque:
This is interesting and good work. [But] emergent gravity hasn’t been developed to the point where it can make specific predictions about all dwarf galaxies. I think more work needs to be done on both the observational and the theory side.
Holy cow. It's not too interesting work. It's just a straightforward paper showing that Mr Verlinde's "theory" contradicts well-known dwarf galaxy data. But even if it were interesting, it is absolutely ludicrous for Mr Verlinde to present himself as some kind of a superior judge of Pardo's work. He is just a student who tried a very stupid thing and was demolished by his professor who showed him the actual correct data.

By the way, the excuse involving the word "predictions" is cute, too. Verlinde emits fog whose purpose is to create the impression that the falsification of his delusions doesn't matter because his theory hasn't been developed to predict properties of "all dwarf galaxies". But a key point of Pardo's paper is that it doesn't matter. One may predict certain things statistically and the predicted speeds are generally too low. The mean value of the distribution is low and so are the extremes. One doesn't need to predict and test every single individual dwarf galaxy. Verlinde just wants the imbeciles to think that a test hasn't really been done yet – except that it has. And he suggests that some fix or loophole exists – except that it doesn't.

But what I hate most about this piece of crackpot work and hundreds of others is this Smolinesque sentence:
I think more work needs to be done on both the observational and the theory side.
Promises and begging for others to support this kind of junk in the future, perhaps even more so than so far.

Should more work be done on both sides? No, on the contrary, less work or no work should be done on this "theory" because it was killed; it is, on the contrary, the promising ideas (those that have predicted something to agree with something else we know) that deserve more work and elaboration in the future. Further research will surely kill it even more, but Mr Verlinde will care even less.

If Mr Verlinde were pursuing the scientific method, he would understand that his theory doesn't work, he would abandon it, stop working on it, stop trying to make others work on it, and, last but not least, he would stop receiving funding that is justified by this garbage. He should be the first man who points out that the value of this garbage is zero. But sadly enough, he doesn't have the integrity for that and the people around him don't have the intelligence to behave in the way that would actually be compatible with the rules of the scientific method.

And it's not just Smolin and Verlinde. The field has gotten crowded by dozens of sociologically high-profile fraudsters who pompously keep on working on various kinds of crackpottery that have been known to be absolutely wrong, worthless pieces of junk for many years and sometimes decades. Entropic gravity, loop quantum gravity, spin foam, causal dynamical triangulation and dozens of other cousins like that, Bohmian theory, many-worlds theory and a dozen other "interpretations", various nonsensical claims about reversibility of the macroscopic phenomena, Boltzmann brains, and the list would go on and on and on.

Lots of these people are keeping themselves influential by some connections with the media or other politically powerful interests. Imagine what it does to the students who are deciding about their research specialization. Many if not most of them are exposed to a department where a high-profile fraudster like that overshadows all the meaningful and credible research. Many of these students join this bogus research and they quickly learn what really matters in their kind of "science". And those are very different skills from the ones that their honest classmates need. In particular, they learn to do the P.R. and they learn to say "how much they care about testing" and also they learn to talk about "future work" whenever they are proved wrong in a test, which happens after every single test, and so on. There is an intense struggle going on between genuine science and bogus science and genuine science is losing the battle.

by Luboš Motl (noreply@blogger.com) at June 22, 2017 06:15 AM

June 21, 2017

Christian P. Robert - xi'an's og

non-Bayesian decision riddle

As a continuation of the Bayesian resolution of last week's riddle, I looked at a numeric resolution of the four secretaries problem, while in the train back from Rouen (and trying to block the chatter of my neighbours, a nuisance I find myself more and more sensitive to!). The target function is defined as

# Monte Carlo evaluation of the threshold strategy (b,c): always discard the
# first draw, take the second if it exceeds b times the first, otherwise take
# the third if it exceeds c times the maximum of the first two draws,
# otherwise take the fourth; 'type' selects the scoring rule
gainz=function(b,c,T=1e4,type="raw"){
  x=matrix(runif(4*T),ncol=4)        # T sequences of four U(0,1) draws
  maz=t(apply(x,1,cummax))           # maz[,k]: maximum of the first k draws
  zam=t(apply(x[,4:1],1,cummax))     # zam[,k]: maximum of the last k draws
  # "raw": average value of the selected draw, relative to the overall maximum
  if (type=="raw"){return(mean(
   ((x[,2]>b*x[,1])*x[,2]+
    (x[,2]<b*x[,1])*((x[,3]>c*maz[,2])*x[,3]+
        (x[,3]<c*maz[,2])*x[,4]))/maz[,4]))}
  # "global": frequency with which the selected draw is the overall maximum
  if (type=="global"){return(mean(
    ((x[,2]>b*x[,1])*(x[,2]==maz[,4])+
     (x[,2]<b*x[,1])*((x[,3]>c*maz[,2])*(x[,3]==maz[,4])+
         (x[,3]<c*maz[,2])*(x[,4]==maz[,4])))))}
  # "remain": as "global", but judged against the running maxima from the end (zam)
  if (type=="remain"){return(mean(
    ((x[,2]>b*x[,1])*(x[,2]==zam[,3])+
     (x[,2]<b*x[,1])*((x[,3]>c*maz[,2])*(x[,3]==zam[,2])+
          (x[,3]<c*maz[,2])*(x[,4]==zam[,2])))))}}

where the data is generated from a U(0,1) distribution as the loss functions are made scale-free by deciding to always sacrifice the first draw, x¹. This function is to be optimised in (b,c) and hence I used a plain vanilla simulated annealing R code:

# plain vanilla simulated annealing over (b,c), keeping track of the best pair visited
avemale=function(T=3e4,type){
  b=c=bs=cs=.5                       # current and best-so-far values of (b,c)
  maxtar=targe=gainz(b,c,T=1e4,type)
  temp=0.1                           # temperature, also used as random-walk scale
  for (t in 1:T){
    bp=b+runif(1,-temp,temp)         # random-walk proposal
    cp=c+runif(1,-temp,temp)
    parge=(bp>0)*(cp>0)*gainz(bp,cp,T=1e4,type)  # non-positive proposals score zero
    if (parge>maxtar){
      b=bs=bp;c=cs=cp;maxtar=targe=parge}else{
    if (runif(1)<exp((parge-targe)/temp)){       # annealed (Metropolis-type) acceptance
      b=bp;c=cp;targe=parge}}
    temp=.9999*temp}                 # geometric cooling schedule
  return(list(bs=bs,cs=cs,max=maxtar))}

with outcomes

  • b=1, c=.5, and optimum 0.8 for the raw type
  • b=c=1 and optimum 0.45 for the global type
  • b undefined, c=2/3 and optimum 0.75 for the remain type
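
(For completeness, and assuming both functions above have been sourced, figures like these can be reproduced with single calls as below; being Monte Carlo approximations, the optima fluctuate a little from run to run.)

avemale(type="raw")      # roughly b=1, c=.5, max near 0.8
avemale(type="global")   # roughly b=c=1, max near 0.45
avemale(type="remain")   # roughly c=2/3, max near 0.75, b essentially unconstrained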

Filed under: Statistics Tagged: mathematical puzzle, R, simulated annealing, The Riddler

by xi'an at June 21, 2017 10:17 PM

Christian P. Robert - xi'an's og

Emily Lakdawalla - The Planetary Society Blog

Revisiting the ice giants: NASA study considers Uranus and Neptune missions
Only one spacecraft has ever visited Uranus and Neptune: Voyager 2, in the late 1980s. A new NASA report explores the reasons to go back, and what type of mission might take us there.

June 21, 2017 11:00 AM

Peter Coles - In the Dark

LISA gets the go-ahead!

Just taking a short break from examining duties to pass on the news that the European Space Agency has selected the Laser Interferometer Space Antenna (LISA) – a gravitational wave experiment in space – for its large mission L3. This follows the detection of gravitational waves using the ground-based experiment Advanced LIGO and the success of a space-based technology demonstrator mission called LISA Pathfinder.

LISA consists of a flotilla of three spacecraft in orbit around the Sun forming the arms of an interferometer with baselines of the order of 2.5 million kilometres, much longer than the 4 km arms of Advanced LIGO. These larger dimensions make LISA much more sensitive to long-period signals. Each of the LISA spacecraft contains two telescopes, two lasers and two test masses, arranged in two optical assemblies pointed at the other two spacecraft. This forms Michelson-like interferometers, each centred on one of the spacecraft, with the platinum-gold test masses defining the ends of the arms.

Here’s an artist’s impression of LISA:

This is excellent news for the gravitational waves community, especially since LISA was threatened with the chop when NASA pulled out a few years ago. Space experiments are huge projects – and LISA is more complicated than most – so it will take some time before it actually happens. At the moment, LISA is pencilled in for launch in 2034…


by telescoper at June 21, 2017 10:32 AM

June 20, 2017

Christian P. Robert - xi'an's og

slice sampling for Dirichlet mixture process

When working with my PhD student Changye in Dauphine this morning I realised that slice sampling also applies to discrete support distributions and could even be of use in such settings. That it works is (now) straightforward in that the missing variable representation behind the slice sampler also applies to densities defined with respect to a discrete measure. That this is useful transpires from the short paper of Stephen Walker (2007) where we saw this, as Stephen relies on the slice sampler to sample from the Dirichlet mixture model by eliminating the tail problem associated with this distribution. (This paper appeared in Communications in Statistics and it is through Pati & Dunson (2014) taking advantage of this trick that Changye found out about its very existence. I may have known about it in an earlier life, but I had clearly forgotten everything!)

While the prior distribution (of the weights) of the Dirichlet mixture process is easy to generate via the stick breaking representation, the posterior distribution is trickier as the weights are multiplied by the values of the sampling distribution (likelihood) at the corresponding parameter values and they cannot be normalised. Introducing a uniform to replace all weights in the mixture with an indicator that the uniform is less than those weights corresponds to a (latent variable) completion [or a demarginalisation as we called this trick in Monte Carlo Statistical Methods]. As elaborated in the paper, the Gibbs steps corresponding to this completion are easy to implement, involving only a finite number of components, meaning the allocation to a component of the mixture can be carried out rather efficiently. Or not so efficiently, when one considers that the weights in the Dirichlet mixture are not monotone, hence a large number of them may need to be computed before picking the next index in the mixture when the uniform draw happens to be quite small.
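
To make the trick concrete, here is a minimal R sketch (mine, not Walker's code) of the allocation step in a stick-breaking mixture of Gaussians: once the uniform u_i is introduced, the new component index only has to be drawn from the finite set of components whose weights exceed u_i. The truncation at 100 sticks, the unit variance, and the toy data are assumptions made purely to keep the sketch short; a proper implementation generates sticks adaptively until the leftover mass falls below the smallest u_i.

set.seed(1)
alpha <- 1                                  # DP concentration parameter (assumed)
n <- 5
y <- rnorm(n)                               # toy observations
V <- rbeta(100, 1, alpha)                   # stick-breaking fractions
w <- V * cumprod(c(1, 1 - V))[1:100]        # mixture weights w_j
theta <- rnorm(100, 0, 2)                   # component means from a normal base measure (sd 2)

z <- sample(1:3, n, replace = TRUE)         # arbitrary current allocations
u <- runif(n, 0, w[z])                      # slice (latent uniform) variables

for (i in 1:n){
  active <- which(w > u[i])                 # finite set of admissible components
  p <- dnorm(y[i], theta[active], 1)        # likelihood of y_i under each of them
  z[i] <- active[sample.int(length(active), 1, prob = p)]
}
z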


Filed under: Books, Statistics, University life Tagged: Communications in Statistics, Dirichlet process, Gibbs sampler, intractability, MCMC, Monte Carlo Statistical Methods, normalising constant, slice sampling

by xi'an at June 20, 2017 10:17 PM

Georg von Hippel - Life on the lattice

Lattice 2017, Day Two
Welcome back to our blog coverage of the Lattice 2017 conference in Granada.

Today's first plenary session started with an experimental talk by Arantza Oyanguren of the LHCb collaboration on B decay anomalies at LHCb. LHCb have amassed a huge number of b-bbar pairs, which allow them to search for and study in some detail even the rarest of decay modes, and they are of course still collecting more integrated luminosity. Readers of this blog will likely recall the Bs → μ+μ- branching ratio result from LHCb, which agreed with the Standard Model prediction. In the meantime, there are many similar results for branching ratios that do not agree with Standard Model predictions at the 2-3σ level, e.g. the ratios of branching fractions like Br(B+→K+μ+μ-)/Br(B+→K+e+e-), in which lepton flavour universality appears to be violated. Global fits to data in these channels appear to favour the new physics hypothesis, but one should be cautious because of the "look-elsewhere" effect: when studying a very large number of channels, some will show an apparently significant deviation simply by statistical chance. On the other hand, it is very interesting that all the evidence indicating potential new physics (including the anomalous magnetic moment of the muon and the discrepancy between the muonic and electronic determinations of the proton electric charge radius) involve differences between processes involving muons and analogous processes involving electrons, an observation I'm sure model-builders have made a long time ago.

This was followed by a talk on flavour physics anomalies by Damir Bečirević. Expanding on the theoretical interpretation of the anomalies discussed in the previous talk, he explained how the data seem to indicate a violation of lepton flavour universality at the level where the Wilson coefficient C9 in the effective Hamiltonian is around zero for electrons, and around -1 for muons. Experimental data seem to favour the situation where C10=-C9, which can be accommodated in certain models with a Z' boson coupling preferentially to muons, or in certain special leptoquark models with corrections at the loop level only. Since I have little (or rather no) expertise in phenomenological model-building, I have no idea how likely these explanations are.

The next speaker was Xu Feng, who presented recent progress in kaon physics simulations on the lattice. The "standard" kaon quantities, such as the kaon decay constant or f+(0), are by now very well-determined from the lattice, with overall errors at the sub-percent level, but beyond that there are many important quantities, such as the CP-violating amplitudes in K → ππ decays, that are still poorly known and very challenging. RBC/UKQCD have been leading the attack on many of these observables, and have presented a possible solution to the ΔI=1/2 rule, which consists in non-perturbative effects making the amplitude A0 much larger relative to A2 than what would be expected from naive colour counting. Making further progress on long-distance contributions to the KL-KS mass difference or εK will require working at the physical pion mass and treating the charm quark with good control of discretization effects. For some processes, such as KL→π0ℓ+ℓ-, even knowing the sign of the coefficient would be desirable.

After the coffee break, Luigi Del Debbio talked about parton distributions in the LHC era. The LHC data reduce the error on the NNLO PDFs by around a factor of two in the intermediate-x region. Conversely, the theory errors coming from the PDFs are a significant part of the total error from the LHC on Higgs physics and BSM searches. In particular the small-x and large-x regions remain quite uncertain. On the lattice, PDFs can be determined via quasi-PDFs, in which the Wilson line inside the non-local bilinear is along a spatial direction rather than in a light-like direction. However, there are still theoretical issues to be settled in order to ensure that the renormalization and matching to the continuum really lead to the determination of continuum PDFs in the end.

Next was a talk about chiral perturbation theory results on the multi-hadron state contamination of nucleon observables by Oliver Bär. It is well known that until very recently, lattice calculations of the nucleon axial charge underestimated its value relative to experiment, and this has been widely attributed to excited-state effects. Now, Oliver has calculated the corrections from nucleon-pion states on the extraction of the axial charge in chiral perturbation theory, and has found that they actually should lead to an overestimation of the axial charge from the plateau method, at least for source-sink separations above 2 fm, where ChPT is applicable. Similarly, other nucleon charges should be overestimated by 5-10%. Of course, nobody is currently measuring in that distance regime, and so it is quite possible that higher-order corrections or effects not captured by ChPT overcompensate this and lead to an underestimation, which would however mean that there is some intermediate source-sink separation for which one gets the experimental result by accident, as it were.

The final plenary speaker of the morning was Chia-Cheng Chang, who discussed progress towards a precise lattice determination of the nucleon axial charge, presenting the results of the CalLAT collaboration from using what they refer to as the Feynman-Hellmann method, a novel way of implementing what is essentially the summation method through ideas based in the Feynman-Hellmann theorem (but which doesn't involve simulating with a modified action, as a straightforward application of the Feynman-Hellmann theorem would demand).

After the lunch break, there were parallel sessions, and in the evening, the poster session took place. A particularly interesting and entertaining contribution was a quiz about women's contributions to physics and computer science, the winner of which will receive a bottle of wine and a book.

by Georg v. Hippel (noreply@blogger.com) at June 20, 2017 08:27 PM

Georg von Hippel - Life on the lattice

Lattice 2017, Day One
Hello from Granada and welcome to our coverage of the 2017 lattice conference.

After welcome addresses by the conference chair, a representative of the government agency in charge of fundamental research, and the rector of the university, the conference started off in a somewhat sombre mood with a commemoration of Roberto Petronzio, a pioneer of lattice QCD, who passed away last year. Giorgio Parisi gave a memorial talk summarizing Roberto's many contributions to the development of the field, from his early work on perturbative QCD and the parton model, through his pioneering contributions to lattice QCD back in the days of small quenched lattices, to his recent work on partially twisted boundary conditions and on isospin breaking effects, which is very much at the forefront of the field at the moment, not to omit Roberto's role as director of the Italian INFN in politically turbulent times.

This was followed by a talk by Martin Lüscher on stochastic locality and master-field simulations of very large lattices. The idea of a master-field simulation is based on the observation of volume self-averaging, i.e. that the variance of volume-averaged quantities is much smaller on large lattices (intuitively, this would be because an infinitely-extended properly thermalized lattice configuration would have to contain any possible finite sub-configuration with a frequency corresponding to its weight in the path integral, and that thus a large enough typical lattice configuration is itself a sort of ensemble). A master field is then a huge (e.g. 256⁴) lattice configuration, on which volume averages of quantities are computed, which have an expectation value equal to the QCD expectation value of the quantity in question, and a variance which can be estimated using a double volume sum that is doable using an FFT. To generate such huge lattices, algorithms with global accept-reject steps (like HMC) are unsuitable, because ΔH grows with the square root of the volume, but stochastic molecular dynamics (SMD) can be used, and it has been rigorously shown that for short-enough trajectory lengths SMD converges to a unique stationary state even without an accept-reject step.

After the coffee break, yet another novel simulation method was discussed by Ignacio Cirac, who presented techniques to perform quantum simulations of QED and QCD on a lattice. While quantum computers of the kind that would render RSA-based public-key cryptography irrelevant remain elusive at the moment, the idea of a quantum simulator (which is essentially an analogue quantum computer), which goes back to Richard Feynman, can already be realized in practice: optical lattices allow trapping atoms on lattice sites while fine-tuning their interactions so as to model the couplings of some other physical system, which can thus be simulated. The models that are typically simulated in this way are solid-state models such as the Hubbard model, but it is of course also possible to set up a quantum simulator for a lattice field theory that has been formulated in the Hamiltonian framework. In order to model a gauge theory, it is necessary to model the gauge symmetry by some atomic symmetry such as angular momentum conservation, and this has been done at least in theory for QED and QCD. The Schwinger model has been studied in some detail. The plaquette action for d>1+1 additionally requires a four-point interaction between the atoms modelling the link variables, which can be realized using additional auxiliary variables, and non-abelian gauge groups can be encoded using multiple species of bosonic atoms. A related theoretical tool that is still in its infancy, but shows significant promise, is the use of tensor networks. This is based on the observation that for local Hamiltonians the entanglement between a region and its complement grows only as the surface of the region, not its volume, so only a small corner of the total Hilbert space is relevant; this allows one to write the coefficients of the wavefunction in a basis of local states as a contraction of tensors, from which classical algorithms can be derived that scale much better than the exponential growth in the number of variables one would naively expect. Again, the method has been successfully applied to the Schwinger model, but higher dimensions are still challenging, because the scaling, while not exponential, still becomes very bad.

Staying with the topic of advanced simulation techniques, the next talk was Leonardo Giusti speaking about the block factorization of fermion determinants into local actions for multi-boson fields. By decomposing the lattice into three pieces, of which the middle one separates the other two by a distance Δ large enough to render exp(-MπΔ) small, and by applying a domain decomposition similar to the one used in Lüscher's DD-HMC algorithm to the Dirac operator, Leonardo and collaborators have been able to derive a multi-boson algorithm that allows one to perform multilevel integration with dynamical fermions. For hadronic observables, the quark propagator also needs to be factorized, which Leonardo et al. also have achieved, making a significant decrease in statistical error possible.

After the lunch break there were parallel sessions, in one of which I gave my own talk and another one of which I chaired, thus finishing all of my duties other than listening (and blogging) on day one.

In the evening, there was a reception followed by a special guided tour of the truly stunning Alhambra (which incidentally contains a great many colourful - and very tasteful - lattices in the form of ornamental patterns).

by Georg v. Hippel (noreply@blogger.com) at June 20, 2017 08:26 PM

John Baez - Azimuth

The Theory of Devices

I’m visiting the University of Genoa and talking to two category theorists: Marco Grandis and Giuseppe Rosolini. Grandis works on algebraic topology and higher categories, while Rosolini works on the categorical semantics of programming languages.

Yesterday, Marco Grandis showed me a fascinating paper by his thesis advisor:

• Gabriele Darbo, Aspetti algebrico-categoriali della teoria dei dispositivi, Symposia Mathematica IV (1970), Istituto Nazionale di Alta Matematica, 303–336.

It’s closely connected to Brendan Fong’s thesis, but also different—and, of course, much older. According to Grandis, Darbo was the first person to work on category theory in Italy. He’s better known for other things, like ‘Darbo’s fixed point theorem’, but this piece of work is elegant, and, it seems to me, strangely ahead of its time.

The paper’s title translates as ‘Algebraic-categorical aspects of the theory of devices’, and its main concept is that of a ‘universe of devices’: a collection of devices of some kind that can be hooked up using wires to form more devices of this kind. Nowadays we might study this concept using operads—but operads didn’t exist in 1970, and Darbo did quite fine without them.

The key is the category \mathrm{FinCorel}, which has finite sets as objects and ‘corelations’ as morphisms. I explained corelations here:

Corelations in network theory, 2 February 2016.

Briefly, a corelation from a finite set X to a finite set Y is a partition of the disjoint union of X and Y. We can get such a partition from a bunch of wires connecting points of X and Y. The idea is that two points lie in the same part of the partition iff they’re connected, directly or indirectly, by a path of wires. So, if we have some wires like this:

they determine a corelation like this:

There’s an obvious way to compose corelations, giving a category \mathrm{FinCorel}.
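
For readers who like to compute, here is a small sketch (mine, not Darbo's or Fong's) of one standard way to compose corelations: view each corelation as a list of blocks, glue the two partitions along Y with a union-find on the disjoint union X + Y + Z, and then restrict the resulting partition to X + Z. The labels and helper names here are purely illustrative.

# a corelation is represented as a list of blocks (character vectors of labels);
# elements not listed in any block are treated as singletons
find <- function(parent, i){
  while (parent[i] != i) i <- parent[i]
  i
}
unite <- function(parent, i, j){
  parent[find(parent, i)] <- find(parent, j)
  parent
}
compose_corelations <- function(f, g, X, Y, Z){
  labels <- c(X, Y, Z)
  parent <- seq_along(labels)
  for (block in c(f, g))                       # merge everything wired together
    for (k in seq_len(length(block) - 1))
      parent <- unite(parent, match(block[k], labels), match(block[k + 1], labels))
  roots  <- sapply(seq_along(labels), function(i) find(parent, i))
  blocks <- split(labels, roots)               # partition of X + Y + Z
  blocks <- lapply(blocks, function(b) b[b %in% c(X, Z)])  # forget the middle set Y
  Filter(length, blocks)                       # the composite corelation on X + Z
}

# toy example: X = {x1, x2}, Y = {y1}, Z = {z1};
# f wires x1 to y1 and leaves x2 alone, g wires y1 to z1
f <- list(c("x1", "y1"))
g <- list(c("y1", "z1"))
compose_corelations(f, g, c("x1", "x2"), "y1", "z1")
# expected blocks: {x1, z1} and the singleton {x2}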

Gabriele Darbo doesn’t call them ‘corelations’: he calls them ‘trasduttori’. A literal translation might be ‘transducers’. But he’s definitely talking about corelations, and like Fong he thinks they are basic for studying ways to connect systems.

Darbo wants a ‘universe of devices’ to assign to each finite set X a set D(X) of devices having X as their set of ‘terminals’. Given a device in D(X) and a corelation f \colon X \to Y, thought of as a bunch of wires, he wants to be able to attach these wires to the terminals in X and get a new device with Y as its set of terminals. Thus he wants a map D(f): D(X) \to D(Y). If you draw some pictures, you’ll see this should give a functor

D : \mathrm{FinCorel} \to \mathrm{Set}

Moreover, if we have device with a set X of terminals and a device with a set Y of terminals, we should be able to set them side by side and get a device whose set of terminals form the set X + Y, meaning the disjoint union of X and Y. So, Darbo wants to have maps

\delta_{X,Y} : D(X) \times D(Y) \to D(X + Y)

If you draw some more pictures you can convince yourself that \delta should make D into a lax symmetric monoidal functor… if you’re one of the lucky few who knows what that means. If you’re not, you can look it up in many places, such as Section 1.2 here:

• Brendan Fong, The Algebra of Open and Interconnected Systems, Ph.D. thesis, University of Oxford, 2016. (Blog article here.)

Darbo does not mention lax symmetric monoidal functors, perhaps because such concepts were first introduced by Mac Lane only in 1968. But as far as I can tell, Darbo’s definition is almost equivalent to this:

Definition. A universe of devices is a lax symmetric monoidal functor D \colon \mathrm{FinCorel} \to \mathrm{Set}.

One difference is that Darbo wants there to be exactly one device with no terminals. Thus, he assumes D(\emptyset) is a one-element set, say 1, while the definition above would only demand the existence of a map \delta \colon 1 \to D(\emptyset) obeying a couple of axioms. That gives a particular ‘favorite’ device with no terminals. I believe we get Darbo’s definition from the above one if we further assume \delta is the identity map. This makes sense if we take the attitude that ‘a device is determined by its observable behavior’, but not otherwise. This attitude is called ‘black-boxing’.

Darbo does various things in his paper, but the most exciting to me is his example connected to linear electrical circuits. He defines, for any pair of objects V and I in an abelian category C, a particular universe of devices. He calls this the universe of linear devices having V as the object of potentials and I as the object of currents.

If you don’t like abelian categories, think of C as the category of finite-dimensional real vector spaces, and let V = I = \mathbb{R}. Electric potential and electric current are described by real numbers so this makes sense.

The basic idea will be familiar to Fong fans. In an electrical circuit made of purely conductive wires, when two wires merge into one we add the currents to get the current on the wire going out. When one wire splits into two we duplicate the potential to get the potentials on the wires going out. Working this out further, any corelation f : X \to Y between finite sets determines two linear relations, one

f_* : I^X \rightharpoonup I^Y

relating the currents on the wires coming in to the currents on the wires going out, and one

f^* : V^Y \rightharpoonup V^X

relating the potentials on the wires going out to the potentials on the wires coming in. Here I^X is the direct sum of X copies of I, and so on; the funky arrow indicates that we have a linear relation rather than a linear map. Note that f_* goes forward while f^* goes backward; this is mainly just conventional, since you can turn linear relations around, but we’ll see it’s sort of nice.

If we let \mathrm{Rel}(A,B) be the set of linear relations between two objects A, B \in C, we can use the above technology to get a universe of devices where

D(X) = \mathrm{Rel}(V^X, I^X)

In other words, a device of this kind is simply a linear relation between the potentials and currents at its terminals!

How does D get to be a functor D : \mathrm{FinCorel} \to \mathrm{Set}? That’s pretty easy. We’ve defined it on objects (that is, finite sets) by the above formula. So, suppose we have a morphism (that is, a corelation) f \colon X \to Y. How do we define D(f) : D(X) \to D(Y)?

To answer this question, we need a function

D(f) : \mathrm{Rel}(V^X, I^X) \to \mathrm{Rel}(V^Y, I^Y)

Luckily, we’ve got linear relations

f_* : I^X \rightharpoonup I^Y

and

f^* : V^Y \rightharpoonup V^X

So, given any linear relation R \in \mathrm{Rel}(V^X, I^X), we just define

D(f)(R) = f_* \circ R \circ f^*

Voilà!
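
Here is a tiny worked example of this formula (mine, not Darbo’s), in the concrete case V = I = \mathbb{R} mentioned above. Take X = \{1,2\}, Y = \{1\}, and let f \colon X \to Y be the corelation with a single part, wiring both terminals of X to the lone terminal of Y. Then f^* \colon V^Y \rightharpoonup V^X duplicates the potential, v \mapsto (v,v), while f_* \colon I^X \rightharpoonup I^Y adds the currents, (i_1, i_2) \mapsto i_1 + i_2. If R \in \mathrm{Rel}(V^X, I^X) is the relation of an ideal resistor of resistance r, say v_1 - v_2 = r i_1 together with i_1 + i_2 = 0, then composing forces i_1 = i_2 = 0, so

D(f)(R) = f_* \circ R \circ f^* = \{ (v, 0) : v \in V \}

In words: short-circuiting the two terminals of a resistor gives a device through whose single external terminal no net current can flow, whatever its potential.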

People who have read Fong’s thesis, or my paper with Blake Pollard on reaction networks:

• John Baez and Blake Pollard, A compositional framework for reaction networks.

will find many of Darbo’s ideas eerily similar. In particular, the formula

D(f)(R) = f_* \circ R \circ f^*

appears in Lemma 16 of my paper with Blake, where we are defining a category of open dynamical systems. We prove that D is a lax symmetric monoidal functor, which is just what Darbo proved—though in a different context, since our R is not linear like his, and for a different purpose, since he’s trying to show D is a ‘universe of devices’, while we’re trying to construct the category of open dynamical systems as a ‘decorated cospan category’.

In short: if this work of Darbo had become more widely known, the development of network theory could have been sped up by three decades! But there was less interest in a general theory of networks at the time, lax monoidal functors were brand new, operads unknown… and, sadly, few mathematicians read Italian.

Darbo has other papers, and so do his students. We should read them and learn from them! Here are a few open-access ones:

• Franco Parodi, Costruzione di un universo di dispositivi non lineari su una coppia di gruppi abeliani, Rendiconti del Seminario Matematico della Università di Padova 58 (1977), 45–54.

• Franco Parodi, Categoria degli universi di dispositivi e categoria delle T-algebre, Rendiconti del Seminario Matematico della Università di Padova 62 (1980), 1–15.

• Stefano Testa, Su un universo di dispositivi monotoni, Rendiconti del Seminario Matematico della Università di Padova 65 (1981), 53–57.

At some point I will scan in G. Darbo’s paper and make it available here.


by John Baez at June 20, 2017 02:45 PM

Peter Coles - In the Dark

Astronomy Look-alikes, No. 98

I’ve been reminded that it has been a while since I last posted an Astronomy Look-alike so I was wondering if anyone else has noticed the spectacular similarity between Professor Will Percival of the University of Portsmouth and TV presenter Richard Osman? Is it pointless to ask whether they might possibly be related?



by telescoper at June 20, 2017 02:32 PM

Symmetrybreaking - Fermilab/SLAC

A speed trap for dark matter, revisited

A NASA rocket experiment could use the Doppler effect to look for signs of dark matter in mysterious X-ray emissions from space.

Image of stars and reddish, glowing clouds of dust at the center of the Milky Way Galaxy

Researchers who hoped to look for signs of dark matter particles in data from the Japanese ASTRO-H/Hitomi satellite suffered a setback last year when the satellite malfunctioned and died just a month after launch.

Now the idea may get a second chance.

In a new paper, published in Physical Review D, scientists from the Kavli Institute for Particle Astrophysics and Cosmology (KIPAC), a joint institute of Stanford University and the Department of Energy’s SLAC National Accelerator Laboratory, suggest that their novel search method could work just as well with the future NASA-funded Micro-X rocket experiment—an X-ray space telescope attached to a research rocket.

The search method looks for a difference in the Doppler shifts produced by movements of dark matter and regular matter, says Devon Powell, a graduate student at KIPAC and lead author on the paper with co-authors Ranjan Laha, Kenny Ng and Tom Abel.

The Doppler effect is a shift in the frequency of sound or light as its source moves toward or away from an observer. The rising and falling pitch of a passing train whistle is a familiar example, and the radar guns that cops use to catch speeders also work on this principle.

The dark matter search technique, called Dark Matter Velocity Spectroscopy, is like setting up a speed trap to “catch” dark matter.

“We think that dark matter has zero averaged velocity, while our solar system is moving,” says Laha, who is a postdoc at KIPAC.  “Due to this relative motion, the dark matter signal would experience a Doppler shift. However, it would be completely different than the Doppler shifts from signals coming from astrophysical objects because those objects typically co-rotate around the center of the galaxy with the sun, and dark matter doesn’t. This means we should be able to distinguish the Doppler signatures from dark and regular matter.”

Researchers would look for subtle frequency shifts in measurements of a mysterious X-ray emission. This 3500-electronvolt (3.5 keV) emission line, observed in data from the European XMM-Newton spacecraft and NASA’s Chandra X-ray Observatory, is hard to explain with known astrophysical processes. Some say it could be a sign of hypothetical dark matter particles called sterile neutrinos decaying in space.

“The challenge is to find out whether the X-ray line is due to dark matter or other astrophysical sources,” Powell says. “We’re looking for ways to tell the difference.”

The idea for this approach is not new: Laha and others described the method in a research paper last year, in which they suggested using X-ray data from Hitomi to do the Doppler shift comparison. Although the spacecraft sent some data home before it disintegrated, it did not see any sign of the 3.5-keV signal, casting doubt on the interpretation that it might be produced by the decay of dark matter particles. The Dark Matter Velocity Spectroscopy method was never applied, and the issue was never settled.  

In the future Micro-X experiment, a rocket will catapult a small telescope above Earth’s atmosphere for about five minutes to collect X-ray signals from a specific direction in the sky. The experiment will then parachute back to the ground to be recovered. The researchers hope that Micro-X will do several flights to set up a speed trap for dark matter.

Illustration of a research rocket catapulting an experiment above Earth’s atmosphere
Jeremy Stoller, NASA

“We expect the energy shifts of dark matter signals to be very small because our solar system moves relatively slowly,” Laha says. “That’s why we need cutting-edge instruments with superb energy resolution. Our study shows that Dark Matter Velocity Spectroscopy could be successfully done with Micro-X, and we propose six different pointing directions away from the center of the Milky Way.”
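
To get a feel for the numbers (a back-of-the-envelope sketch of mine, not a calculation from the paper), the shift is of order (v/c) times the line energy; taking a round 230 km/s for the Sun's orbital speed about the Galactic centre gives a displacement of only a few electronvolts on a 3500 eV line, which is why eV-scale energy resolution is needed:

E_line  <- 3500      # line energy in eV
v_sun   <- 230e3     # assumed solar orbital speed in m/s (round number, not from the paper)
c_light <- 3e8       # speed of light in m/s
(v_sun / c_light) * E_line   # ~2.7 eV Doppler displacement of the 3.5 keV line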

Esra Bulbul from the MIT Kavli Institute for Astrophysics and Space Research, who wasn’t involved in the study, says, “In the absence of Hitomi observations, the technique outlined for Micro-X provides a promising alternative for testing the dark matter origin of the 3.5-keV line.” But Bulbul, who was the lead author of the paper that first reported the mystery X-ray signal in superimposed data of 73 galaxy clusters, also points out that the Micro-X analysis would be limited to our own galaxy.

The feasibility study for Micro-X is more detailed than the prior analysis for Hitomi. “The earlier paper used certain approximations—for instance, that the dark matter halos of galaxies are spherical, which we know isn’t true,” Powell says. “This time we ran computer simulations without this approximation and predicted very precisely what Micro-X would actually see.”

The authors say their method is not restricted to the 3.5-keV line and can be applied to any sharp signal potentially associated with dark matter. They hope that Micro-X will do the first practice test. Their wish might soon come true.

“We really like the idea presented in the paper,” says Enectali Figueroa-Feliciano, the principal investigator for Micro-X at Northwestern University, who was not involved in the study. “We would look at the center of the Milky Way first, where dark matter is most concentrated. If we saw an unidentified line and it were strong enough, looking for Doppler shifts away from the center would be the next step.”  

by Manuel Gnida at June 20, 2017 01:00 PM

Lubos Motl - string vacua and pheno

Overbye on "ominous silence" in particle physics
Dennis Overbye is a top science writer and his new text in the New York Times,
Yearning for New Physics at CERN, in a Post-Higgs Way
is also pretty good. He quotes various particle physicists, including fans of SUSY and critics of SUSY, and those give him different ideas about the probability that new physics is going to be observed in coming years, but he decides to make a conclusion in one direction, anyway: the silence in particle physics is ominous, the subtitle says.

In particular, Overbye talks about the silence after the \(750\GeV\) hint faded away. It's been just a year or so since this bump disappeared. Should one be using ominous words such as "ominous" when there has been no similar intense wave of activity for a year? I don't think so.




There are at least three problems that I have with similar statements about frustration, depression, confusion, and ominous signs in particle physics of recent years:
  • it is irrational to single out the present
  • the success of the existing effective theory should not be described as "confusion" but as the opposite of it, a lack of confusion
  • in general, it is irrational to link one's mood to the success of older or newer theories
What do I mean?




First, it's silly to say that 2017 is particularly frustrating because no new particles have appeared. The known list of elementary particles was really settled by the mid 1970s or so. It became clear in that decade that there had to be three generations of quarks and leptons, gluons, photons, W-bosons, and Z-bosons, and the Higgs. Most of them were already discovered in that decade. The late arrivals were the top quark, which could have been lighter but wasn't and was only discovered in 1995; and the Higgs boson, whose existence was obvious from the 1960s but which was only discovered in 2012.

So the absence of new elementary particles that are forced on us by experiments or indisputable arguments in theory isn't a new phenomenon. It's been the default state for some 40 years – and in many previous long periods, too. In particular, I haven't really seen a paradigm shift that would add a new particle to our list – I mean the unquestionably demonstrated list – in my whole life. So I just don't understand the calls for frustration. If people like me should be frustrated by the null results, our whole lives should be frustrating. Well, it may surely be said to be true from some viewpoints ;-) but it doesn't have to be. One may adopt a neutral or optimistic psychological attitude, too.

The choice of the "bad mood" is unjustifiable. The "bad mood" isn't science, it's just some emotions. As a Czech guy interested in politics, I can't resist mentioning that the first president of modern Czechia Václav Havel began to talk about the "bad mood" ("blbá nálada") in the society starting from a talk he gave in 1997, i.e. exactly two decades ago. The problem was that these complaints about the political situation weren't an impartial, rational analysis in any sense. Instead, it was a political campaign because he was really promoting the bad mood.

But the word "confusing" is even more wrong. When the Standard Model works perfectly, there is no real confusion when it comes to predictions of particle physics experiments – and predictions are what science is almost all about. What does it mean to be confused? Yes, the Standard Model cannot be a complete theory and its parameters look fine-tuned or unnatural according to some basic guesses about "what should be beyond the Standard Model". But this observation is basically equivalent to the statement that we don't have a complete and completely proven theory of everything yet. Without such a theory, you just can't reliably determine the probability that some coupling constants are large or small etc.

We have never had a complete and completely proven theory of everything, however! So if the confusion we talk about is equivalent to the absence or incomplete status of our TOE, it's the minimum confusion that has always existed in science. The word "confusing" is just self-evidently wrong. I feel like Sheldon who is correcting Penny when she says that something is typical. On the contrary, it's an almost textbook example of something that is atypical, Sheldon once correctly pointed out!

But perhaps most importantly, I think that this whole identification of "good mood in physics" and even "the likability of physics in the public" with the "rate at which paradigms are shifting" is utterly irrational, too. As long as people are impartial about their expectations, there shouldn't exist any correlation like that. In fact, around the year 1900, the correlations that were used to say bad things about physics were upside down.

The period around 1900 was said to be a "frustrating period in physics" exactly because the framework of classical physics, which had been as well established as the Standard Model (and GR) is established today, ceased to work well. And indeed, it was more fashionable to talk badly about physics than it had been a few decades earlier. But with hindsight, the years around 1900 seem hugely important and, with some emotion, absolutely wonderful. Various manifestations of radioactivity, X-rays, and ionizing radiation composed of elementary particles were discovered. And in 1900, Planck figured out how to calculate the black-body curve assuming that the energy of the electromagnetic radiation was quantized. The first steps towards quantum mechanics were made. And in 1905, Einstein had his miraculous year. Relativity was found, the atomic theory was settled thanks to the new theory of Brownian motion, the theory of the photoelectric effect increased the seriousness with which photons were taken, and so on.

Physics was getting "bad press" because the old picture of physics didn't seem sufficient. These days, physics is getting "bad press" because the old picture of physics seems very sufficient. These biases are opposite to each other. Both of these biases are just wrong in physics – and in science. Physics and science have the purpose of understanding Nature as correctly, accurately, and deeply as possible. Some theories or ideas may have a very wide range of validity. If it is shown that this is the case, it is how Nature works. Some theories break down. When they do, it's also wonderful: we're learning about something new and we have to pick the right replacements.

But it is not the goal of physics or science to maximize the frequency of paradigm shifts or make a Trotskyist permanent revolution. In the same way, it is not the goal of physics or science to fanatically preserve some old ideas. It seems that the people – journalists, laymen, but also some physicists who basically cooperate with them – are introducing these utterly irrational biases. But these biases don't define science. They define ideologies such as Trotskyism or the Inquisition. If you do physics well, you look for the relevant evidence, arguments, and calculations that allow you to find correct answers, predictions, and explanations of wide and interesting sets of physical phenomena.

The correct answers may look more conservative or more quickly evolving, but the quality isn't measured by this speed of revolution, with either sign. The quality is measured by the correctness, accuracy, and universality of the predictions and the consistency, coherence, elegance, simplicity, and unified spirit of the theories that are used to make these predictions. The Standard Model may work for proton-proton collisions up to \(13\TeV\) or up to \(1,300\TeV\). We don't know the answer. None of these answers should bring bad press to scientific research. It was Nature who decided the answer, and before we know this answer, we have the duty to be impartial towards the possible options. If the Standard Model works up to \(1,300\TeV\), then that is how Nature works. It means that most of science, when done correctly, will obviously focus on questions other than the same experiments at slightly higher energies in the coming centuries.

One strategy for making progress in physics is to keep doing similar experiments while gradually increasing the energy and luminosity. I still think it's the most sensible strategy for truly experimental particle physics. But the truly experimental approach just isn't the only one. Theorists are also learning about the coherent logic underlying particle physics by employing characteristically theoretical approaches. There's nothing wrong with them. Many of these ideas are absolutely wonderful. It remains to be seen which of the strategies and ideas will gain in importance in the next 10 or 50 years. Whatever the answer is, physics may be done correctly and cleverly, be wonderful, and deserve good press.

by Luboš Motl (noreply@blogger.com) at June 20, 2017 09:00 AM

The n-Category Cafe

The Geometric McKay Correspondence (Part 1)

The ‘geometric McKay correspondence’, actually discovered by Patrick du Val in 1934, is a wonderful relation between the Platonic solids and the ADE Dynkin diagrams. In particular, it sets up a connection between two of my favorite things, the icosahedron:

and the \mathrm{E}_8 Dynkin diagram:

When I recently gave a talk on this topic, I realized I didn’t understand it as well as I’d like. Since then I’ve been making progress with the help of this book:

  • Alexander Kirillov Jr., Quiver Representations and Quiver Varieties, AMS, Providence, Rhode Island, 2016.

I now think I glimpse a way forward to a very concrete and vivid understanding of the relation between the icosahedron and E8. It’s really just a matter of taking the ideas in this book and working them out concretely in this case. But it takes some thought, at least for me. I’d like to enlist your help.

The rotational symmetry group of the icosahedron is a subgroup of \mathrm{SO}(3) with 60 elements, so its double cover up in \mathrm{SU}(2) has 120. This double cover is called the binary icosahedral group, but I’ll call it \Gamma for short.

This group \Gamma is the star of the show, the link between the icosahedron and E8. To visualize this group, it’s good to think of \mathrm{SU}(2) as the unit quaternions. This lets us think of the elements of \Gamma as 120 points in the unit sphere in 4 dimensions. They are in fact the vertices of a 4-dimensional regular polytope, which looks like this:

It’s called the 600-cell.
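
As a concrete aside (my own sketch, not part of the original post), one can write down these 120 points explicitly using the standard coordinates for the 600-cell vertices and check numerically that, read as quaternions, they are closed under multiplication, i.e. that they really form a group:

# A small numerical check (my own sketch): the 120 standard 600-cell vertices,
# read as quaternions (w, x, y, z), form a group under quaternion multiplication
# -- the binary icosahedral group.
import itertools
import numpy as np

phi = (1 + np.sqrt(5)) / 2   # golden ratio

def qmul(p, q):
    """Hamilton product of quaternions given as (w, x, y, z) arrays."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def even_perms(n=4):
    """Even permutations of n symbols, detected by counting inversions."""
    for p in itertools.permutations(range(n)):
        inversions = sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
        if inversions % 2 == 0:
            yield p

verts = []
for i in range(4):                                        # 8 of type (+-1, 0, 0, 0)
    for s in (1.0, -1.0):
        v = np.zeros(4); v[i] = s
        verts.append(v)
for signs in itertools.product((0.5, -0.5), repeat=4):    # 16 of type (+-1/2, +-1/2, +-1/2, +-1/2)
    verts.append(np.array(signs))
base = np.array([phi, 1.0, 1 / phi, 0.0]) / 2             # 96: even permutations of
for s in itertools.product((1, -1), repeat=3):            # (+-phi, +-1, +-1/phi, 0)/2
    signed = base * np.array(list(s) + [1])
    for p in even_perms():
        verts.append(signed[list(p)])

G = np.array(verts)
assert len(G) == 120 and np.allclose(np.linalg.norm(G, axis=1), 1.0)

def in_G(q):
    return np.min(np.linalg.norm(G - q, axis=1)) < 1e-9

# closure under multiplication: every product of two vertices is again a vertex
assert all(in_G(qmul(a, b)) for a in G for b in G)
print("120 unit quaternions, closed under multiplication")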

Since \Gamma is a subgroup of \mathrm{SU}(2) it acts on \mathbb{C}^2, and we can form the quotient space

S = \mathbb{C}^2/\Gamma

This is a smooth manifold except at the origin—that is, the point coming from 0 \in \mathbb{C}^2. There’s a singularity at the origin, and this is where \mathrm{E}_8 is hiding! The reason is that there’s a smooth manifold \widetilde{S} and a map

\pi : \widetilde{S} \to S

that’s one-to-one and onto except at the origin. It maps 8 spheres to the origin! There’s one of these spheres for each dot here:

Two of these spheres intersect in a point if their dots are connected by an edge; otherwise they’re disjoint.
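
To be concrete about that intersection pattern (my own illustrative encoding; the numbering of the dots is an arbitrary choice of mine), the rule is just the edge set of the E8 Dynkin diagram: a chain of seven dots, with an eighth dot attached to the third dot from one end.

# My own encoding of the intersection pattern: sphere i and sphere j meet in a
# point exactly when (i, j) is an edge of the E8 Dynkin diagram.
e8_edges = {(1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7), (3, 8)}

def spheres_meet(i, j):
    return (min(i, j), max(i, j)) in e8_edges

print(spheres_meet(3, 8), spheres_meet(1, 7))   # True False: 3 and 8 meet, 1 and 7 are disjoint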

The challenge is to find a nice concrete description of \widetilde{S}, the map \pi : \widetilde{S} \to S, and these 8 spheres.

But first it’s good to get a mental image of S. Each point in this space is a \Gamma orbit in \mathbb{C}^2, meaning a set like this:

\{g x : \; g \in \Gamma \}

for some x \in \mathbb{C}^2. For x = 0 this set is a single point, and that’s what I’ve been calling the ‘origin’. In all other cases it’s 120 points, the vertices of a 600-cell in \mathbb{C}^2. This 600-cell is centered at the point 0 \in \mathbb{C}^2, but it can be big or small, depending on the magnitude of x.

So, as we take a journey starting at the origin in S, we see a point explode into a 600-cell, which grows and perhaps also rotates as we go. The origin, the singularity in S, is a bit like the Big Bang.

Unfortunately not every 600-cell centered at the origin is of the form I’ve shown:

\{g x : \; g \in \Gamma \}

It’s easiest to see this by thinking of points in 4d space as quaternions rather than elements of \mathbb{C}^2. Then the points g \in \Gamma are unit quaternions forming the vertices of a 600-cell, and multiplying g on the right by x dilates this 600-cell and also rotates it… but we don’t get arbitrary rotations this way. To get an arbitrarily rotated 600-cell we’d have to use both a left and right multiplication, and consider

\{x g y : \; g \in \Gamma \}

for a pair of quaternions x, y.

Luckily, there’s a simpler picture of the space S. It’s the space of all regular icosahedra centered at the origin in 3d space!

To see this, we start by switching to the quaternion description, which says

S = \mathbb{H}/\Gamma

Specifying a point x \in \mathbb{H} amounts to specifying the magnitude \|x\| together with x/\|x\|, which is a unit quaternion, or equivalently an element of \mathrm{SU}(2). So, specifying a point in

\{g x : \; g \in \Gamma \} \in \mathbb{H}/\Gamma

amounts to specifying the magnitude \|x\| together with a point in \mathrm{SU}(2)/\Gamma. But \mathrm{SU}(2) modulo the binary icosahedral group \Gamma is the same as \mathrm{SO}(3) modulo the icosahedral group (the rotational symmetry group of an icosahedron). Furthermore, \mathrm{SO}(3) modulo the icosahedral group is just the space of unit-sized icosahedra centered at the origin of \mathbb{R}^3.

So, specifying a point

\{g x : \; g \in \Gamma \} \in \mathbb{H}/\Gamma

amounts to specifying a nonnegative number \|x\| together with a unit-sized icosahedron centered at the origin of \mathbb{R}^3. But this is the same as specifying an icosahedron of arbitrary size centered at the origin of \mathbb{R}^3. There’s just one subtlety: we allow the size of this icosahedron to be zero, but then the way it’s rotated no longer matters.

So, S is the space of icosahedra centered at the origin, with the ‘icosahedron of zero size’ being a singularity in this space. When we pass to the smooth manifold \widetilde{S}, we replace this singularity with 8 spheres, intersecting in a pattern described by the \mathrm{E}_8 Dynkin diagram.

Points on these spheres are limiting cases of icosahedra centered at the origin. We can approach these points by letting an icosahedron centered at the origin shrink to zero size in a clever way, perhaps spinning about wildly as it does.

I don’t understand this last paragraph nearly as well as I’d like! I’m quite sure it’s true, and I know a lot of relevant information, but I don’t see it. There should be a vivid picture of how this works, not just an abstract argument. Next time I’ll start trying to assemble the material that I think needs to go into building this vivid picture.

by john (baez@math.ucr.edu) at June 20, 2017 05:51 AM

Clifford V. Johnson - Asymptotia

Random Machinery

Making up random (ish) bits of machinery can be lots of fun!

(Click for larger view. This is for a short story I was asked to write, to appear next year.)

-cvj Click to continue reading this post

The post Random Machinery appeared first on Asymptotia.

by Clifford at June 20, 2017 04:45 AM

June 19, 2017


CERN Bulletin

Registrations for the 2017 Summer Camp : there are still places available!

The CERN Staff Association’s Summer Camp will be open for 4- to 6-year-old children for four weeks, from 3 to 28 July. Registration is offered on a weekly basis for 450 CHF, lunch included. A maximum of 24 children can attend the camp per week.

This year, the various activities will revolve around the theme of the Four Elements. Every week, one of the elements will be the core of all activities and explored through cultural outings, arts and crafts, stories, music, sports activities and scientific workshops, with or without special guests.

The general conditions are available on the website of EVE and School of the CERN Staff Association: http://nurseryschool.web.cern.ch.

For further questions and registration, please contact us by email at Summer.Camp@cern.ch.

June 19, 2017 04:06 PM

CERN Bulletin

Concertation rather than Consultation or Negotiation!

At CERN, the Concertation between the Management and the Personnel has been in effect since 1983, the year in which the Standing Concertation Committee (SCC) came to replace the Standing Consultation Committee.

Since then, the concertation process has been enshrined in the Staff Rules and Regulations, which define its scope of application: “Any proposed measures of a general nature regarding the conditions of employment or association of members of the personnel shall be the subject of discussion within the SCC” (S VII 1.08). More generally, all questions relating to the employment and working conditions of the members of personnel are discussed in the SCC, including in particular issues of remuneration, social protection (CHIS and Pension Fund), career evolution...

In Article S VII 1.07 of the Staff Rules and Regulations it is also stated that: “Discussion shall mean a procedure whereby the Director-General and the Staff Association concert together to try to reach a common position”. This process thus enables the Management and the Personnel, via the Staff Association1, to develop a common position in response to the expectations of the Member States, the Management, and the Personnel.

Also in 1983, a study group was launched to draft detailed proposals for establishing a permanent tripartite structure, eventually leading up to the creation of the Tripartite Forum on Conditions of Employment (TREF) in 1994.

The purpose of this Forum is to “oversee the collection of information and to stimulate communication and discussion between representatives of the Member States, the CERN Management and the CERN Staff Association”. Thus, the Forum provides each partner with the opportunity to express their views in order to facilitate the decision-making of the Finance Committee and the CERN Council.

This concertation process is unique. It does not stem from co-management or consultation. Rather, it is an integral part of a mechanism for sharing ideas and visions to manage the relations with the Personnel, and to ensure the future of the Organization.

The Staff Association is very committed to this process, which allows for exchange between the Member States, the Management and the Personnel, thus fostering transparency, commitment and mutual respect.

1 The Staff Association is the only statutory organ authorised to represent collectively the CERN personnel.

June 19, 2017 04:06 PM

CERN Bulletin

CERN Yoga club

Members of the CERN Yoga club are invited to the General Assembly of the club which will take place on:

Wednesday 5 July at 14.00
in conference room 504-E-005 (Next to Yoga room)

The agenda is available on the club's website: cern.ch/club-yoga/

If you are unable to participate, please designate a proxy voter to vote on your behalf (proxy forms are available from your teacher or in the Yoga room) – these can be sent to: cernyoga@cern.ch

June 19, 2017 03:06 PM

CERN Bulletin

Offers for our members

Summer is here, enjoy our offers for the aquatic parks!

Walibi:
Tickets "Zone terrestre": 24 € instead of 30 €.

Access to Aqualibi: 5 € instead of 6 € on presentation of your SA member ticket.

Free for children under 100 cm.

Car park free.

* * * * *

Aquaparc:
Day ticket:
Children: 33 CHF instead of 39 CHF
Adults: 33 CHF instead of 49 CHF

Bonus! Free for children under 5.

June 19, 2017 03:06 PM

CERN Bulletin

Tribute

Having learnt of the death of Dr Bjørn Jacobsen, Norwegian delegate to the CERN Council, the Staff Association joins the colleagues and the CERN Management in paying tribute to this dear friend of the Organization.

The Staff Association wishes to express its sincerest condolences to the family of Dr Bjørn Jacobsen and his loved ones.

The President of the CERN Staff Association, on behalf of the CERN personnel

June 19, 2017 03:06 PM

June 18, 2017

Sean Carroll - Preposterous Universe

A Response to “On the time lags of the LIGO signals” (Guest Post)

This is a special guest post by Ian Harry, postdoctoral physicist at the Max Planck Institute for Gravitational Physics, Potsdam-Golm. You may have seen stories about a paper that recently appeared, which called into question whether the LIGO gravitational-wave observatory had actually detected signals from inspiralling black holes, as they had claimed. Ian’s post is an informal response to these claims, on behalf of the LIGO Scientific Collaboration. He argues that there are data-analysis issues that render the new paper, by James Creswell et al., incorrect. Happily, there are sufficient online tools that this is a question that interested parties can investigate for themselves. Here’s Ian:


On 13 Jun 2017 a paper appeared on the arXiv titled “On the time lags of the LIGO signals” by Creswell et al. This paper calls into question the 5-sigma detection claim of GW150914 and following detections. In this short response I will refute these claims.

Who am I? I am a member of the LIGO collaboration. I work on the analysis of LIGO data, and for 10 years have been developing searches for compact binary mergers. The conclusions I draw here have been checked by a number of colleagues within the LIGO and Virgo collaborations. We are also in touch with the authors of the article to raise these concerns directly, and plan to write a more formal short paper for submission to the arXiv explaining in more detail the issues I mention below. In the interest of timeliness, and in response to numerous requests from outside of the collaboration, I am sharing these notes in the hope that they will clarify the situation.

In this article I will go into some detail to try to refute the claims of Creswell et al. Let me start, though, by trying to give a brief overview. In Creswell et al. the authors take LIGO data from the Hanford and Livingston observatories, made available through the LIGO Open Science Center, and perform a simple Fourier analysis on that data. They find the noise to be correlated as a function of frequency. They also perform a time-domain analysis and claim that there are correlations between the noise in the two observatories, which are present after removing the GW150914 signal from the data. These results are used to cast doubt on the reliability of the GW150914 observation. There are a number of reasons why this conclusion is incorrect:

1. The frequency-domain correlations they are seeing arise from the way they do their FFT on the filtered data. We have managed to demonstrate the same effect with simulated Gaussian noise.

2. LIGO analyses use whitened data when searching for compact binary mergers such as GW150914. When repeating the analysis of Creswell et al. on whitened data these effects are completely absent.

3. Our 5-sigma significance comes from a procedure of repeatedly time-shifting the data, which is not invalidated if correlations of the type described in Creswell et al. are present.

Section II: The curious case of the Fourier phase correlations?

The main result (in my opinion) from section II of Creswell et al. is Figure 3, which shows that, when one takes the Fourier transform of the LIGO data containing GW150914, and plots the Fourier phases as a function of frequency, one can see a clear correlation (i.e. all the points line up, especially for the Hanford data). I was able to reproduce this with the LIGO Open Science Center data and a small ipython notebook. I have made the ipython notebook available so that the reader can see this and some additional plots, and reproduce them.

For Gaussian noise we would expect the Fourier phases to be distributed randomly (between -pi and pi). Clearly in the plot shown above, and in Creswell et al., this is not the case. However, the authors overlooked one critical detail here. When you take a Fourier transform of a time series you are implicitly assuming that the data are cyclical (i.e. that the first point is adjacent to the last point). For colored Gaussian noise this assumption will lead to a discontinuity in the data at the two end points, because these data are not causally connected. This discontinuity can be responsible for misleading plots like the one above.

To try to demonstrate this I perform two tests. First I whiten the colored LIGO noise by measuring the power spectral density (see the LOSC example, which I use directly in my ipython notebook, for some background on colored noise and noise power spectral density), then dividing the data in the Fourier domain by the power spectral density, and finally converting back to the time domain. This process will corrupt some data at the edges so after whitening we only consider the middle half of the data. Then we can make the same plot:

And we can see that there are now no correlations visible in the data. For white Gaussian noise there is no correlation between adjacent points, so no discontinuity is introduced when treating the data as cyclical. I therefore assert that Figure 3 of Creswell et al. actually has no meaning when generated using anything other than whitened data.
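
For readers following along with the LOSC data, the whitening step can be sketched roughly as follows. This is my own minimal reconstruction of the idea, not the notebook's code: it assumes `strain` is a numpy array of detector data sampled at `fs` Hz and divides by the amplitude spectral density, i.e. the square root of the PSD estimate, which is the usual convention; the real tutorials do this more carefully.

# A rough whitening sketch (assumptions as described in the text above).
import numpy as np
from scipy import signal

def whiten(strain, fs, seglen=4):
    """Crudely whiten data by dividing out an estimate of its amplitude spectral density."""
    freqs = np.fft.rfftfreq(len(strain), d=1.0 / fs)
    f_psd, psd = signal.welch(strain, fs=fs, nperseg=seglen * fs)   # one-sided PSD estimate
    asd = np.interp(freqs, f_psd, np.sqrt(psd))
    return np.fft.irfft(np.fft.rfft(strain) / asd, n=len(strain))

# whitening corrupts the edges, so keep only the middle half before plotting the
# Fourier phases; for properly whitened data they should fill (-pi, pi] uniformly
# white = whiten(strain, fs)
# mid = white[len(white) // 4 : 3 * len(white) // 4]
# phases = np.angle(np.fft.rfft(mid))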

I would also like to mention that measuring the noise power spectral density of LIGO data can be challenging when the data are non-stationary and include spectral lines (as Creswell et al. point out). Therefore it can be difficult to whiten data in many circumstances. For the Livingston data some of the spectral lines are still clearly present after whitening (using the methods described in the LOSC example), and then mild correlations are present in the resulting plot (see ipython notebook). This is not indicative of any type of non-Gaussianity, but demonstrates that measuring the noise power-spectral density of LIGO noise is difficult, and, especially for parameter inference, a lot of work has been spent on answering this question.

To further illustrate that features like those seen in Figure 3 of Creswell et al. can be seen in known Gaussian noise I perform an additional check (suggested by my colleague Vivien Raymond). I generate a 128 second stretch of white Gaussian noise (using numpy.random.normal) and invert the whitening procedure employed on the LIGO data above to produce 128 seconds of colored Gaussian noise. Now the data, previously random, are ‘colored’. Coloring the data in the manner I did makes the full data set cyclical (the last point is correlated with the first), so taking the Fourier transform of the complete data set, I see the expected random distribution of phases (again, see the ipython notebook). However, if I select 32s from the middle of this data, introducing a discontinuity as I mentioned above, I can produce the following plot:

In other words, I can produce an even more extremely correlated example than on the real data, with actual Gaussian noise.
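
A rough version of that check fits in a few lines of numpy. This is my reconstruction of the idea, not the author's notebook, and the smooth "detector-like" spectrum below is made up purely for illustration:

import numpy as np

rng = np.random.default_rng(1)
fs, T = 4096, 128
white = rng.normal(size=T * fs)                      # 128 s of white Gaussian noise

# color the noise by multiplying by an arbitrary smooth amplitude spectrum
freqs = np.fft.rfftfreq(len(white), d=1.0 / fs)
fake_asd = 1.0 / (1.0 + (freqs / 50.0) ** 2)         # made-up low-pass shape
colored = np.fft.irfft(np.fft.rfft(white) * fake_asd, n=len(white))

# the full 128 s stretch is cyclical by construction, so its phases look random;
# a 32 s cut from the middle has an end-point discontinuity, and its phases can
# line up at frequencies where the discontinuity dominates the suppressed noise
phases_full = np.angle(np.fft.rfft(colored))
mid = colored[48 * fs : 80 * fs]
phases_mid = np.angle(np.fft.rfft(mid))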

Section III: The data is strongly correlated even after removing the signal

The second half of Creswell et al. explores correlations between the data taken from Hanford and Livingston around GW150914. For me, the main conclusion here is communicated in Figure 7, where Creswell et al. claim that even after removal of the GW150914 best-fit waveform there is still correlation in the data between the two observatories. This is a result I have not been able to reproduce. Nevertheless, if such a correlation were present it would suggest that we have not perfectly subtracted the real signal from the data, which would not invalidate any detection claim. There could be any number of reasons for this, for example the fact that our best-fit waveform will not exactly match what is in the data as we cannot measure all parameters with infinite precision. There might also be some deviations because the waveform models we used, while very good, are only approximations to the real signal (LIGO put out a paper quantifying this possibility). Such deviations might also be indicative of a subtle deviation from general relativity. These are of course things that LIGO is very interested in pursuing, and we have published a paper exploring potential deviations from general relativity (finding no evidence for that), which includes looking for a residual signal in the data after subtraction of the waveform (and again finding no evidence for that).

Finally, LIGO runs “unmodelled” searches, which do not search for specific signals, but instead look for any coherent non-Gaussian behaviour in the observatories. These searches actually were the first to find GW150914, and did so with remarkably consistent parameters to the modelled searches, something which we would not expect to be true if the modelled searches are “missing” some parts of the signal.

With that all said I try to reproduce Figure 7. First I begin by cross-correlating the Hanford and Livingston data, after whitening and band-passing, in a very narrow 0.02s window around GW150914. This produces the following:

There is a clear spike here at 7ms (which is GW150914), with some expected “ringing” behaviour around this point. This is a much less powerful method to extract the signal than matched-filtering, but it is completely signal independent, and illustrates how loud GW150914 is. Creswell et al., however, do not discuss their normalization of this cross-correlation, or how likely a deviation like this is to occur from noise alone. Such a study would be needed before stating that this is significant. In this case we know this signal is significant from other, more powerful, tests of the data. Then I repeat this but after having removed the best-fit waveform from the data in both observatories (using only products made available in the LOSC example notebooks). This gives:

This shows nothing interesting at all.
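
For completeness, here is roughly what such a narrow-window cross-correlation looks like in code. This is a sketch under the assumption that `h1` and `l1` are whitened, band-passed, time-aligned strain arrays sampled at `fs`; the function and variable names are mine, not the post's.

import numpy as np

def windowed_crosscorr(h1, l1, fs, center, half_width=0.01, max_lag=0.01):
    """Normalized cross-correlation of a 2*half_width window around `center`
    (seconds), for lags up to +/- max_lag seconds."""
    i0, i1 = int((center - half_width) * fs), int((center + half_width) * fs)
    a = h1[i0:i1] - np.mean(h1[i0:i1])
    lags = np.arange(-int(max_lag * fs), int(max_lag * fs) + 1)
    cc = []
    for lag in lags:
        b = l1[i0 + lag:i1 + lag] - np.mean(l1[i0 + lag:i1 + lag])
        cc.append(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return lags / fs, np.array(cc)

# usage sketch: lags, cc = windowed_crosscorr(h1_white, l1_white, 4096, t_event)
# a peak at a lag of about 7 ms (sign depending on convention) corresponds to the
# spike described above; after subtracting the best-fit waveform it should vanish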

Section IV: Why would such correlations not invalidate the LIGO detections?

Creswell et al. claim that correlations between the Hanford and Livingston data, which in their results appear to be maximized around the time delay reported for GW150914, raised questions on the integrity of the detection. They do not. The authors claim early on in their article that LIGO data analysis assumes that the data are Gaussian, independent and stationary. In fact, we know that LIGO data are neither Gaussian nor stationary and if one reads through the technical paper accompanying the detection PRL, you can read about the many tests we run to try to distinguish between non-Gaussianities in our data and real signals. But in doing such tests, we raise an important question: “If you see something loud, how can you be sure it is not some chance instrumental artifact, which somehow was missed in the various tests that you do”. Because of this we have to be very careful when assessing the significance (in terms of sigmas—or the p-value, to use the correct term). We assess the significance using a process called time-shifts. We first look through all our data to look for loud events within the 10ms time-window corresponding to the light travel time between the two observatories. Then we look again. Except the second time we look we shift ALL of the data from Livingston by 0.1s. This delay is much larger than the light travel time so if we see any interesting “events” now they cannot be genuine astrophysical events, but must be some noise transient. We then repeat this process with a 0.2s delay, 0.3s delay and so on up to time delays on the order of weeks long. In this way we’ve conducted of order 10 million experiments. For the case of GW150914 the signal in the non-time shifted data was louder than any event we saw in any of the time-shifted runs—all 10 million of them. In fact, it was still a lot louder than any event in the time-shifted runs as well. Therefore we can say that this is a 1-in-10-million event, without making any assumptions at all about our noise. Except one. The assumption is that the analysis with Livingston data shifted by e.g. 8s (or any of the other time shifts) is equivalent to the analysis with the Livingston data not shifted at all. Or, in other words, we assume that there is nothing special about the non-time shifted analysis (other than it might contain real signals!). As well as the technical papers, this is also described in the science summary that accompanied the GW150914 PRL.
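
The time-shift idea is easy to illustrate with a toy model. The sketch below is purely schematic and nothing like the real pipeline: slide one data stream by unphysical offsets, record the loudest "coincidence" in each slide, and ask how often that background beats the zero-lag result.

import numpy as np

rng = np.random.default_rng(42)
fs = 10                              # toy rate: one "ranking statistic" per 0.1 s
n = 4096 * fs
hanford = rng.normal(size=n)         # stand-ins for single-detector statistics
livingston = rng.normal(size=n)

def loudest(a, b):
    # toy coincident statistic: loudest quadrature sum over the whole stretch
    return np.max(np.hypot(a, b))

zero_lag = loudest(hanford, livingston)

# background: shift Livingston by 0.1 s, 0.2 s, ... (far beyond the 10 ms light
# travel time), so any "coincidence" in a slid analysis cannot be astrophysical
background = np.array([loudest(hanford, np.roll(livingston, k))
                       for k in range(1, 2001)])

p_value = (1 + np.sum(background >= zero_lag)) / (1 + len(background))
print(zero_lag, p_value)   # noise only: zero_lag is not an outlier; a real signal
                           # would beat (nearly) every slide, giving a tiny p-value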

Nothing in the paper “On the time lags of the LIGO signals” suggests that the non-time shifted analysis is special. The claimed correlations between the two detectors due to resonance and calibration lines in the data would also be present in the time-shifted analyses. The calibration lines are repetitive, and so if they are correlated in the non-time-shifted analyses, they will also be correlated in the time-shifted analyses. I should also note that potential correlated noise sources were explored in another of the companion papers to the GW150914 PRL. Therefore, taking the results of this paper at face value, I see nothing that calls into question the “integrity” of the GW150914 detection.

Section V: Wrapping up

I have tried to reproduce the results quoted in “On the time lags of the LIGO signals”. I find the claims of section 2 are due to an issue in how the data is Fourier transformed, and do not reproduce the correlations claimed in section 3. Even if taking the results at face value, it would not affect the 5-sigma confidence associated with GW150914. Nevertheless I am in contact with the authors and we will try to understand these discrepancies.

For people interested in trying to explore LIGO data, check out the LIGO Open Science Center tutorials. As someone who was involved in the internal review of the LOSC data products, it is rewarding to see these materials being used. It is true that these tutorials are intended as an introduction to LIGO data analysis, and do not accurately reflect many of the intricacies of these studies. For the interested reader, a number of technical papers, for example this one, accompany the main PRL, and within this paper and its references you can find all the nitty-gritty about how our analyses work. Finally, the PyCBC analysis toolkit, which was used to obtain the 5-sigma confidence, and of which I am one of the primary developers, is available open-source on GitHub. There are instructions here and also a number of examples that illustrate various aspects of our data analysis methods.

This article was circulated in the LIGO-Virgo Public Outreach and Education mailing list before being made public, and I am grateful for comments and feedback from: Christopher Berry, Ofek Birnholtz, Alessandra Buonanno, Gregg Harry, Martin Hendry, Daniel Hoak, Daniel Holz, David Keitel, Andrew Lundgren, Harald Pfeiffer, Vivien Raymond, Jocelyn Read and David Shoemaker.

by Sean Carroll at June 18, 2017 09:18 PM

June 16, 2017

Robert Helling - atdotde

I got this wrong
In yesterday's post, I totally screwed up when identifying the middle part of the spectrum as low frequency. It is not. Please ignore what I said, or better, take it as a warning of what happens when you don't double-check.

Apologies to everybody that I stirred up!

by Robert Helling (noreply@blogger.com) at June 16, 2017 02:55 PM

Robert Helling - atdotde

Some DIY LIGO data analysis
UPDATE: After some more thinking about this, I have very serious doubts about my previous conclusions. From looking at the power spectrum, I (wrongly) assumed that the middle part of the spectrum is the low-frequency part (my original idea was that the frequencies should be symmetric around zero, but the periodicity of the Bloch cell bit me). So, quite to the opposite, when taking the wrapping into account, this is the high-frequency part (at almost the sample rate). So this is neither physics nor noise but the sample rate. For documentation, I do not delete the original post but leave it with this comment.


Recently, in the Arnold Sommerfeld Colloquium, we had Andrew Jackson of NBI talk about his take on the LIGO gravitational wave data; see this announcement with a link to a video recording. He encouraged the audience to download the freely available raw data and play with it a little bit. This sounded like fun, so I had my go at it. Now that his paper is out, I would like to share what I did with you and ask for your comments.

I used Mathematica for my experiments, so I guess the way to proceed is to guide you to an HTML export of my (admittedly cleaned-up) notebook (source for your own experiments here).

The executive summary is that, apparently, you can eliminate most of the "noise" in the interesting low-frequency part by adding to the signal its time reversal, casting some doubt on the stochasticity of this "noise".


I would love to hear what this is supposed to mean or what I am doing wrong, in particular from my friends in the gravitational wave community.



by Robert Helling (noreply@blogger.com) at June 16, 2017 02:47 PM

Tommaso Dorigo - Scientificblogging

Higgs Boson-Inspired Artwork By High-School Students
The "Art&Science" project is coming to the final phase as far as the activities in Venice are concerned. About 100 15 to 17-year-old students from high schools in Venice have assisted to lessons on particle physics and the Higgs boson in the past months, and have been challenged to produce, alone or in groups of up to three, artistic compositions inspired by what they had learned. This resulted in 38 artworks, many of which are really interesting. The 17 best works will be exposed at the Palazzo del Casinò of the Lido of Venice, the site of the international EPS conference, next July 5-12, and the three best among them will receive prizes during a public event on July 8th, in presence of the CERN director general Fabiola Gianotti.

read more

by Tommaso Dorigo at June 16, 2017 12:02 PM

Emily Lakdawalla - The Planetary Society Blog

When New Horizons' next target passed in front of a star, this scientist was watching from Argentina
A team of scientists recently traveled to rural Argentina in the hopes of catching New Horizons' next target—Kuiper Belt object MU 69—crossing in front of a distant star.

June 16, 2017 11:00 AM

Lubos Motl - string vacua and pheno

Loop quantum gravity was Aryan physics of aether reloaded
Philipp Lenard won a physics Nobel prize and was widely regarded as a top ethnic German physicist as recently as in the 1930s and 1940s.

This fact sounds utterly bizarre today (but yes, aside from Heisenberg, Born, and Jordan, plus the founder of "quantum theory" Max Planck, all the German-sounding founders of quantum mechanics tended to be Swiss or Austrian etc. – maybe this weakness of Germany was affected by the enhanced anti-Semitism and other ideologies in that country). He got his Nobel for cathode rays. They were streams of electrons – I think that you need this to be explained because the very phrase doesn't sound important today. Moreover, J.J. Thomson, Johann Hittorf, and Eugen Goldstein were arguably more vital and earlier discoverers of the cathode ray, in the same way in which Conrad Röntgen was the actual discoverer of the X-rays, even though Lenard wanted to take credit for those, too.

As the "Genius" series on National Geographic reminds us, Lenard was a top Nazi hater of Einstein – and a top warrior against modern physics which he called "Jewish physics". In 2015, Bruce Hillman wrote the book The Man Who Stalked Einstein: How Nazi Scientist Philipp Lenard Changed the Course of History which is extremely interesting because it reveals that the Šmoit-style criticism of modern physics which I considered to be a recent phenomenon isn't new, after all. It's just the crackpottery of the Aryan physics reloaded.




If you don't want to buy and read that book, you should at least read the 2015 Phys.org article When science gets ugly – the story of Philipp Lenard and Albert Einstein. The article reminds us of some basic facts, e.g.:
Lenard argued that Einstein's hyper-theoretical and hyper-mathematical approach to physics was exerting a pernicious influence in the field. The time had come, he argued, to restore experimentalism to its proper place. He also launched a malicious attack on Einstein, making little attempt to conceal his antipathy toward Jews.
OK, so according to Lenard, relativity and modern physics in general was "hyper-theoretical" and "hyper-mathematical" and was exerting a pernicious effect on the whole field (of physics). Experimentalism had to be restored etc. Haven't you heard these things somewhere? Well, yes, it's exactly the anti-theoretical-physics garbage that was resuscitated by people like Woit and Smolin a decade ago.

Note that Lenard was "capable" of saying and writing this rubbish without any references to Karl Popper. Popper is just a redundant and worthless extra label that was recently glued on top of these anti-science ideas. The essence of the anti-theoretical-physics sentiment has always been the same: The people who criticize theoretical physics simply hate the idea that the mathematical reasoning becomes a key or the key portion of the physics research. They are just not talented enough for that.




So for some time, I was aware of the fact that the general philosophical tenets of the attacks against theoretical physics are very old. But only Hillman's book and some of the recent texts about the Einstein-Lenard animosity that I saw, e.g. this textbook of German physics by Lenard (written in the German Gothic fonts, of course, even though Hitler abandoned those fonts as ancient and reducing Germany's international influence 3 years prior to the publication of Lenard's book), a book about aether, and a presentation about aether theories have opened my eyes further.

It wasn't just the irrational hateful anti-theoretical physics philosophy that the Šmoits have copied from Aryan physics. The basic technical framework of "Smolin-style" alternative theories was pretty much plagiarized from the Aryan physics, too.

OK, let's look at the history a little bit. Lenard did some cathode ray work in 1888. Once the 20th century – and modern physics – began, he basically stopped doing anything new in physics that had value. That's a lot of time to do things like the German physics – he lived until 1947. At any rate, there was a moment when Lenard was far more well-known than the young Einstein and these two men had a respectful relationship. It deteriorated soon and became terrible. My understanding is that the transformation took place quickly during the year 1909.

A big part of the sentiment boiled down to Lenard's visceral anti-Semitism. But I think that one can see that it didn't explain all the tension. It was at least equally important that Lenard was a full-blown crackpot when it came to anything that depends on modern theoretical physics. He was fighting against relativity, even in the 1930s, three decades after every competent physicist knew that relativity was a fact. You could dismiss this anti-relativity sentiment as a consequence of Lenard's anti-Semitism. It may have been primary for him to preserve the idea that a Jewish physicist couldn't have done any groundbreaking work – try to appreciate how indefensible this thesis became after the war, when over 1/2 of the key advances in cutting-edge physics were done by Jewish physicists.

In some quotes, the influence of Lenard's racist sentiments on his opinions about physics (and the completely omnipresent yet incoherent mixing of scientific and political ideas, something that is so typical for the Smolins, Woits, and Hossenfelders of the present as well) was obvious. For example, Lenard once whined:
"Then the Jew came and caused an upheaval," he wrote at the time, "with his abolition of the concept of ether, and ridiculously enough, even the oldest authorities followed him. They suddenly felt powerless when confronted with the Jew. This is how the Jewish spirit started to rule over physics."
But he also had his would-be "constructive" contributions to physics that were supposed to provide physics with alternatives. The 1944 textbook on German physics is basically just a textbook of classical physics. As far as I can say, the adjectives "German" and "classical" were basically synonymous if you looked at the content and basic axiomatic framework of physics. Well, "German" could have meant "classical and preferably experimental", too. But aside from this outdated but basically uncontroversial physics, he was trying to promote concepts on the would-be cutting edge of physics. And his main proposals were theories of the aether.

As this presentation tells us, Lenard wrote two books about aether:
  • “Über Äther und Materie”, 1911
  • “Über Äther und Uräther”, 1922
Note that these happened to be published during the years that Einstein spent in Prague, Czech lands – Lenard was born in Bratislava, Hungary, in the Austrian Empire. Well, when those books were published, Einstein was already open about his certainty that Lenard turned into a full-blown crank.

The first book, "On Aether and Matter", was mostly about the old electromagnetic aether. The second book, "On Aether and Fore-Aether" or "On Aether and Aether of Space", was partly a meme about some spin-network-like structure in the empty space similar to loop quantum gravity.

As the "Genius" series superficially sketches, Lenard had a young assistant named Jakob Johann Laub. Well, it turned out that Laub was a huge fan of Einstein's work (which implied that Laub felt certain about relativity and non-existence of aether) but Lenard ordered Laub to work to prove aether, anyway. And in 1909, as Hillman's book discusses in detail, Laub and Einstein exchanged some correspondence. Einstein was just exposed to a lecture by Lenard and he had compassion with Laub. Einstein wrote Laub:
Lenard must, however, in many things, be wound quite askew. His recent lecture on these fanciful ethers appears to me almost infantile. Further, the study he commanded of you... borders on the absurd. I am sorry that you must spend your time on such stupidity.
It went so far that Einstein promised to find Laub a new job. Lenard didn't want to allow it, insisting that Laub continue with the infantile aether work Lenard had ordered until the moment Laub found the new job. "This is really a twisted fellow, Lenard," Einstein pointed out after hearing this.



To write whole physics books about aether in 1911 or even 1922 is, well, plain retarded. These books are examples of the episodes in the history of physics that look chronologically wrong: some people happen to land in a later century than the one in which they should have spent their lives. Einstein was surely convinced about the non-existence of aether years before 1905. But at any rate, in 1905, he clarified all the puzzling things about space and time related to the speed of light and it made sense. His special theory of relativity made it clear that all inertial frames are equally good for formulating the laws of physics – and not even in the vacuum can you determine which of the inertial frames is privileged. None of them is.

It follows that the spacetime just cannot contain any substance or objects that could pick a privileged frame. Period. The question was settled. Nevertheless, Lenard continued to promote the meme that the electromagnetic phenomena required luminiferous aether. He did so even years after he "sort of" accepted general relativity. This attitude was extremely unnatural – when gravity could be described by fields in the otherwise empty vacuum, why should electromagnetism be completely different? They clearly look totally analogous. (Also, there was a sense in which Lenard accepted general relativity "more than" the special one. Just imagine how illogical this attitude was. Did he justify it in some way? Not really, political shields were already primary for him and they didn't care that general relativity is, well, a generalization of the special one so it requires the special one.) He was constructing this aether out of some pieces resembling "LEGO" – and that's probably the main reason why Einstein called it (and why I independently called loop quantum gravity and similar memes) "infantile".

Much more generally, the whole way of thinking of Lenard and the loop quantum gravity and other "spacetime built of pieces" champions is childish because it imagines that at the bottom of things, the simplest objects in Nature such as elementary particles or regions of the empty space are made of pieces or objects similar to those that kids may play with. What do adult modern physicists actually think? Well, the theories needed to understand the behavior of the vacuum or elementary particles are also made of pieces – but the pieces are terms in mathematical expressions, such as this Lagrangian of the Standard Model:



You know, the idea that is obvious to everyone who thinks like a modern physicist is that the mathematical structures and constraints are fundamental and primary; and "objects" that we can see are their consequences. Lenard and various "discrete physics" proponents that still exist have this basic point upside down. They are imagining that some "pictures" are primary and fundamental while mathematics is at most a "servant" that we use to describe this visualizable reality in a way that looks more credible. But mathematics cannot be just a "servant" in this sense. Fundamental laws of physics must be intrinsically formulated in the language of mathematics. The explanations "why this part of the theory works in one way or another" must be mathematical in nature, too.

Just compare the deep difference between the colorful picture of the spinfoam – which is basically equivalent to Lenard's infantile models of aether, at the level of detail that we follow in this text – and the Lagrangian. It's a difference between pictures and calculations, geometry and algebra. Now, I am not saying that geometry has no place in physics. All of physics may be phrased as "some generalized geometry". But if you need to understand the precise rules that govern Nature at the fundamental level, you simply need some algebra. You need to convert the pictures to mathematical structures or constraints.

It is extremely unnatural to imagine that at the bottom, Nature is constructed out of some pieces that actually need lots of parameters – similar to macroscopic pieces of "LEGO" – to describe their shape or interactions. There is really no reason to think that the fundamental pieces should resemble the macroscopic ones. Even though he remained an opponent of quantum mechanics, Einstein always understood the basic point that the fundamental laws must be different than the everyday life objects in many characteristics – and that the possible laws of physics should be searched in the "space of mathematically possible candidate theories" i.e. by a mathematical selection and classification, not e.g. according to some instinctive experience with everyday objects. Even the top quantum mechanical folks have really credited Einstein for his help to elucidate this conceptual point.

At the end, the transition from classical physics to modern physics – which involved both relativity and especially quantum mechanics – was a classic and perhaps the most profound example of a revolution in science. A revolution in science simply demolishes some assumptions that have been considered holy for a very long time. And some people are just too dogmatic and incapable of seeing that such a transition is necessary.

My reading of this history has filled me with some preliminary optimism. The recent decade isn't the first moment when theoretical physics was facing completely irrational and dishonest critics who love to mix science and politics in scientifically unacceptable ways and whose sermons are ultimately addressed to the least demanding audiences. Despite Nazi Germany's dominance over most of Europe by 1942, the world recovered from that irrational political movement designed to cripple and enslave science. We have some chance that society will recover from the recent revival of these ideas once again.

by Luboš Motl (noreply@blogger.com) at June 16, 2017 09:34 AM

June 15, 2017

Symmetrybreaking - Fermilab/SLAC

From the cornfield to the cosmos

Fermilab celebrates 50 years of discovery.

Collage: 50 years of Fermilab

Imagine how it must have felt to be Robert Wilson in the spring of 1967. The Atomic Energy Commission had hired him as the founding director of the planned National Accelerator Laboratory. Before him was the opportunity to build the most powerful particle accelerator in the world—and to create a great new American laboratory dedicated to giving scientists extraordinary new capabilities to explore the universe. 

Fifty years later, we marvel at the boldness and scope of the project, and at the freedom, the leadership, the confidence and the vision that it took to conceive and build it. If anyone was up for the challenge, it was Wilson. 

By the early 1960s, the science of particle physics had outgrown its birthplace in university laboratories. The accelerators and detectors for advancing research had grown too big, complex and costly for any university to build and operate alone. Particle physics required a new model: national laboratories where the resources of the federal government would bring together the intellectual, scientific, engineering, technical and management capabilities to give collaborations of scientists the ability to explore scientific questions that could no longer be addressed at individual universities. 

The NAL, later renamed Fermi National Accelerator Laboratory, would be a national facility where university physicists—“users”—would be “at home and loved,” in the words of physicist Leon Lederman, who eventually succeeded Wilson as Fermilab director. The NAL would be a truly national laboratory rising from the cornfields west of Chicago, open to scientists from across the country and around the world. 

The Manhattan Project in the 1940s had shown the young Wilson—had shown the entire nation—what teams of physicists and engineers could achieve when, with the federal government’s support, they devoted their energy and capability to a common goal. Now, Wilson could use his skills as an accelerator designer and builder, along with his ability to lead and inspire others, to beat the sword of his Manhattan Project experience into the plowshare of a laboratory devoted to peacetime physics research.  

When the Atomic Energy Commission chose Wilson as NAL’s director, they may have been unaware that they had hired not only a gifted accelerator physicist but also a sculptor, an architect, an environmentalist, a penny-pincher (that they would have liked), an iconoclast, an advocate for human rights, a Wyoming cowboy and a visionary. 

Over the dozen years of his tenure Wilson would not only oversee the construction of the world’s most powerful particle accelerator, on time and under budget, and set the stage for the next generation of accelerators. He would also shape the laboratory with a vision that included erecting a high-rise building inspired by a French cathedral, painting other buildings to look like children’s building blocks, restoring a tall-grass prairie, fostering a herd of bison, designing an 847-seat auditorium (a venue for culture in the outskirts of Chicago), and adorning the site with sculptures he created himself. 

Fermilab physicist Roger Dixon tells of a student who worked for him in the lab’s early days.

“One night,” Dixon remembers, “I had Chris working overtime in a basement machine shop. He noticed someone across the way grinding and welding. When the guy tipped back his helmet to examine his work, Chris walked over and asked, ‘What’ve they got you doin’ in here tonight?’ The man said that he was working on a sculpture to go into the reflecting pond in front of the high rise. ‘Boy,’ Chris said, ‘they can think of more ways for you to waste your time around here, can’t they?’ To which Robert Wilson, welder, sculptor and laboratory director, responded with remarks Chris will never forget on the relationship of science, technology and art.”

Wilson believed a great physics laboratory should look beautiful. “It seemed to me,” he wrote, “that the conditions of its being a beautiful laboratory were the same conditions as its being a successful laboratory.”

With the passage of years, Wilson’s outsize personality and gift for eloquence have given his role in Fermilab’s genesis a near-mythic stature. In reality, of course, he had help. He used his genius for bringing together the right people with the right skills and knowledge at the right time to recruit and inspire scientists, engineers, technicians, administrators (and an artist) not only to build the laboratory but also to stick around and operate it. Later, these Fermilab pioneers recalled the laboratory’s early days as a golden age, when they worked all hours of the day and night and everyone felt like family. 

By 1972, the Main Ring of the laboratory’s accelerator complex was sending protons to the first university users, and experiments proliferated in the laboratory’s particle beams. In July 1977, Experiment E-288, a collaboration Lederman led, discovered the bottom quark. 

Physicist Patty McBride, who heads Fermilab’s Particle Physics Division, came to Fermilab in 1979 as a Yale graduate student. McBride’s most vivid memory of her early days at the laboratory is meeting people with a wide variety of life experiences. 

“True, there were almost no women,” she says. “But out in this lab on the prairie were people from far more diverse backgrounds than I had ever encountered before. Some, including many of the skilled technicians, had returned from serving in the Vietnam War. Most of the administrative staff were at least bilingual. We always had Russian colleagues; in fact the first Fermilab experiment, E-36, at the height of the Cold War, was a collaboration between Russian and American physicists. I worked with a couple of guest scientists who came to Fermilab from China. They were part of a group who were preparing to build a new accelerator at the Institute of High Energy Physics there.” 

The diversity McBride found was another manifestation of Wilson’s concept of a great laboratory.

“Prejudice has no place in the pursuit of knowledge,” he wrote. “In any conflict between technical expediency and human rights, we shall stand firmly on the side of human rights. Our support of the rights of the members of minority groups in our laboratory and its environs is inextricably intertwined with our goal of creating a new center of technical and scientific excellence.”

Designing the future

Advances in particle physics depend on parallel advances in accelerator technology. Part of an accelerator laboratory’s charge is to develop better accelerators—at least that’s how Wilson saw it. With the Main Ring delivering beam, it was time to turn to the next challenge. This time, he had a working laboratory to help.  

The designers of Fermilab’s first accelerator had hoped to use superconducting magnets for the Main Ring, but they soon realized that in 1967 it was not yet technically feasible. Nevertheless, they left room in the Main Ring tunnel for a next-generation accelerator. 

Wilson applied his teambuilding gifts to developing this new machine, christened the Energy Doubler (and later renamed the Tevatron). 

In 1972, he brought together an informal working group of metallurgists, magnet builders, materials scientists, physicists and engineers to begin investigating superconductivity, with the goal of putting this exotic phenomenon to work in accelerator magnets. 

No one had more to do with the success of the superconducting magnets than Fermilab physicist Alvin Tollestrup. Times were different then, he recalls.

“Bob had scraped up enough money from here and there to get started on pursuing the Doubler before it was officially approved,” Tollestrup says. “We had to fight tooth and nail for approval. But in those days, Bob could point the whole machine shop to do what we needed. They could build a model magnet in a week.”

It took a decade of strenuous effort to develop the superconducting wire, the cable configuration, the magnet design and the manufacturing processes to bring the world’s first large-scale superconducting accelerator magnets into production, establishing Fermilab’s leadership in accelerator technology. Those involved say they remember it as an exhilarating experience. 

By March 1983, the Tevatron magnets were installed underneath the Main Ring, and in July the proton beam in the Tevatron reached a world-record energy of 512 billion electronvolts. In 1985, a new Antiproton Source enabled proton-antiproton collisions that further expanded the horizons of the subatomic world. 

Two particle detectors—called the Collider Detector at Fermilab, or CDF, and DZero—gave hundreds of collaborating physicists the means to explore this new scientific territory. Design for CDF began in 1978, construction in 1982, and CDF physicists detected particle collisions in 1985. Fermilab’s current director, Nigel Lockyer, first came to work at Fermilab on CDF in 1984. 

“The sheer ambition of the CDF detector was enough to keep everyone excited,” he says. 

The DZero detector came online in 1992. A primary goal for both experiments was the discovery of the top quark, the heavier partner of the bottom quark and the last undiscovered quark of the six that theory predicted. Both collaborations worked feverishly to be the first to accumulate enough evidence for a discovery. 

In March 1995, CDF and DZero jointly announced that they had found the top. To spread the news, Fermilab communicators tried out a fledgling new medium called the World Wide Web.

Five decades of particle physics

Reaching new frontiers

Meanwhile, in the 1980s, growing recognition of the links between subatomic interactions and cosmology—between the inner space of particle physics and the outer space of astrophysics—led to the formation of the Fermilab Theoretical Astrophysics Group, pioneered by cosmologists Rocky Kolb and Michael Turner. Cosmology’s rapid evolution from theoretical endeavor to experimental science demanded large collaborations and instruments of increasing complexity and scale, beyond the resources of universities—a problem that particle physics knew how to solve. 

In the mid-1990s, the Sloan Digital Sky Survey turned to Fermilab for help. Under the leadership of former Fermilab Director John Peoples, who became SDSS director in 1998, the Sky Survey carried out the largest astronomical survey ever conducted and transformed the science of astrophysics.  

The discovery of cosmological evidence of dark matter and dark energy had profound implications for particle physics, revealing a mysterious new layer to the universe and raising critical scientific questions. What are the particles of dark matter? What is dark energy? In 2004, in recognition of Fermilab’s role in particle astrophysics, the laboratory established the Center for Particle Astrophysics. 

As the twentieth century ended and the twenty-first began, Fermilab’s Tevatron experiments defined the frontier of high-energy physics research. Theory had long predicted the existence of a heavy particle associated with particle mass, the Higgs boson, but no one had yet seen it. In the quest for the Higgs, Fermilab scientists and experimenters made a relentless effort to wring every ounce of performance from accelerator and detectors. 

The Tevatron had reached maximum energy, but in 1999 a new accelerator in the Fermilab complex, the Main Injector, began giving an additional boost to particles before they entered the Tevatron ring, significantly increasing the rate of particle collisions. The experiments continuously re-invented themselves using advances in detector and computing technology to squeeze out every last drop of data. They were under pressure, because the clock was ticking.  

A new accelerator with seven times the Tevatron’s energy was under construction at CERN, the European laboratory for particle physics in Geneva, Switzerland. When Large Hadron Collider operations began, its higher-energy collisions and state-of-the-art detectors would eclipse Fermilab’s experiments and mark the end of the Tevatron’s long run.

In the early 1990s, the Tevatron had survived what many viewed as a near-death experience with the cancellation of the Superconducting Super Collider, planned as a 54-mile ring that would surpass Fermilab’s accelerator, generating beams with 20 times as much energy. Construction began on the SSC’s Texas site in 1991, but in 1993 Congress canceled funding for the multibillion-dollar project. Its demise meant that, for the time being, the high-energy frontier would remain in Illinois. 

While the SSC drama unfolded, in Geneva the construction of the LHC went steadily onward—helped and supported by US physicists and engineers and by US funding. 

Among the more puzzling aspects of particle physics for those outside the field is the simultaneous competition and collaboration of scientists and laboratories. It makes perfect sense to physicists, however, because science is the goal. The pursuit of discovery drives the advancement of technology. Particle physicists have decades of experience in working collaboratively to develop the tools for the next generation of experiments, wherever in the world that takes them. 

Thus, even as the Tevatron experiments threw everything they had into the search for the Higgs, scientists and engineers at Fermilab—literally across the street from the CDF detector—were building advanced components for the CERN accelerator that would ultimately shut the Tevatron down.  

Going global

Just as in the 1960s particle accelerators had outgrown the resources of any university, by the end of the century they had outgrown the resources of any one country to build and operate. Detectors had long been international construction projects; now accelerators were, too, as attested by the superconducting magnets accumulating at Fermilab, ready for shipment to Switzerland.

As the US host for CERN’s CMS experiment, Fermilab built an LHC Remote Operations Center so that the growing number of US collaborating physicists could work on the experiment remotely. In the early morning hours of September 10, 2008, a crowd of observers watched on screens in the ROC as the first particle beam circulated in the LHC. Four years later, the CMS and ATLAS experiments announced the discovery of the Higgs boson. One era had ended, and a new one had begun. 

The future of twenty-first century particle physics, and Fermilab’s future, will unfold in a completely global context. More than half of US particle physicists carry out their research at LHC experiments. Now, the same model of international collaboration will create another pathway to discovery, through the physics of neutrinos. Fermilab is hosting the international Deep Underground Neutrino Experiment, powered by the Long-Baseline Neutrino Facility that will send the world’s most powerful beam of neutrinos through the earth to a detector more than a kilometer underground and 1300 kilometers away in the Sanford Underground Research Facility in South Dakota. 

“We are following the CERN model,” Lockyer says. “We have split the DUNE project into an accelerator facility and an experiment. Seventy-five percent of the facility will be built by the US, and 25 percent by international collaborators. For the experiment, the percentages will be reversed.” 

The DUNE collaboration now comprises more than 950 scientists from 162 institutions in 30 countries. “To design the project,” Lockyer says, “we started with a clean piece of paper and all of our international collaborators and their funding agencies in the room. They have been involved since t=0.”

In Lockyer’s model for Fermilab, the laboratory will keep its historic academic focus, giving scientists the tools to address the most compelling scientific questions. He envisions a diverse science portfolio with a flagship neutrino program and layers of smaller programs, including particle astrophysics. 

At the same time, he says, Fermilab feels mounting pressure to demonstrate value beyond creating knowledge. One potential additional pursuit involves using the laboratory’s unequaled capability in accelerator design and construction to build accelerators for other laboratories. Lockyer says he also sees opportunities to contribute the computing capabilities developed from decades of processing massive amounts of particle physics data to groundbreaking next-generation computing projects. “We have to dig deeper and reach out in new ways.”

In the five decades since Fermilab began, knowledge of the universe has flowered beyond anything we could have imagined in 1967. Particles and forces then unknown have become familiar, like old friends. Whole realms of inner space have opened up to us, and outer space has revealed a new dark universe to explore. Across the globe, collaborators have joined forces to extend our reach into the unknown beyond anything we can achieve separately. 

Times have changed, but Wilson would still recognize his laboratory. As it did then, Fermilab holds the same deep commitment to the science of the universe that brought it into being 50 years ago. 

by Judith Jackson at June 15, 2017 09:15 PM

June 14, 2017

ZapperZ - Physics and Physicists

The Physics of Texting And Driving
First of all, let me be clear on this. I hate, HATE, HATE drivers who play with their mobile devices while they drive. I don't care if it is texting (stupid!) or just talking on their phones. These drivers are often driving erratically, unpredictably, and often do not use turn signals, etc. They are distracted drivers, and their stupid acts put my life and my safety in jeopardy. My nasty thought on this is that I wish Darwin would eliminate them from the gene pool.

There! I feel better now. Coming back to the more sedate and sensible topic related to physics, Rhett Allain has a nice, short article on why physics will rationally explain to you why texting and driving is not safe, and why texting and driving ANNOYS OTHER PEOPLE!

OK, so my calmness didn't last very long.

The physics is elementary enough that even a high-school physics student can understand it. And now, I am going back to my happy place.

Zz.

by ZapperZ (noreply@blogger.com) at June 14, 2017 03:43 PM

Lubos Motl - string vacua and pheno

LHC null results haven't changed the qualitative big picture in HEP
Two years ago, I wrote about relaxions, a new way to create awkward theories – that could be said "to be rather natural according to some criteria but not all criteria" – which are capable of "explaining" the existence of large numbers in physics.

One starts with a large, but only logarithmically large, number of fields and assigns somewhat exotic values of charges under a \(U(1)\) gauge group – and yes, it has to be an Abelian group which may be considered a damning flaw of the whole paradigm. Consequently, one finds that there exists a scalar boson with a periodic range of values whose periodicity is "exponentially large" in the number of elementary fields we have used.
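To see how a logarithmically large number of fields can yield an exponentially large period, consider one common schematic presentation of the clockwork idea (my own generic illustration, with placeholder scales \(\Lambda\), \(f\) and an integer \(q\geq 2\), not the precise Lagrangian of the papers under discussion): take \(N+1\) axion-like fields \(\theta_j\simeq\theta_j+2\pi f\) with the potential

\[ V = \sum_{j=0}^{N-1} \Lambda^4\left[1-\cos\!\left(\frac{\theta_j - q\,\theta_{j+1}}{f}\right)\right]. \]

The \(N\) cosines leave exactly one direction massless, and that light field only returns to an equivalent point after an excursion of order \(2\pi q^N f\), so its effective periodicity grows like \(q^N\) – exponentially in the number of fields – while the number of fields only grows logarithmically with the desired large ratio.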

Backreaction discusses those papers and some of their recent followups under the new brand, Clockworks. It is an OK idea – which is probably irrelevant in physics but has some chance to be relevant – but it can in no way be classified as the "#1 idea" of a decade or something big like that.

Instead of discussing the somewhat modest and vague idea again, let me express my disbelief about a general statement made at Backreaction.




Sabine Hossenfelder wrote:
But there hasn’t been a big, new trend since the LHC falsified everything that was falsifiable. It’s like particle physics stepped over the edge of a cliff but hasn’t looked down and now just walks on nothing.
Wow. So the "LHC has falsified everything that was falsifiable", we hear. So particle physics as we knew it is probably dead by now. What is this sentence supposed to say?




Aside from several suggestive hints of new physics, e.g. those challenging lepton universality, the results from the LHC have been compatible with the Standard Model – the complete one including the Higgs boson, which, since the 2012 discovery, should be considered a part of the "old physics".

So it means that the Standard Model's range of validity or usability is at least a bit greater than what an average particle physicist believed a few years ago. The range of validity may keep on growing. But this growth may also abruptly stop at any moment in the future if and when new physics is discovered.

What has the LHC falsified? It has only falsified theories that could have been falsified by now, i.e. theories and models that made a firm prediction that effects deviating from the Standard Model would be experimentally detectable within 35 inverse femtobarns of proton-proton collision data at the center-of-mass energy of \(13\TeV\). That's the right clarification of the sentence.

The LHC has only falsified what it could have falsified by the finite amount of data at the limited energy available as of today.

In other words, once you clarify the demagogically simplified sentence by Hossenfelder, it turns into an absolutely vacuous and useless tautology and you realize that there is nothing particularly "fatal" about the current moment in the history of physics. But the very point of her and similar demagogues' oversimplification is that they want to deceive their readers, and because many of their readers are very gullible, they get deceived all the time.

Her statement is just like the proposition: Life has shown that all people who are still around and who could have died are immortal.

Is that true? Well, it isn't. The fact that you're alive now doesn't mean that you are immortal, i.e. that you can never die. Exactly in the same way, the fact that the Standard Model hasn't been falsified or proven incomplete as of now doesn't mean that it is the precise theory of Nature forever. And on the contrary, if a theory or paradigm doesn't imply sharp, discoverable predictions of new effects in the dataset that the LHC has collected, it just doesn't mean that it is unfalsifiable.

The null results of the LHC – which are compatible with the Standard Model – bring us some information about Nature, whether you like it or not. At the same moment, the information is less exciting and "contains fewer bytes" because it may be summarized by saying that things continue to be the same as what was established previously.

Before the LHC started, physicists could already impose lower bounds on the masses of hypothetical new particles, upper bounds on the interaction strength of hypothetical new interactions, and so on. What the LHC has done was to change the numbers so that the statements that can be made now are stronger than those that were possible 10 years ago. The lower bounds on the new particles' masses have grown. The upper bounds on the new forces' interaction strength have been lowered. But all these things are quantitative adjustments. You don't really need to learn intellectually demanding new things – relative to what you should have learned before 2008 – if you want to understand the observations done at the LHC. You may call it a relief, you may call it sad, you may call it a nightmare scenario, but regardless of the emotional labels, it's a fact. And it's a new fact, and scientists are ultimately supposed to find new facts, whatever they turn out to be.

So where does it leave particle physics? Particular models, especially the ambitious theories predicting new phenomena that should have been around the corner, i.e. discovered soon, have been killed. More careful models that predict new phenomena for the LHC but don't try to claim that they must be around the corner could have been either disfavored, or somewhat favored, relatively speaking, because they were disfavored less than their bolder competitors.

But again, the impact of the null results from the LHC on the theories, models, and ideas may be summarized as a bunch of technicalities, elimination of all the models that were too specific, too ambitious, and too impatient. And some adjustments of numbers – relative degrees of faith that physicists assign to one theory, model, or idea or another.

Because after the Higgs discovery, the LHC hasn't made any qualitative, game-changing discovery of new physics, the game hasn't qualitatively changed! Again, this sentence is a tautology but the likes of Hossenfelder do everything they can to convince you that this self-evidently correct tautology is actually incorrect. They want you to abandon rational thinking altogether.

So the LHC hasn't changed the qualitative landscape. And the situation is therefore analogous to what it was in 2008. If you want to do research of high-energy physics, you may still pick similar strategies as you could in 2008. You may be biased towards "easily falsifiable" theories and models. Needless to say, you still have a chance to succeed once the LHC collects a bigger amount of data – or once a hypothetical bigger future collider collects some data. \(13\TeV\) isn't the maximum possible energy of particles in Nature and 35/fb isn't the maximum allowed integrated luminosity in Nature.

But you may also be proven wrong, just like the authors of all the "easily falsifiable" models before the recent LHC run.

Or you may abandon this fanatical desire to become famous as soon as possible and study the possible ideas that could be relevant for the future deepening of our understanding of Nature, and do so regardless of whether the ideas may be experimentally tested in the near future or not, regardless of the very high energies that they may demand. In other words, you may do the kind of research that is more typical for the hep-th archive than the hep-ph archive.

As I have pointed out several times in the past, the new bunch of null results should really imply that the relative importance of the hep-th thinking and strategy – not caring whether the ideas may be tested in the near future – should have grown simply because those who have followed the other, "around the corner" philosophy of many hep-ph researchers, have (repeatedly) burned themselves and they should learn a lesson.

But aside from this lesson, it's clear that some people will keep on investigating profoundly theoretical ideas, perhaps those about Planck scale physics, while others will focus their minds on possible effects that may show up at the next collider. Both groups of questions are self-evidently scientific and no amount of fog by the anti-physics demagogues can ever change this elementary fact.

And it's clear that a big part of the truly theoretical ideas and advances, e.g. those in string theory, that were celebrated by the theorists in 2007 are still celebrated in 2017. You know, their being detached from experiments in the foreseeable future has been claimed to be a disadvantage. But now, after the "too ambitious, around the corner" models have been killed, what used to be called a disadvantage has been proven to be a clear advantage instead.

So the research in string theory, quantum gravity, and some top-down parts of quantum field theory is continuing just like it did before the LHC null results. Their impact on these fields has been minimal because they primarily use deep mathematics and long chains of argumentation applied to some experimental data that have been known for a rather long time, anyway.

Hossenfelder's demagogic claim that "everything has been killed that can be killed" isn't just some isolated wrong statement that has no consequences. She wants this piece of rubbish to have consequences, so the following paragraph of hers said:
The best candidate for a new trend that I saw in the past years is the “clockwork mechanism,” ...
She basically says "all of theoretical particle physics has been killed and now almost everyone has to study the clockwork mechanism". Needless to say, only complete idiots – a set that may include herself – would trust such a statement. Almost no important qualitative idea in particle physics has been "killed" – despite the $10 billion, the LHC is just a way too weak player for such tasks – and the clockwork mechanism remains a tiny portion of the research activity in particle physics.

In the quote above, you may see the word "trend". Fake researchers such as Ms Hossenfelder are not trying to do actual science. They are trying to be trendy – and rely on those intruders in the broader community who think it is enough to be trendy. They are trying to find things that they consider "trends" and pretend to be up-to-date. But one can't really do serious physics (or any other science) in this way. Physics is working on the 10th floor of a skyscraper and you had better be sure that there are 9 floors beneath you if you want to do something sensible on this floor. Even though she accuses others, Hossenfelder is a great example of someone who wants to hover in the air above a cliff. Sorry but it won't work. If there's air beneath you, gravity will make sure that you fall to the asphalt and break your skull.

The actual progress doesn't appear in some isolated new ideas that "throw away all the previous physics" and look for a trend of the recent years or months. The actual progress mostly takes place in the "minitrends" that are only visible in the subdisciplines of particle physics, to those who are good enough to know certain fundamental things that were established in the past, and not just the recent "trends".

(OK, at this point, I can't avoid thinking about a recent question by Kashyap who also seems willing to listen about "trends" and he thinks that he may be ignorant about all the previous scientific results. Sorry, you can't.)

But let me return to the first offensively demagogic sentence again:
But there hasn’t been a big, new trend since the LHC falsified everything that was falsifiable.
It also implicitly says that physicists have been lame and failing recently and they haven't even been able to invent some big fashion in recent times... Which recent times is she talking about? The timing is deliberately undefined in the sentence. But no big "trend" has occurred "since the moment when the LHC has falsified everything that could be falsified".

I have already explained that it's pure rubbish to claim that the "LHC has falsified everything that could be falsified". But what timing could have been meant by that sentence? Well, when did the LHC falsify "all" the theories? For example, I paid my $100 bet to Adam Falkowski – which could have been a $10,000 bounty for me if Nature had been more generous – some two or three months ago, in late March. That's when reasonable people could have concluded that the LHC had found no new physics after a big enough package of new searches (my bet was about 30/fb and it was the amount agreed upon a decade ago – we were not "extending" the deadline in any way).

It means that Hossenfelder is basically whining that the particle physicists haven't made a big trend-setting revolution between March 2017 and June 2017. What a catastrophe that they're not doing a trend-setting revolution every three months!

The type of garbage that is spread by the likes of Ms Hossenfelder – in explicit sentences but also in between the lines – is just stunning and it is very clear that she and others want to turn the readers into big haters of all theoretical physics, the kind of nasty uncultural folks who would throw all of the amazing theoretical physics that people have found into a trash can without a glimpse of a rational justification.

I am watching "Genius" on National Geographic, I liked the episodes so far, and there's a lot to say about them. One minor topic that I always found amazing is how identical the demagogy promoted by the likes of Hossenfelder and Woit is to the Nazi propaganda against Einstein and his physics, especially the dumb rants spread by Philipp Lenard, an experimenter heading the Aryan Physics movement who simply had no clue about theoretical physics and who licked the Führer's rectum in the most recent episode. He was full of the very same Woitian šit about Einstein's ideas not being testable and all this crap.

Many people have often told me that "it shouldn't be relevant" that e.g. Peter Woit's grandfather was one of the key politicians in his Baltic country when the murder of 40,000 Jews was organized in Riga in 1941. Woit can't be held responsible for the acts done by his grandfather in 1941, can he? No, he can't, but this hypothetical but non-existent acausal influence isn't the only possible source of problems resulting from Woit's ancestry.

You know, there existed actual influences that are real and didn't contradict causality. Woit's grandfather educated his kids in a certain way and those educated their kids in a certain way. Peter Woit is one of those that belong to the latter group and when it comes to theoretical physics, he thinks and talks exactly like a brain-dead Nazi. And this is a problem for me, whether you kindly "allow" me to realize this problem or not. History can't be changed but we may make sure that some of its worst mistakes aren't done again in the future. And to do so, it's damn too important to emphasize e.g. the Nazi roots of Woit's campaign against theoretical physics.

by Luboš Motl (noreply@blogger.com) at June 14, 2017 01:47 PM

June 13, 2017

Emily Lakdawalla - The Planetary Society Blog

Curiosity update, sols 1675-1725: Traverse to Vera Rubin Ridge
Curiosity has had a busy eight weeks, driving south from the Bagnold Dunes toward Vera Rubin Ridge. The path has steepened and the rover is now rapidly climbing upward with every meter traveled. It's been a productive time for arm instruments, but the drill is still not working.

June 13, 2017 06:19 PM

The n-Category Cafe

Eliminating Binders for Easier Operational Semantics

guest post by Mike Stay

Last year, my son's math teacher introduced the kids to the concept of a function. One of the major points of confusion in the class was the idea that it didn't matter whether he wrote \(f(x) = x^2\) or \(f(y) = y^2\), but it did matter whether he wrote \(f(x) = x y\) or \(f(x) = x z\). The function declaration binds some of the variables appearing on the right to the ones appearing on the left; the ones that don't appear on the left are "free". In a few years when he takes calculus, my son will learn about the quantifiers "for all" and "there exists" in the "epsilon-delta" definition of limit; quantifiers also bind variables in expressions.

Reasoning formally about languages with binders is hard:

“The problem of representing and reasoning about inductively-defined structures with binders is central to the PoplMark challenges. Representing binders has been recognized as crucial by the theorem proving community, and many different solutions to this problem have been proposed. In our (still limited) experience, none emerge as clear winners.” – Aydemir, Bohannon, Fairbairn, Foster, Pierce, Sewell, Vytiniotis, Washburn, Weirich, and Zdancewic, Mechanized metatheory for the masses: The PoplMark challenge. (2005)

The paper quoted above reviews around a dozen approaches in section 2.3, and takes pains to point out that their review is incomplete. However, recently Andrew Pitts and his students (particularly Murdoch Gabbay) developed the notion of a nominal set (introductory slides, book) that has largely solved this problem. Bengtson and Parrow use a nominal datatype package in Isabelle/HOL to formalize \(\pi\)-calculus, and Clouston defined nominal Lawvere theories. It's my impression that pretty much everyone now agrees that using nominal sets to formally model binders is the way forward.

Sometimes, though, it’s useful to look backwards; old techniques can lead to new ways of looking at a problem. The earliest approach to the problem of formally modeling bound variables was to eliminate them.

Abstraction elimination

\(\lambda\)-calculus is named for the binder in that language. The language itself is very simple. We start with an infinite set of variables \(x, y, z, \ldots\) and then define the terms to be

\[ t, t' ::= \left\{ \begin{array}{lr}x & variable \\ \lambda x.t & abstraction \\ (t\; t') & application\end{array}\right. \]

Schönfinkel’s idea was roughly to “sweep the binders under the rug”. We’ll allow binders, but only in the definition of a “combinator”, one of a finite set of predefined terms. We don’t allow binders in any expression using the combinators themselves; the binders will all be hidden “underneath” the name of the combinator.

To eliminate all the binders in a term, we start at the "lowest level", a term of the form \(t = \lambda x.u\), where \(u\) only has variables, combinators, and applications; no abstractions! Then we'll try to find a way of rewriting \(t\) using combinators instead. Since the lambda term \(\lambda x.(v\; x)\) behaves the same as \(v\) itself, if we can find some \(v\) such that \(v x = u\), then the job's done.

Suppose \(u = x\). What term can we apply to \(x\) and get \(x\) itself back? The identity function, obviously, so our first combinator is

\[ I = \lambda x.x. \]

What if \(u\) doesn't contain \(x\) at all? We need a "konstant" term \(K_u\) such that \((K_u\; x)\) just discards \(x\) and returns \(u\). At the same time, we don't want to have to specify a different combinator for each \(u\) that doesn't contain \(x\), so we define our second combinator \(K\) to first read in which \(u\) to return, then read in \(x\), throw it away, and return \(u\):

\[ K = \lambda u x.u. \]

Finally, suppose \(u\) is an application \(u = (w\; w')\). The variable \(x\) might occur in \(w\) or in \(w'\) or both. Note that if we recurse on each of the terms in the application, we'll have terms \(r, r'\) such that \((r\; x) = w\) and \((r'\; x) = w'\), so we can write \(u = ((r\; x)\; (r'\; x))\). This suggests our final combinator should read in \(r, r'\), and \(x\) and "share" \(x\) with them:

\[ S = \lambda r r'x.((r\; x)\; (r'\; x)). \]

If we look at the types of the terms \(S, K,\) and \(I\), we find something interesting:

\[ \begin{array}{rl}I:&Z \to Z\\K:&Z \to Y \to Z\\S:&(Z \to Y\to X)\to (Z\to Y) \to Z \to X\end{array} \]

The types correspond exactly to the axiom schemata for positive implicational logic!

The \(S K I\)-calculus is a lot easier to model formally than the \(\lambda\)-calculus; we can use a tiny Gph-enriched Lawvere theory (see the appendix) to capture the operational semantics and then derive the denotational semantics from it.
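Here is a minimal sketch of this translation in Haskell (my own illustration of the algorithm described above, not code from any of the papers); the abstract function implements the three cases just discussed, and translate applies them from the innermost binders outward:

    -- A tiny illustration of abstraction elimination ("bracket abstraction"):
    -- turning lambda terms into S, K, I combinator terms.

    data Lam  = LVar String | LAbs String Lam | LApp Lam Lam
      deriving Show

    data Term = V String | S | K | I | App Term Term
      deriving Show

    -- Eliminate binders from the inside out.
    translate :: Lam -> Term
    translate (LVar x)   = V x
    translate (LApp t u) = App (translate t) (translate u)
    translate (LAbs x b) = abstract x (translate b)

    -- abstract x u builds a binder-free term v such that (v x) behaves like u.
    abstract :: String -> Term -> Term
    abstract x (V y) | x == y = I                                 -- u is x itself
    abstract x (App t u)      = App (App S (abstract x t)) (abstract x u)
                                                                  -- u is an application
    abstract _ u              = App K u                           -- x does not occur in u

    -- Example: \x. f x  becomes  S (K f) I, and  (S (K f) I) x  reduces to  f x.
    example :: Term
    example = translate (LAbs "x" (LApp (LVar "f") (LVar "x")))

This naive version always takes the \(S\) branch for applications even when \(x\) does not occur in them; the result is still correct, just larger than it needs to be.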

\(\pi\)-Calculus

As computers got smaller and telephony became cheaper, people started to connect them to each other. ARPANET went live in 1969, grew dramatically over the next twenty years, and eventually gave rise to the internet. ARPANET was decommissioned in 1990; that same year, Robin Milner published a paper introducing the \(\pi\)-calculus. He designed it to model the new way computation was occurring in practice: instead of serially on a single computer, concurrently on many machines via the exchange of messages. Instead of applying one term to another, as in the lambda and SK calculi, terms (now called "processes") get juxtaposed and then exchange messages. Also, whereas in \(\lambda\)-calculus a variable can be replaced by an entire term, in the \(\pi\)-calculus names can only be replaced by other names.

Here's a "good parts version" of the asynchronous \(\pi\)-calculus; see the appendix for a full description.

\[ \begin{array}{rll}P, Q ::= \quad& 0 & do\; nothing \\ \;|\; & P|Q & concurrency \\ \;|\; &for (x \leftarrow y) P & input \\ \;|\; & x!y & output \\ \;|\; & \nu x.P & new\; name \\ \;|\; & !P& replication \end{array} \]

\[ x!z \;|\; for(y \leftarrow x).P \Rightarrow P\{z/y\}\quad communication\; rule \]

There are six term constructors for \(\pi\)-calculus instead of the three in \(\lambda\)-calculus. Concurrency is represented with a vertical bar |, which forms a commutative monoid with 0 as the monoidal unit. There are two binders, one for input and one for introducing a new name into scope. The rewrite rule is reminiscent of a trace in a compact closed category: \(x\) appears in an input term and an output term on the left-hand side, while on the right-hand side \(x\) doesn't appear at all. I'll explore that relationship in another post.

The syntax we use for the input prefix is not Milner's original syntax. Instead, we borrowed from Scala, where the same syntax is syntactic sugar for \(M(\lambda x.P)(y)\) for some monad \(M\) that describes a collection. We read it as "for a message \(x\) drawn from the set of messages sent on \(y\), do \(P\) with it".
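To see the rule executed, here is a small Haskell sketch of my own (not from the paper), with deliberately naive substitution that ignores capture, so it is only safe when bound names are kept distinct from the names being substituted in:

    type Name = String

    data Proc
      = Nil                    -- 0
      | Par Proc Proc          -- P | Q
      | Input Name Name Proc   -- for (y <- x) P : first name is bound in P
      | Output Name Name       -- x!z : send name z on channel x
      | New Name Proc          -- nu x. P
      | Repl Proc              -- !P
      deriving (Show, Eq)

    -- Substitute name z for free occurrences of name y.
    subst :: Name -> Name -> Proc -> Proc
    subst z y = go
      where
        sub n = if n == y then z else n
        go Nil            = Nil
        go (Par p q)      = Par (go p) (go q)
        go (Output c a)   = Output (sub c) (sub a)
        go (Repl p)       = Repl (go p)
        go (Input b c p)
          | b == y        = Input b (sub c) p       -- y is shadowed inside p
          | otherwise     = Input b (sub c) (go p)
        go (New b p)
          | b == y        = New b p
          | otherwise     = New b (go p)

    -- Flatten nested parallel compositions into a "soup" of processes.
    soup :: Proc -> [Proc]
    soup (Par p q) = soup p ++ soup q
    soup Nil       = []
    soup p         = [p]

    -- One use of   x!z | for(y <- x).P  =>  P{z/y}   at top level, if possible.
    step :: Proc -> Maybe Proc
    step = try [] . soup
      where
        try _    []                  = Nothing
        try seen (Output x z : rest) =
          case break (isInputOn x) (seen ++ rest) of
            (before, Input y _ p : after) ->
              Just (foldr Par Nil (before ++ [subst z y p] ++ after))
            _ -> try (seen ++ [Output x z]) rest
        try seen (p : rest)          = try (seen ++ [p]) rest

        isInputOn x (Input _ c _) = c == x
        isInputOn _ _             = False

For example, step (Par (Output "x" "z") (Input "y" "x" (Output "y" "w"))) returns Just (Par (Output "z" "w") Nil) – the communication rule firing once.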

For many years after Milner proposed the \(\pi\)-calculus, researchers tried to come up with a way to eliminate the bound names from a \(\pi\)-calculus term. Yoshida was able to give an algorithm for eliminating the bound names that come from input prefixes, but not those from new names. Like the abstraction elimination algorithm above, Yoshida's algorithm produced a set of concurrent combinators. There's one combinator \(m(a, x)\) for sending a message \(x\) on the name \(a\), and several others that interact with \(m\)'s to move the computation forward (see the appendix for details):

\[ \begin{array}{rlr} d(a,b,c) | m(a,x) & \Rightarrow m(b,x) | m(c,x) & (fanout)\\ k(a) | m(a,x) & \Rightarrow 0 & (drop) \\ fw(a,b) | m(a,x) & \Rightarrow m(b,x) & (forward) \\ br(a,b) | m(a,x) & \Rightarrow fw(b,x) & (branch\; right)\\ bl(a,b) | m(a,x) & \Rightarrow fw(x,b) & (branch\; left) \\ s(a,b,c) | m(a,x) & \Rightarrow fw(b,c) & (synchronize)\end{array} \]

Unlike the \(S K I\) combinators, no one has shown a clear connection between some notion of type for these combinators and a system of logic.
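Rendering these rules as code makes the selling point visible: since the combinators contain no binders, a reduction step never substitutes under anything; it just consumes two atoms and emits fresh ones. (This is my own sketch, not Yoshida's code; Atom, fire and step are names I made up.)

    import Data.List (inits, tails)

    type Name = String

    data Atom
      = M  Name Name        -- m(a,x) : message x travelling on a
      | D  Name Name Name   -- d(a,b,c)
      | K  Name             -- k(a)
      | Fw Name Name        -- fw(a,b)
      | Br Name Name        -- br(a,b)
      | Bl Name Name        -- bl(a,b)
      | S  Name Name Name   -- s(a,b,c)
      deriving (Show, Eq)

    -- The six rewrite rules from the table above.
    fire :: Atom -> Atom -> Maybe [Atom]
    fire (D a b c) (M a' x) | a == a' = Just [M b x, M c x]   -- fanout
    fire (K a)     (M a' _) | a == a' = Just []               -- drop
    fire (Fw a b)  (M a' x) | a == a' = Just [M b x]          -- forward
    fire (Br a b)  (M a' x) | a == a' = Just [Fw b x]         -- branch right
    fire (Bl a b)  (M a' x) | a == a' = Just [Fw x b]         -- branch left
    fire (S a b c) (M a' _) | a == a' = Just [Fw b c]         -- synchronize
    fire _         _                  = Nothing

    -- One step on a parallel soup of atoms: pick any combinator and any
    -- message sharing its first name, and replace the pair by the rule's output.
    step :: [Atom] -> Maybe [Atom]
    step atoms =
      case [ (out, rest2)
           | (c,   rest1) <- picks atoms
           , (msg, rest2) <- picks rest1
           , Just out     <- [fire c msg] ] of
        ((out, rest) : _) -> Just (out ++ rest)
        []                -> Nothing
      where
        picks xs = [ (x, before ++ after)
                   | (before, x:after) <- zip (inits xs) (tails xs) ]

For instance, step [M "a" "x", D "a" "b" "c"] evaluates to Just [M "b" "x", M "c" "x"], i.e. the fanout rule.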

Reflection

Several years ago, Greg Meredith had the idea to combine the set of \(\pi\)-calculus names and the set of \(\pi\)-calculus terms recursively. In a paper with Radestock he introduced a "quoting" operator I'll write & that turns processes into names and a "dereference" operator I'll write * that turns names into processes. They also made the calculus higher-order: they send processes on a name and receive the quoted process on the other side.

\[ x!\langle Q\rangle \;|\; for(y \leftarrow x).P \Rightarrow P\{\&Q/y\}\quad communication\; rule \]

The smallest process is 0, so the smallest name is &0. The next smallest processes are

\[ \&0\langle 0 \rangle \quad and \quad for(\&0 \leftarrow \&0) \; 0, \]

which in turn can be quoted to produce more names, and so on.

Together, these two changes let them demonstrate a \(\nu\)-elimination transformation from the \(\pi\)-calculus to their reflective higher-order (RHO) calculus: since a process never contains its own name (\(\&P\) cannot occur in \(P\)), one can use that fact to generate names that are fresh with respect to a process.
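The mutual recursion is easy to write down as a pair of datatypes; here is a tiny sketch of mine (Quote and Deref are just my names for the & and * operators):

    -- Names and processes defined in terms of each other: a name is a
    -- quoted process, so no separate alphabet of atomic names is needed.

    newtype Name = Quote Proc            -- &P
      deriving (Show, Eq)

    data Proc
      = Nil                              -- 0
      | Par Proc Proc                    -- P | Q
      | Input Name Name Proc             -- for (y <- x) P : first name bound in P
      | Output Name Proc                 -- x!<Q> : send a (quoted) process on x
      | Deref Name                       -- *x : unquote and run
      deriving (Show, Eq)

    -- The smallest name and the two smallest interesting processes from the text.
    zeroName :: Name
    zeroName = Quote Nil                 -- &0

    out0, in0 :: Proc
    out0 = Output zeroName Nil           -- &0!<0>
    in0  = Input zeroName zeroName Nil   -- for(&0 <- &0) 0

    -- Since &P never occurs inside P, quoting a process always yields a name
    -- fresh with respect to that process: the observation behind nu-elimination.
    freshFor :: Proc -> Name
    freshFor = Quote

Nothing in this sketch enforces the invariant that \(\&P\) does not occur in \(P\); in the actual calculus that property is what makes freshFor legitimate.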

Another benefit of reflection is the concept of “namespaces”: since the names have internal structure, we can ask whether they satisfy propositions. This lets Meredith and Radestock define a spatial-behavioral type system like that of Caires, but more powerful. Greg demonstrated that the type system is strong enough to prevent attacks on smart contracts like the $50M one last year that caused the fork in Ethereum.

In our most recent pair of papers, Greg and I consider two different reflective higher-order concurrent combinator calculi where we eliminate all the bound variables. In the first paper, we present a reflective higher-order version of Yoshida's combinators. In the second, we note that we can think of each of the term constructors as combinators and apply them to each other. Then we can use \(S K I\) combinators to eliminate binders from input prefixes and Greg's reflection idea to eliminate those from \(\nu\). Both calculi can be expressed concisely using Gph-enriched Lawvere theories.

In future work, we intend to present a type system for the resulting combinators and show how the types give axiom schemata.

Appendix

Gph-theory of \(S K I\)-calculus

  • objects
    • \(T\)
  • morphisms
    • \(S, K, I\colon 1 \to T\)
    • \((-\; -)\colon T \times T \to T\)
  • equations
    • none
  • edges
    • \(\sigma\colon (((S\; x)\; y)\; z) \Rightarrow ((x\; z)\; (y\; z))\)
    • \(\kappa\colon ((K\; x)\; z) \Rightarrow x\)
    • \(\iota\colon (I\; z) \Rightarrow z\)

\(\pi\)-calculus

  • grammar
    • \(\begin{array}{rll}P, Q ::= \quad & 0 & do\; nothing \\ \;|\; & P|Q & concurrency \\ \;|\; &for (x \leftarrow y).P & input \\ \;|\; & x!y & output \\ \;|\; & \nu x.P & new\; name \\ \;|\; & !P& replication \end{array}\)
  • structural equivalence
    • free names
      • \(FN(0) = \{\}\)
      • \(FN(P|Q) = FN(P) \cup FN(Q)\)
      • \(FN(for (x \leftarrow y).P) = \{y\} \cup FN(P) - \{x\}\)
      • \(FN(x!y) = \{x,y\}\)
      • \(FN(\nu x.P) = FN(P) - \{x\}\)
      • \(FN(!P) = FN(P)\)
    • 0 and | form a commutative monoid
      • \(P|0 \equiv P\)
      • \(P|Q \equiv Q|P\)
      • \((P|Q)|R \equiv P|(Q|R)\)
    • replication
      • \(!P \equiv P\;|\;!P\)
    • \(\alpha\)-equivalence
      • \(for (x \leftarrow y).P \equiv for (z \leftarrow y).P\{z/x\}\) where \(z\) is not free in \(P\)
    • new names
      • \(\nu x.\nu x.P \equiv \nu x.P\)
      • \(\nu x.\nu y.P \equiv \nu y.\nu x.P\)
      • \(\nu x.(P|Q) \equiv (\nu x.P)|Q\) when \(x\) is not free in \(Q\)
  • rewrite rules
    • \(x!z \;|\; for(y \leftarrow x).P \Rightarrow P\{z/y\}\)
    • if \(P \Rightarrow P'\), then \(P\;|\; Q \Rightarrow P'\;|\; Q\)
    • if \(P \Rightarrow P'\), then \(\nu x.P \Rightarrow \nu x.P'\)

To be clear, the following is not an allowed reduction, because it occurs under an input prefix:

\[ for(v \leftarrow u).(x!z \;|\; for(y \leftarrow x).P) \nRightarrow for(v \leftarrow u).P\{z/y\} \]

Yoshida’s combinators

  • grammar
    • atom: \(Q ::= 0 \;|\; m(a,b) \;|\; d(a,b,c) \;|\; k(a) \;|\; fw(a,b) \;|\; br(a,b) \;|\; bl(a,b) \;|\; s(a,b,c)\)
    • process: \(P ::= Q \;|\; \nu a.P \;|\; P|P \;|\; !P\)
  • structural congruence
    • free names
      • <semantics>FN(0)={}<annotation encoding="application/x-tex">FN(0) = \{\}</annotation></semantics>
      • <semantics>FN(k(a))={a}<annotation encoding="application/x-tex">FN(k(a)) = \{a\}</annotation></semantics>
      • <semantics>FN(m(a,b))=FN(fw(a,b))=FN(br(a,b))=FN(bl(a,b))={a,b}<annotation encoding="application/x-tex">FN(m(a,b)) = FN(fw(a,b)) = FN(br(a,b)) = FN(bl(a,b)) = \{a, b\}</annotation></semantics>
      • <semantics>FN(d(a,b,c))=FN(s(a,b,c))={a,b,c}<annotation encoding="application/x-tex">FN(d(a,b,c)) = FN(s(a,b,c)) = \{a,b,c\}</annotation></semantics>
      • <semantics>FN(νa.P)=FN(P){x}<annotation encoding="application/x-tex">FN(\nu a.P) = FN(P) - \{x\}</annotation></semantics>
      • <semantics>FN(P|Q)=FN(P)FN(Q)<annotation encoding="application/x-tex">FN(P|Q) = FN(P)\cup FN(Q)</annotation></semantics>
      • <semantics>FN(!P)=FN(P)<annotation encoding="application/x-tex">FN(!P) = FN(P)</annotation></semantics>
    • 0 and | form a commutative monoid
      • <semantics>P|0P<annotation encoding="application/x-tex">P|0 \equiv P</annotation></semantics>
      • <semantics>P|QQ|P<annotation encoding="application/x-tex">P|Q \equiv Q|P</annotation></semantics>
      • <semantics>(P|Q)|RP|(Q|R)<annotation encoding="application/x-tex">(P|Q)|R \equiv P|(Q|R)</annotation></semantics>
    • replication
      • <semantics>!PP|!P<annotation encoding="application/x-tex">!P \equiv P\;|\;!P</annotation></semantics>
    • new names
      • <semantics>νx.νx.Pνx.P<annotation encoding="application/x-tex">\nu x.\nu x.P \equiv \nu x.P</annotation></semantics>
      • <semantics>νx.νy.Pνy.νx.P<annotation encoding="application/x-tex">\nu x.\nu y.P \equiv \nu y.\nu x.P</annotation></semantics>
      • <semantics>νx.(P|Q)(νx.P)|Q<annotation encoding="application/x-tex">\nu x.(P|Q) \equiv (\nu x.P)|Q</annotation></semantics> when <semantics>x<annotation encoding="application/x-tex">x</annotation></semantics> is not free in <semantics>Q<annotation encoding="application/x-tex">Q</annotation></semantics>
  • rewrite rules
    • <semantics>d(a,b,c)|m(a,x)m(b,x)|m(c,x)<annotation encoding="application/x-tex">d(a,b,c) | m(a,x) \Rightarrow m(b,x) | m(c,x)</annotation></semantics> (fanout)
    • <semantics>k(a)|m(a,x)0<annotation encoding="application/x-tex">k(a) | m(a,x) \Rightarrow 0</annotation></semantics> (drop)
    • <semantics>fw(a,b)|m(a,x)m(b,x)<annotation encoding="application/x-tex">fw(a,b) | m(a,x) \Rightarrow m(b,x)</annotation></semantics> (forward)
    • <semantics>br(a,b)|m(a,x)fw(b,x)<annotation encoding="application/x-tex">br(a,b) | m(a,x) \Rightarrow fw(b,x)</annotation></semantics> (branch right)
    • <semantics>bl(a,b)|m(a,x)fw(x,b)<annotation encoding="application/x-tex">bl(a,b) | m(a,x) \Rightarrow fw(x,b)</annotation></semantics> (branch left)
    • <semantics>s(a,b,c)|m(a,x)fw(b,c)<annotation encoding="application/x-tex">s(a,b,c) | m(a,x) \Rightarrow fw(b,c)</annotation></semantics> (synchronize)
    • <semantics>!PP|!P<annotation encoding="application/x-tex">!P \Rightarrow P|!P</annotation></semantics>
    • if <semantics>PP<annotation encoding="application/x-tex">P \Rightarrow P'</annotation></semantics> then for any term context <semantics>C<annotation encoding="application/x-tex">C</annotation></semantics>, <semantics>C[P]C[P]<annotation encoding="application/x-tex">C[P] \Rightarrow C[P']</annotation></semantics>.
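
The same can be done for the combinator rules just listed. Here is a minimal, ad hoc Python sketch that treats a parallel composition as a list of atom tuples and fires one redex at a time; replication and the $\nu$-binder are omitted for brevity, and the encoding is mine rather than Yoshida's.

```python
# Toy rewriter for the combinator rules above (illustration only).
# A "soup" is a list of atoms; each atom is a tuple such as ('m', a, x) or ('d', a, b, c).

RULES = {
    'd':  lambda d, m: [('m', d[2], m[2]), ('m', d[3], m[2])],  # fanout
    'k':  lambda k, m: [],                                       # drop
    'fw': lambda f, m: [('m', f[2], m[2])],                      # forward
    'br': lambda b, m: [('fw', b[2], m[2])],                     # branch right
    'bl': lambda b, m: [('fw', m[2], b[2])],                     # branch left
    's':  lambda s, m: [('fw', s[2], s[3])],                     # synchronize
}

def step(soup):
    """Fire one redex: an agent atom and a message m(a, x) on the same name a."""
    for i, agent in enumerate(soup):
        rule = RULES.get(agent[0])
        if rule is None:
            continue
        for j, msg in enumerate(soup):
            if i != j and msg[0] == 'm' and msg[1] == agent[1]:
                rest = [atom for k, atom in enumerate(soup) if k not in (i, j)]
                return rest + rule(agent, msg)
    return None

# d(a,b,c) | m(a,x)  =>  m(b,x) | m(c,x), then k(b) | m(b,x) => 0 leaves m(c,x)
soup = [('d', 'a', 'b', 'c'), ('m', 'a', 'x'), ('k', 'b')]
while soup is not None:
    print(soup)
    soup = step(soup)
```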

Gph-theory of RHO Yoshida combinators

  • objects
    • $N, T$
  • morphisms
    • $0\colon 1 \to T$
    • $k\colon N \to T$
    • $m\colon N \times T \to T$
    • $fw, br, bl\colon N^2 \to T$
    • $d, s\colon N^3 \to T$
    • $|\colon T^2 \to T$
    • $*\colon N \to T$
    • $\&\colon T \to N$
  • equations
    • 0 and | form a commutative monoid
      • $P|0 = P$
      • $P|Q = Q|P$
      • $(P|Q)|R = P|(Q|R)$
    • $*\&P = P$
  • edges
    • $\delta\colon d(a,b,c) \;|\; m(a,P) \Rightarrow m(b,P) \;|\; m(c,P)$
    • $\kappa\colon k(a) \;|\; m(a,P) \Rightarrow 0$
    • $\phi\colon fw(a,b) \;|\; m(a,P) \Rightarrow m(b,P)$
    • $\rho\colon br(a,b) \;|\; m(a,P) \Rightarrow fw(b,\&P)$
    • $\lambda\colon bl(a,b) \;|\; m(a,P) \Rightarrow fw(\&P,b)$
    • $\sigma\colon s(a,b,c) \;|\; m(a,P) \Rightarrow fw(b,c)$

Gph-theory of RHO <semantics>π<annotation encoding="application/x-tex">\pi</annotation></semantics>-like combinators

  • objects
    • $T$
  • morphisms
    • $C, 0, |, for, !, \&, *, S, K, I\colon 1 \to T$
    • $(-\; -)\colon T \times T \to T$
  • equations
    • 0 and | form a commutative monoid
      • $((|\; 0)\; P) = P$
      • $((|\; ((|\; P)\; Q))\; R) = ((|\; P)\; ((|\; Q)\; R))$
      • $((|\; P)\; Q) = ((|\; Q)\; P)$
  • edges
    • $\sigma\colon (((S\; P)\; Q)\; R) \Rightarrow ((P\; R)\; (Q\; R))$
    • $\kappa\colon ((K\; P)\; Q) \Rightarrow P$
    • $\iota\colon (I\; P) \Rightarrow P$
    • $\xi\colon ((|\; C)\; ((|\; ((for\; (\&\; P))\; Q))\; ((!\; (\&\; P))\; R))) \Rightarrow ((|\; C)\; (Q\; (\&\; R)))$
    • $\epsilon\colon ((|\; C)\; (*\; (\&\; P))) \Rightarrow ((|\; C)\; P)$

by john (baez@math.ucr.edu) at June 13, 2017 03:16 PM

Symmetrybreaking - Fermilab/SLAC

Fermilab en español (ES)

El laboratorio de física de partículas establece una conexión en español.

Header:Fermilab en español

Marylu Reyes y su hija de 12 años viven a unas pocas millas al norte de Fermi National Accelerator Laboratory, en West Chicago, Illinois, una ciudad de 27,000 habitantes con una población significativa de hispanohablantes.

Cuando la cliente de Reyes, una empleada de Fermilab, le contó que el gran laboratorio del vecindario estaba organizando un evento totalmente en español, Reyes y su hija apuntaron la fecha con gran entusiasmo.

Lo que vieron en Pregúntale a un Científico—Ask a Scientist—de Fermilab las cautivó.

“A medida que recorría el laboratorio, era igual que en las películas sobre la NASA: habitaciones grandes, computadoras, todos esos equipos. Sentías como si pudieras formar parte de ello,” cuenta Reyes, quien escuchó exposiciones sobre aceleradores de partículas, materia oscura y neutrinos. “Fue una gran oportunidad poder presenciarlo… ¡en nuestro idioma!”

Pregúntale a un Científico de marzo fue la primera vez que Fermilab ofreció Ask-a-Scientist, uno de sus principales programas de difusión pública del laboratorio, en idioma español. De hecho, fue la cliente de Reyes, Griselda Lopez, quien encabezó el esfuerzo. Asimismo, a través del compromiso cívico del Foro hispano/latino de Fermilab, un grupo de recursos, el exitoso evento, que atrajo a casi un centenar de personas, demostró el gran interés en el trabajo del laboratorio por parte de la comunidad latina circundante.

Pregúntale a un Científico es solo una parte del esfuerzo continuo de Fermilab para llegar a los hispanohablantes.

En la actualidad, Fermilab se encuentra desarrollando materiales de ciencia en idioma español para el salón de clases. Asimismo, ha organizado en dos oportunidades una conferencia bilingüe para una organización local que alienta a estudiantes latinas de la escuela secundaria a cursar estudios relacionados con la ciencia, la tecnología, la ingeniería y las matemáticas (STEM).

“Mientras estaba realizando estas actividades de difusión, me di cuenta de que no se trata solo de ciencia,” dijo Erika Catano Mur, una estudiante de posgrado de la Universidad Estatal de Iowa (Iowa State University) participante en el experimento NOvA sobre neutrinos de Fermilab, y quien ha guiado recorridos en idioma español dentro del laboratorio. “Existe un muro que enfrentan los hispanohablantes del cual uno no siempre es consciente. Ellos afirman: ‘Me dicen que me dirija a este sitio web para llamar a tal persona a fin de obtener más información. Y esa persona, ¿habla español?’ De modo que estamos observando lo que ya hay disponible en español y qué más se necesita.”

Catano Mur aprendió inglés en la escuela en Colombia, su país natal, y habla dicho idioma a diario en el trabajo. Minerba Betancourt, una científica de Fermilab participante en el experimento MINERvA sobre neutrinos, y quien realizó exposiciones en Pregúntale a un Científico, comenzó a hablar inglés de forma regular solo después de venir a los Estados Unidos desde Venezuela para cursar estudios de posgrado. Ella continúa hablando español con su familia.

“Soy la prueba de que se puede hacer ciencia en tu segundo idioma,” afirmó Betancourt.

Catano Mur dice que rara vez hace física en español. Por lo tanto, su primer idioma se convierte en su segundo idioma cuando se trata de física.

“Si estoy conversando con otro hispanohablante en el laboratorio, entonces podemos hacerlo en Spanglish, porque los términos científicos me vienen a la cabeza mucho más rápido en inglés,” afirma.

Al conversar con no científicos, según Betancourt, ninguno de los idiomas es más difícil que el otro. El verdadero desafío de traducción consiste en pasar los términos técnicos específicos a un léxico sencillo.

No eran solo científicos los que interactuaban con los participantes en Pregúntale a un Científico. Personal no técnico también estaba presente allí para mezclarse y responder preguntas.

“Contamos con una vasta comunidad de hispanohablantes en el laboratorio: empleados, estudiantes de posgrado y posdoctorados de instituciones latinoamericanas y estadounidenses,” contó Betancourt. “Cada voluntario aporta algo al maravilloso programa científico en Fermilab.”

Los participantes acudieron de todas partes, no solo de los suburbios aledaños. Betancourt conoció a una familia de Chicago, que vive a 40 millas de distancia, y otra que vive en Argentina que, casualmente, estaba por la zona.

Cuando se trata del laboratorio como un recurso educativo, los habitantes de los alrededores son, por supuesto, los que tienen más ventajas, ya que se encuentran a pasos del lugar.

“Disponemos de una buena comunidad con un gran potencial de estudiantes que podrían ser físicos e ingenieros,” expresa Betancourt. “Esa es una oportunidad que yo no tuve: ir a un laboratorio cercano para observar lo que hacen.”

Es una oportunidad tanto para padres como para hijos de obtener información sobre carreras científicas.

“Los padres están muy involucrados. A veces tienen la idea de que si te adentras en la física, solo podrás ser profesor de secundaria y tendrás que llevar una vida solitaria,” sostiene Catano Mur. “Cualquier información más allá de eso es sorprendente.”

Su objetivo consiste en reducir eso.

“La comunidad hispana tiene aquí una gran oportunidad de involucrarse en la ciencia. Un laboratorio como este no existe en muchas partes del mundo,” afirma Catano Mur. “Un par de conversaciones científicas puede iniciar el proceso.”

Reyes ya va por buen camino. Incluso antes de asistir a Pregúntale a un Científico, ella asumió el papel de pregonera, distribuyendo volantes acerca del evento en supermercados locales, en la escuela secundaria de su hija y en su iglesia. Parece haber funcionado: Reyes vio a varios amigos y conocidos allí.

“Estoy tan feliz de que hayan hecho esto por nosotros. Mi hija dijo: ‘Mamá, esta fue una gran experiencia,’” contó Reyes. “Había oído acerca de Fermilab pero no sabía realmente qué era. Ahora, nos sentimos muy bien recibidos.”

(English version)

by Leah Hesla at June 13, 2017 01:00 PM

Symmetrybreaking - Fermilab/SLAC

Fermilab en español (EN)

The particle physics laboratory makes a Spanish connection.

Header:Fermilab en español

Marylu Reyes and her 12-year-old daughter live just a few miles north of Fermi National Accelerator Laboratory, in West Chicago, Illinois, a town of 27,000 residents with a significant Spanish-speaking population.

When her client, a Fermilab employee, told her the big lab down the street was hosting an event given entirely in Spanish, Reyes and her daughter excitedly marked the date.

What they saw at Fermilab's Pregúntale a un Científico—Ask a Scientist—blew them away.

“When I walked through the lab, it was just like the movies about NASA: big rooms, computers, all that equipment. You felt like you could be a part of it,” says Reyes, who heard presentations on particle accelerators, dark matter and neutrinos. “It was a great opportunity to see it — in our language.”

March’s Pregúntale a un Científico was the first time Fermilab had offered its Ask-a-Scientist, one of the lab’s mainstay public-outreach programs, in Spanish. In fact it was Reyes’ client, Griselda Lopez, who spearheaded the effort. And through the civic engagement of Fermilab’s Hispanic/Latino Forum, a resource group, the successful event, which drew nearly a hundred people, demonstrated the great interest from the surrounding Latino community in the laboratory’s work.   

Pregúntale a un Científico is just one part of Fermilab’s ongoing effort to reach Spanish speakers.

Fermilab is currently developing Spanish-language science materials for the classroom. And it has twice hosted a bilingual conference for a local organization that encourages Latina middle school girls to pursue a STEM education.

“As I was doing these outreach activities, I figured out that it’s not just about science,” said Erika Catano Mur, an Iowa State University graduate student on Fermilab’s NOvA neutrino experiment who has led Spanish-language tours at the lab. “There’s a wall that Spanish-speaking people face that you’re not always aware of. They say, ‘You tell me to go to this website, to call this person to learn more. Do they speak Spanish?’ So we're looking at what’s already out there in Spanish and what more is needed.”

Catano Mur learned English in school in her home country of Colombia, and she speaks English daily at work. Minerba Betancourt, a Fermilab scientist on the MINERvA neutrino experiment who gave presentations at Pregúntale a un Científico, started speaking English regularly only after she came to the United States for graduate school from Venezuela. She continues to speak Spanish with her family.

“I’m proof that you can do science in your second language,” Betancourt says. 

Catano Mur says she rarely does physics in Spanish, since her first language becomes her second language when it comes to physics.

“If I’m talking to another Spanish speaker at the lab, then it can come out in Spanglish, because the science terms come to me much faster in English,” she says.

When talking with nonscientists, Betancourt says, neither language is more difficult than the other. The real translation challenge is moving from jargon into plainspeak. 

It wasn’t just scientists interacting with the attendees at Pregúntale a un Científico. Nontechnical staff were also there to mingle and answer questions.

“We have a rich Spanish-speaking community at the lab—employees, graduate students and postdocs from Latin American and US institutions,” Betancourt says. “Each volunteer contributes something to the wonderful science program at Fermilab.”

The attendees came from all over—not just the surrounding suburbs. Betancourt met one family from Chicago, 40 miles away, and another who lives in Argentina and just happened to be in the area.

When it comes to the lab serving as an educational resource, it is of course nearby residents who have the most to gain, being a stone’s throw away. 

“We have a good community with a great potential for students who could be physicists and engineers,” Betancourt says. “That’s an opportunity I didn’t have — to go to a nearby lab to see what they do.”

It’s as much a chance for the parents as for the children to learn about science careers. 

“The parents are very involved. They sometimes have the idea that if you go into physics, you can be only a high school teacher and have to live a lonely life,” Catano Mur says. “Any information beyond that is surprising.”

Her goal is to make it less so.

“The Hispanic community here has a big opportunity to get involved in science. A lab like this doesn’t exist in many parts of the world,” Catano Mur says. “A couple of science talks can get the process started.”

Reyes is already well on her way. Even before attending Pregúntale a un Científico, she assumed the role of town crier, distributing flyers about the event at local supermarkets, her daughter’s middle school and her church. It seems to have worked: She saw several friends and acquaintances there.

“I’m so happy that they did this for us. My daughter said, ‘Mom, this was a great experience,’” Reyes says. “I had heard about Fermilab, but I didn’t really know what it was. Now, we feel so welcome.”

(Version en español)

by Leah Hesla at June 13, 2017 01:00 PM

John Baez - Azimuth

Information Processing in Chemical Networks (Part 2)

I’m in Luxembourg, and I’ll be blogging a bit about this workshop:

Dynamics, Thermodynamics and Information Processing in Chemical Networks, 13-16 June 2017, Complex Systems and Statistical Mechanics Group, University of Luxembourg. Organized by Massimiliano Esposito and Matteo Polettini.

I’ll do it in the comments!

I explained the idea of this workshop here:

Information processing in chemical networks.

and now you can see the program here.


by John Baez at June 13, 2017 07:00 AM

Clifford V. Johnson - Asymptotia

Cover Fun!

Thanks for all the great compliments that many of you have been sending me about the cover of the book. The final version of the cover is essentially the one you've seen before (it is on the MIT Press website here, and you can currently pre-order at amazon and other booksellers the world over - for example here), but the blue is a bit lighter (some people at the publisher were concerned that the figures were a little too subtle and wanted them much brighter; I did not want them so light that they'd get lost in the lettering...so we compromised). Click the image to see a slightly larger version.

For those of you who want a deeper dive into the background of all this, I thought I'd share the sketches I made back in early April when they asked me to design the cover. (Click for larger view.)

It is always good to explore options, and also to give design options when asked to design something... I was secretly hoping they'd choose my favourite [...] Click to continue reading this post

The post Cover Fun! appeared first on Asymptotia.

by Clifford at June 13, 2017 01:24 AM

June 12, 2017

Symmetrybreaking - Fermilab/SLAC

How to clean inside the LHC

The beam pipes of the LHC need to be so clean, even air molecules count as dirt.

Cutaway image showing the two beam pipes inside the Large Hadron Collider

The Large Hadron Collider is the world’s most powerful accelerator. Inside, beams of particles sprint 17 miles around in opposite directions through a pair of evacuated beam pipes that intersect at collision points surrounded by giant particle detectors.

The inside of the beam pipes needs to be spotless, which is why the LHC is thoroughly cleaned every year before it ramps up its summer operations program.

It’s not dirt or grime that clogs the LHC. Rather, it’s microscopic air molecules.

“The LHC is incredibly cold and under a strong vacuum, but it’s not a perfect vacuum,” says LHC accelerator physicist Giovanni Rumolo. “There’s a tiny number of simple atmospheric gas molecules and even more frozen to the beam pipes’ walls.”

Protons racing around the LHC crash into these floating air molecules, detaching their electrons. The liberated electrons jump after the positively charged protons but quickly crash into the beam pipe walls, depositing heat and liberating even more electrons from the frozen gas molecules there.

This process quickly turns into an avalanche, which weakens the vacuum, heats up the cryogenic system, disrupts the proton beam and dramatically lowers the efficiency and reliability of the LHC.

But the clouds of buzzing electrons inside the beam pipe possess an interesting self-healing feature, Rumolo says.

“When the chamber wall is under intense electron bombardment, the probability of it creating secondary electrons decreases and the avalanche is gradually mitigated,” he says. “Before ramping the LHC up to its full intensity, we run the machine for several days with as many low-energy protons as we can safely manage and intentionally produce electron clouds. The effect is that we have fewer loose electrons during the LHC’s physics runs.”

In other words, accelerator engineers clean the inside of the LHC a little like they would unclog a shower drain. They gradually pump the LHC full of more and more sluggish protons, which act like a scrub brush and knock off the microscopic grime clinging to the inside of the beam pipe. This loose debris is flushed out by the vacuum system. In addition, the bombardment of electrons transforms simple carbon molecules, which are still clinging to the beam pipe’s walls, into an inert and protective coating of graphite.
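
As a purely illustrative toy model (the numbers below are invented and are not LHC parameters), the avalanche-and-conditioning process can be sketched in a few lines of Python: each bunch passage multiplies the electron cloud by a secondary-emission yield, and that yield slowly drops as the wall accumulates electron dose, which is exactly what scrubbing exploits.

```python
import math

# Toy model of electron-cloud build-up and "scrubbing" (illustration only).
seed = 1.0          # electrons seeded per bunch passage by ionized residual gas
yield_max = 1.6     # secondary-emission yield of an unconditioned wall
yield_min = 0.9     # yield after heavy conditioning (below 1: avalanche dies out)
dose_scale = 1e6    # accumulated electron dose over which the wall conditions

electrons, dose = 0.0, 0.0
for bunch in range(2001):
    sey = yield_min + (yield_max - yield_min) * math.exp(-dose / dose_scale)
    electrons = seed + sey * electrons     # avalanche step
    electrons = min(electrons, 1e4)        # crude space-charge saturation cap
    dose += electrons                      # bombardment slowly conditions the wall
    if bunch % 400 == 0:
        print(f"bunch {bunch:4d}: SEY = {sey:.2f}, cloud = {electrons:8.1f}")
```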

Cleaning the beam pipe is such an important job that there is a team of experts responsible for it (officially called the “Scrubbing Team”).

“Scrubbing is essential if we want to operate the LHC at its full potential,” Rumolo says. “It’s challenging, because there is a fine line between thoroughly cleaning the machine and accidentally dumping the beam. When we’re scrubbing, we work around the clock in the CERN Control Center to make sure the accelerator is safe and the scrubbing is working properly.”

by Sarah Charley at June 12, 2017 04:37 PM

Emily Lakdawalla - The Planetary Society Blog

Did a Planetary Society citizen scientist help find one of Earth’s biggest impact craters?
Scientists have found what appears to be a 250-kilometer-wide crater near the Falkland Islands. Is it ground zero for Earth's largest-ever extinction event?

June 12, 2017 11:00 AM

Lubos Motl - string vacua and pheno

Deep-learning the landscape
Two hep-th papers are "conceptual" today. First, Eva Silverstein wrote her report from the December 2015 German meeting with philosophers "Why Trust a Theory",
The dangerous irrelevance of string theory
"Dangerous irrelevance" isn't a special example of "irrelevance" in the colloquial sense. Instead, it's the dependence of physics on the laws that are valid at "higher than probed" energy scales. She discusses some totally standard technical questions that are being settled by the ongoing research of string cosmology – and explains why this research is an unquestionable example of the scientific method even when it comes to the seemingly most abstract questions such as the "existence of the landscape".



Yang-Hui He (London, Oxford, Tianjin) wrote something fascinating,
Deep-Learning the Landscape
He uses Wolfram Mathematica (the word "He" is correct even if the author is female! But in this case, he is not) to turn a computer into a hard-working string theorist. I've been calling for such advances for years but he actually did it. Using the machine learning functions, He can train His PC to find new patterns (methods to classify) in the "landscape data" as well as predict some properties of a compactification.




The objects that He is classifying include various Calabi-Yau topologies – complete intersections, both 3-folds and 4-folds, hypersurfaces in projective spaces, Calabi-Yaus from reflexive polytopes, vector bundles in the heterotic string, quivers producing gauge theories, and a few others.

The questions he wants to classify or ask about the objects are how many generations there are, whether the Hodge numbers are low or high, and so on.




The databases he is applying the methodology to contain from thousands to ten billion objects. In principle, you could imagine bigger numbers. Some people have said that it's a hopeless task to get familiar with the properties of a large number of compactifications, and that's why "it's not science". Except that these mammals are full of šit. He just places the problem in Mathematica, lets it work for one minute, and then it's enough to read the results. For example, for 96% of some new Calabi-Yaus, the computer correctly says whether the compactification has a high number of complex parameters. Or after training on 40% (about 3,000) of the compactifications, the computer is capable of correctly guessing the exact value of \(h^{2,1}\) for 80% of the remaining ones!
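
He works inside Wolfram Mathematica; purely as an illustration of the same workflow, here is a scikit-learn analogue in Python. The data below is random placeholder (so the printed accuracy is meaningless); in a real run the features would be, say, flattened CICY configuration matrices and the labels the known Hodge numbers, with 40% of the set used for training as in the experiment quoted above.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Placeholder stand-ins for flattened, padded "configuration matrices"
X = rng.integers(0, 5, size=(7890, 12 * 15))
# Placeholder stand-ins for the corresponding Hodge numbers h^{2,1}
y = rng.integers(0, 100, size=7890)

# Train on 40% of the compactifications, predict the rest
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.4, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("exact-match accuracy on held-out compactifications:",
      model.score(X_test, y_test))
```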

It's plausible that sometime in the future, perhaps in 2030, perhaps tonight, He will run a command like: Please, Wolfram Mathematica,
TellMeWhichPaper ["from", "the", "arXiv", {"contains", "the right", "compactification", "for a theory of everything"}]
The computer will work for three minutes and then it spits out the right theory of everything. Problem solved. ;-) Maybe I am oversimplifying just a little bit. But a computer could be courageous and not be afraid of some – straightforward to see – patterns in a list of thousands or billions of objects.

Even if you describe the compactifications incompletely – if you only talk about some classes that are further divided according to fluxes etc. – it's plausible that machine-learning software sees the classes of compactifications that are by far more promising than others, or something like that, so the information describing the correct compactification could be found by identifying the correct answers to one question after another.

He already has a guess where the phenomenologically relevant compactification is located: it is a desirable residence within a barren vista. If you use Google Street View on top of Wolfram Mathematica, you may find out that it is here in Las Vegas. ;-) By the way, I think that He is misinterpreting what the swampland means when he writes "hints have emerged that the vastness of the landscape might well be mostly a swamp-land". My understanding is that Vafa's swampland is by definition disjoint with the string landscape.

by Luboš Motl (noreply@blogger.com) at June 12, 2017 04:48 AM

June 10, 2017

ZapperZ - Physics and Physicists

What Non-Scientists Can Learn From Physics
Chad Orzel has a follow-up to his earlier article on what every physics undergrad should know. This time, he tackles what he thinks non-scientists can learn from physics.

You may read the linked article to get everything, but I have a different track in mind. Sticking to students rather than just generic non-scientists, I'd rather focus on the value of a physics education for scientists and non-scientists alike. After all, many non-physicists and non-scientists are "forced" to take physics classes at various levels in their undergraduate education. How can we convince these students of the importance of these classes, and what can they learn and acquire from them that will be useful not only in their education, but also in their careers and lives?

I of course tell them about the relevance of physics to whatever area they major in. But even non-scientists, such as arts majors, can acquire important skills from a physics class. With that in mind, I'd like to refer to the NACE website. They often poll potential employers about what they look for in the new graduates they are considering hiring. In particular, employers were asked what types of skills they tend to look for in a candidate.

The result can be found here.

I have extracted the info in this picture:

I often show this to my students and highlight all the skills that we employ and hone in a physics class. I tell them that these are what they can acquire from the class, and to be conscious of them when we tackle a physics concept or problem, or when they are working on an experiment. In fact, I often try to get them to think about how they would approach a problem before trying to solve it, with the intention of emphasizing analytical skills.

I think as physics teachers and instructors, we often neglect to show students the non-physics benefits of a physics class. A student, whether he/she is a physics, engineering, other science, or STEM major, can ALWAYS gain an advantage from having the skills I highlighted above. This is why I often emphasize that the skills that can be acquired from a physics class transcend the narrow boundary of a physics topic and can be valuable in many other areas. These skills are not subject-specific.

I often notice irrational and puzzling arguments on TV, especially from the world of politics, and I wonder how many people could benefit from the ability to clearly and analytically dissect an issue or an argument. So heck yes, non-scientists can learn A LOT from physics, and from a physics class.

Zz.

by ZapperZ (noreply@blogger.com) at June 10, 2017 03:39 PM

The n-Category Cafe

Enriched Lawvere Theories for Operational Semantics

guest post by Mike Stay

Programs are an expression of programmer intent. We want the computer to do something for us, so we need to tell it what to do. We make mistakes, though, so we want to be able to check somehow that the program will do what we want. The idea of semantics for a programming language is that we assign some meaning to programs in such a way that we can reason about the behavior of a program. There are two main approaches to this: denotational semantics and operational semantics. I’ll discuss both below, but the post will focus for the most part on operational semantics.

There’s a long history of using 2-categories and related structures for term rewriting and operational semantics, but Greg Meredith and I are particularly fond of an approach using multisorted Lawvere theories enriched over the category of reflexive directed graphs, which we call Gph. Such enriched Lawvere theories are equal in power to, for instance, Sassone and Sobociński’s reactive systems, but in my opinion they have a cleaner categorical presentation. We wrote a paper on them:

Here I’ll just sketch the basic ideas.

Denotational Semantics

Denotational semantics works really well for functional programming languages. The actual process of computation is largely ignored in denotational semantics; it doesn’t matter how you compute the function, just what the function is. John Baez’s seminar eleven years ago explored Lambek and Scott’s approach to the denotational semantics of lambda calculus, and there was extensive discussion on this blog. Lambek and Scott constructed a cartesian closed category of types and $\alpha$-$\beta$-$\eta$-equivalence classes of terms with one free variable, and then assigned meaning to the types and terms with a cartesian closed functor into Set.

Denotational semantics gets a lot harder once we move away from functional programming languages. Modern programs run on multiple computers at the same time and each computer has several cores. The computers are connected by networks that can mix up the order in which messages are received. A program may run perfectly by itself, but deadlock when you run it in parallel with another copy of itself. The notion of “composition” begins to change, too: we run programs in parallel with each other and let them interact by passing messages back and forth, not simply by feeding the output of one function into the input of another. All of this makes it hard to think of such programs as functions.

Operational Semantics

Operational semantics is the other end of the spectrum, concerned with the rules by which the state of a computer changes. Whereas denotational semantics is inspired by Church and the lambda calculus, operational semantics is inspired by Turing and his machines. All of computational complexity lives here (e.g. $P \stackrel{?}{=} NP$).

To talk about the operational semantics of a programming language, there are five things we need to define.

First, we have to describe the layout of the state of the computer. For each kind of data that goes into a description of the state, we have a sort. If we’re using a programming language like lambda calculus, we have a sort for variables and a sort for terms, and the term is the entire state of the computer. If we’re using a Turing machine, there are more parts: the tape, the state transition table, the current state, and the position of the read/write head on the tape. If we’re using a modern language like JavaScript, the state is very complex: there are a couple of stacks, the heap, the lexical environment, the this binding, and more.

Second, we have to build up the state itself using term constructors. For example, in lambda calculus, we start with variables and use abstraction and application to build up a specific term.

Third, we say what rearrangements of the state we’re going to ignore; this is called structural congruence. In lambda calculus, we say that two terms are the same if they only differ in the choice of bound variables. In pi calculus, it doesn’t matter in what order we list the processes that are all running at the same time.

Fourth, we give reduction rules describing how the state is allowed to change. In lambda calculus, the state only changes via $\beta$-reduction, substituting the argument of a function for the bound variable. In a Turing machine, each state leads to one of five others (change the bit to 0 or 1, then move left or right; or halt). In pi calculus, there may be more than one transition possible out of a particular state: if a process is listening on a channel and there are two messages, then either message may be processed first. Computational complexity theory is all about how many steps it takes to compute a result, so we do not have equations between sequences of rewrites.

Finally, the reduction rules themselves may only apply in certain contexts; for example, in all modern programming languages based on the lambda calculus, no reductions happen under an abstraction. That is, even if a term $t$ reduces to $t'$, it is never the case that $\lambda x.t$ reduces to $\lambda x.t'$. The resulting normal form is called “weak head normal form”.

Here’s an example from Boudol’s paper “The $\pi$-calculus in direct style.” There are two sorts: $x$ or $z$ for variables and $L$ or $N$ for terms. The first line, labeled “syntax,” defines four term constructors. There are equations for structural congruence, and there are two reduction rules followed by the contexts in which the rules apply:

Category theory

We’d like to formalize this using category theory. For our first attempt, we capture almost all of this information in a multisorted Gph-enriched Lawvere theory: we have a generating object for each sort, a generating morphism for each term constructor, an equation between morphisms encoding structural congruence, and an edge for each rewrite.

We interpret the theory in Gph. Sorts map to graphs, term constructors to graph homomorphisms, equations to equations, and rewrites map to things I call “graph transformations”, which are the obvious generalization of a natural transformation to the category of graphs: a graph transformation between two graph homomorphisms $\alpha\colon F \Rightarrow G$ assigns to each vertex $v$ an edge $\alpha_v\colon Fv \to Gv$. There’s nothing about a commuting square in the definition because it doesn’t even parse: we can’t compose edges to get a new edge.

This initial approach doesn’t quite work because of the way reduction contexts are usually presented. The reduction rules assume that we have a “global view” of the term being reduced, but the category theory insists on a “local view”. By “local” I mean that we can always whisker a reduction with a term constructor: if $K$ is an endomorphism on a graph, then given any edge $e\colon v \to v'$, there’s necessarily an edge $Ke\colon Kv \to Kv'.$ These two requirements conflict: to model reduction to weak head normal form, if we have a reduction $t \to t',$ we don’t want a reduction $\lambda x.t \to \lambda x.t'.$

One solution is to introduce “context constructors”, unary morphisms for marking reduction contexts. These contexts become part of the rewrite rules and the structural congruence; for example, taking $C$ to be the context constructor for weak head normal form, we add a structural congruence rule that says that to reduce an application of one term to another, we have to reduce the term on the left first:

$$C(T\; U) \equiv (C T\; U).$$

We also modify the reduction rule to involve the context constructors. Here’s $\beta$ reduction when reducing to weak head normal form:

$$\beta\colon (C(\lambda x.T)\; U) \Rightarrow C T\{U/x\}.$$

Now $\beta$ reduction can’t happen just anywhere; it can only happen in the presence of the “catalyst” $C$.

With context constructors, we can capture all of the information about operational semantics using a multisorted Gph-enriched Lawvere theory: we have a generating object for each sort, a generating morphism for each term constructor and for each context constructor, equations between morphisms encoding structural congruence and context propagation, and an edge for each rewrite in its appropriate context.

Connecting to Lambek/Scott

The SKI combinator calculus is a formal system invented by Schönfinkel and Curry. It allows for universal computation, and expressions in this calculus can easily be translated into the lambda calculus, but it’s simpler because it doesn’t include variables. The SK calculus is a fragment of the SKI calculus that is still computationally universal.

We can recover the Lambek/Scott-style denotational semantics of the SK calculus (see the appendix) by taking the Gph-theory, modding out by the edges, and taking the monoid of endomorphisms on the generating object. The monoid is the cartesian closed category with only the “untyped” type. Using Melliès and Zeilberger’s notion of a functor as a type refinement system, we “-oidify” the monoid into a category of types and equivalence classes of terms.

However, modding out by edges utterly destroys the semantics of concurrent languages, and composition of endomorphisms doesn’t line up particularly well with composition of processes, so neither of those operations are desirable in general. That doesn’t stop us from considering Gph-enriched functors as type refinement systems, though.

Let $G$ be the free model of a theory on the empty graph. Our plan for future work is to show how different notions of a collection of edges of $G$ give rise to different kinds of logics. For example, if we take subsets of the edges of $G$, we get subgraphs of $G$, which form a Heyting algebra. On the other hand, if we consider sets of lists of composable edges in $G$, we get quantale semantics for linear logic. Specific collections will be the types in the type system, and proofs should be graph homomorphisms mapped over the collection. Edges will feature in proof normalization.

At the end, we should have a system where given a formal semantics for a language, we algorithmically derive a type system tailored to the language. We should also get a nice Curry-Howard style approach to operational semantics that even denotational semantics people won’t turn up their noses at!

Appendix

Gph

For our purposes, a graph is a set $E$ of edges and a set $V$ of vertices together with three functions $s\colon E \to V$ for the source of the edge, $t\colon E \to V$ for the target, and $a\colon V \to E$ such that $s\circ a = 1_V$ and $t \circ a = 1_V$—that is, $a$ assigns a chosen self-loop to each vertex. A graph homomorphism maps vertices to vertices and edges to edges such that sources, targets and chosen self-loops are preserved. Gph is the category of graphs and graph homomorphisms. Gph has finite products: the terminal graph is the graph with one vertex and one loop, while the product of two graphs $(E, V, s, t, a) \times (E', V', s', t', a')$ is $(E \times E', V \times V', s \times s', t \times t', a \times a').$

Gph is a topos; the subobject classifier has two vertices $t, f$ and five edges: the two self-loops, an edge from $t$ to $f,$ an edge from $f$ to $t,$ and an extra self-loop on $t$. Any edge in a subgraph maps to the chosen self-loop on $t,$ while an edge not in the subgraph maps to one of the other four edges depending on whether the source and target vertex are included or not.
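
As a throwaway illustration (an ad hoc encoding, not from the paper), the reflexive graphs, homomorphisms and binary products just described can be spelled out in a few lines of Python:

```python
from dataclasses import dataclass

@dataclass
class Graph:
    edges: set
    vertices: set
    s: dict   # source: edge -> vertex
    t: dict   # target: edge -> vertex
    a: dict   # chosen self-loop: vertex -> edge

    def is_valid(self):
        # s(a(v)) = v and t(a(v)) = v for every vertex v
        return all(self.s[self.a[v]] == v and self.t[self.a[v]] == v
                   for v in self.vertices)

def is_homomorphism(f_v, f_e, G, H):
    """Does (f_v, f_e) preserve sources, targets and chosen self-loops?"""
    return (all(H.s[f_e[e]] == f_v[G.s[e]] and H.t[f_e[e]] == f_v[G.t[e]]
                for e in G.edges)
            and all(f_e[G.a[v]] == H.a[f_v[v]] for v in G.vertices))

def product(G, H):
    """Binary product: pairs of edges over pairs of vertices, componentwise maps."""
    edges = {(e, f) for e in G.edges for f in H.edges}
    vertices = {(v, w) for v in G.vertices for w in H.vertices}
    return Graph(edges, vertices,
                 {(e, f): (G.s[e], H.s[f]) for (e, f) in edges},
                 {(e, f): (G.t[e], H.t[f]) for (e, f) in edges},
                 {(v, w): (G.a[v], H.a[w]) for (v, w) in vertices})

# The terminal graph: one vertex and its chosen self-loop.
one = Graph({'loop'}, {'*'}, {'loop': '*'}, {'loop': '*'}, {'*': 'loop'})
print(one.is_valid(), product(one, one).is_valid())
# The identity on `one` is a homomorphism, of course:
print(is_homomorphism({'*': '*'}, {'loop': 'loop'}, one, one))
```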

A Gph-enriched category consists of

  • a set of objects;
  • for each pair of objects $x, y,$ a graph $\hom(x,y);$
  • for each triple of objects $x, y, z,$ a composition graph homomorphism $\circ\colon \hom(y, z) \times \hom(x, y) \to \hom(x, z);$ and
  • for each object $x,$ a vertex of $\hom(x, x),$ the identity on $x,$

such that composition is associative, and composition and the identity obey the unit laws. A Gph-enriched category has finite products if the underlying category does.

Any category is trivially Gph-enrichable by treating the elements of the hom sets as vertices and adjoining a self loop to each vertex. The category Gph is nontrivially Gph-enriched: Gph is a topos, and therefore cartesian closed, and therefore enriched over itself. Given two graph homomorphisms $F, F'\colon (E, V, s, t, a) \to (E', V', s', t', a'),$ a graph transformation assigns to each vertex $v$ in $V$ an edge $e'$ in $E'$ such that $s'(e') = F(v)$ and $t'(e') = F'(v).$ Given any two graphs $G$ and $G',$ there is an exponential graph $G'^G$ whose vertices are graph homomorphisms between them and whose edges are graph transformations. There is a natural isomorphism between the graphs $C^{A\times B}$ and $(C^B)^A.$
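
The graph-transformation condition itself is equally easy to spell out in the same ad hoc style: each vertex of the source graph must be sent to an edge of the target graph running from its image under $F$ to its image under $F'$, and nothing more is required.

```python
# Standalone sketch (ad hoc encoding): the target graph H given by its source
# and target maps, and a check of the graph-transformation condition.
H_source = {'e_ab': 'a', 'loop_a': 'a', 'loop_b': 'b'}
H_target = {'e_ab': 'b', 'loop_a': 'a', 'loop_b': 'b'}

def is_graph_transformation(alpha, F_vertices, Fp_vertices):
    """alpha: vertex of G -> edge of H; F_vertices, Fp_vertices: vertex maps of F, F'."""
    return all(H_source[alpha[v]] == F_vertices[v] and
               H_target[alpha[v]] == Fp_vertices[v]
               for v in alpha)

# G has a single vertex 'v'; F sends it to 'a', F' sends it to 'b'.
print(is_graph_transformation({'v': 'e_ab'}, {'v': 'a'}, {'v': 'b'}))  # True
```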

A Gph-enriched functor between two Gph-enriched categories $C, D$ is a functor between the underlying categories such that the graph structure on each hom set is preserved, i.e. the functions between hom sets are graph homomorphisms between the hom graphs.

Let $S$ be a finite set, FinSet be a skeleton of the category of finite sets and functions between them, and $FinSet/S$ be the category of functions into $S$ and commuting triangles. A multisorted Gph-enriched Lawvere theory, hereafter Gph-theory, is a Gph-enriched category with finite products Th equipped with a finite set $S$ of sorts and a Gph-enriched functor $\theta\colon FinSet^{op}/S \to Th$ that preserves products strictly. Any Gph-theory has an underlying multisorted Lawvere theory given by forgetting the edges of each hom graph.

A model of a Gph-theory Th is a Gph-enriched functor from Th to Gph that preserves products up to natural isomorphism. A homomorphism of models is a braided Gph-enriched natural transformation between the functors. Let FPGphCat be the 2-category of small Gph-enriched categories with finite products, product-preserving Gph-functors, and braided Gph-natural transformations. The forgetful functor $U\colon FPGphCat[Th, Gph] \to Gph$ that picks out the underlying graph of a model has a left adjoint that picks out the free model on a graph.

Gph-enriched categories are part of a spectrum of 2-category-like structures. A strict 2-category is a category enriched over Cat with its usual product. Sesquicategories are categories enriched over Cat with the “funny” tensor product; a sesquicategory can be thought of as a generalized 2-category where the interchange law does not always hold. A Gph-enriched category can be thought of as a generalized sesquicategory where 2-morphisms (now edges) cannot always be composed. Any strict 2-category has an underlying sesquicategory, and any sesquicategory has an underlying Gph-enriched category; these forgetful functors have left adjoints.

Some examples

The SK calculus

Here’s a presentation of the Gph-theory for the SK calculus:

  • objects
    • $T$
  • morphisms
    • $S\colon 1 \to T$
    • $K\colon 1 \to T$
    • $(-\; -)\colon T \times T \to T$
  • equations
    • none
  • edges
    • $\sigma\colon (((S\; x)\; y)\; z) \Rightarrow ((x\; z)\; (y\; z))$
    • $\kappa\colon ((K\; x)\; z) \Rightarrow x$

The free model of this theory on the empty graph has a vertex for every term in the SK calculus and an edge for every reduction.
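
For concreteness, here is an ad hoc Python sketch (my encoding, not part of the theory's presentation) that enumerates exactly these edges: all one-step $\sigma$ and $\kappa$ reductions of an SK term, fired anywhere inside the term, since rewrites whisker through the application constructor.

```python
# A term is 'S', 'K', or a pair (f, x) for the application (f x).

def reductions(term):
    """All terms reachable from `term` by firing sigma or kappa once, anywhere."""
    out = []
    if isinstance(term, tuple):
        f, x = term
        # kappa: ((K a) b) => a
        if isinstance(f, tuple) and f[0] == 'K':
            out.append(f[1])
        # sigma: (((S a) b) c) => ((a c) (b c))
        if (isinstance(f, tuple) and isinstance(f[0], tuple)
                and f[0][0] == 'S'):
            a, b, c = f[0][1], f[1], x
            out.append(((a, c), (b, c)))
        # rewrites whisker through the application constructor:
        out += [(f2, x) for f2 in reductions(f)]
        out += [(f, x2) for x2 in reductions(x)]
    return out

# ((S K) K) K  =>  ((K K) (K K))  =>  K, so SKK acts as the identity
skk_k = ((('S', 'K'), 'K'), 'K')
step1 = reductions(skk_k)
print(step1)                 # [(('K', 'K'), ('K', 'K'))]
print(reductions(step1[0]))  # ['K']
```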

The SK calculus with the weak head normal form reduction strategy

Here’s the theory above modified to account for weak head normal form:

  • objects
    • $T$
  • morphisms
    • $S\colon 1 \to T$
    • $K\colon 1 \to T$
    • $(-\; -)\colon T \times T \to T$
    • $R\colon T \to T$
  • equations
    • $R(x\; y) = (Rx\; y)$
  • edges
    • $\sigma\colon (((RS\; x)\; y)\; z) \Rightarrow ((Rx\; z)\; (y\; z))$
    • $\kappa\colon ((RK\; x)\; z) \Rightarrow Rx$

If $M$ is an $SK$ term with no uses of $R$ and $M'$ is its weak head normal form, then $RM$ reduces to $RM'.$

Once we have $R$, we can think about other things to do with it. The rewrites treat the morphism $R$ as a linear resource. If we treat $R$ as “fuel” rather than “catalyst”—that is, if we modify $\sigma$ and $\kappa$ so that $R$ only appears on the left-hand side—then the reduction of $R^n M$ counts how many steps the computation takes; this is similar to Ethereum’s notion of “gasoline.”
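
A small sketch of the fuel reading, in the same ad hoc encoding: rather than carrying explicit $R$ markers through the term, it models $R^n$ as an integer budget that weak-head reduction spends one unit at a time, so the number of steps is bounded by $n$, as the gasoline analogy suggests.

```python
def head_step(term):
    """One weak-head reduction step of an SK term, or None if the head is not a redex."""
    args, h = [], term
    while isinstance(h, tuple):          # peel the application spine
        args.insert(0, h[1])
        h = h[0]
    if h == 'K' and len(args) >= 2:      # kappa at the head
        new, rest = args[0], args[2:]
    elif h == 'S' and len(args) >= 3:    # sigma at the head
        a, b, c = args[0], args[1], args[2]
        new, rest = ((a, c), (b, c)), args[3:]
    else:
        return None
    for r in rest:                       # reapply the remaining arguments
        new = (new, r)
    return new

def run_with_fuel(term, fuel):
    """Reduce toward weak head normal form until the R tokens run out."""
    steps = 0
    while fuel > 0:
        nxt = head_step(term)
        if nxt is None:
            break
        term, fuel, steps = nxt, fuel - 1, steps + 1
    return term, steps

# ((S K) K) K needs two steps; with only one unit of fuel it stops halfway.
print(run_with_fuel(((('S', 'K'), 'K'), 'K'), fuel=5))   # ('K', 2)
print(run_with_fuel(((('S', 'K'), 'K'), 'K'), fuel=1))   # ((('K','K'),('K','K')), 1)
```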

by john (baez@math.ucr.edu) at June 10, 2017 06:38 AM

June 09, 2017

ZapperZ - Physics and Physicists

Host Interrupts Female Physicist Too Much, Audience Member Intervened
Hey, good for her!

The moderator of this panel interrupted physicist Veronika Hubeny of UC-Davis so much that audience member Marilee Talkington (appropriate name) got so frustrated that she intervened.

While watching a panel titled “Pondering the Imponderable: The Biggest Questions of Cosmology,” Marilee Talkington noticed that the moderator wasn’t giving physicist Veronika Hubeny, a professor at UC Davis and the only female on the panel, her fair share of speaking time.

So when the moderator, New Yorker contributor Jim Holt, finally asked Hubeny a question about her research in string theory and quantum gravity, then immediately began speaking over her to explain it himself, Talkington was furious.

Fed up with the continuous mansplaining, Talkington interrupted Holt by yelling loudly, “Let her speak, please!” The crowd applauded the request. 

You can read the rest of the story here.

Certainly, while it is awfully annoying, based on what Dr. Hubeny described, she didn't think it was blatant sexism. Rather, she thought that the host was just overly enthusiastic. But you may judge for yourself whether the host gave the only female member of the panel a fair chance to speak.



But yeah, good for Ms. Talkington for intervening.

Zz.

by ZapperZ (noreply@blogger.com) at June 09, 2017 07:55 PM

Robert Helling - atdotde

Relativistic transformation of temperature
Apparently, there is a long history of controversy going back to Einstein and Planck about the proper way to deal with temperature relativistically. And I admit, I don't know what exactly the modern ("correct") point of view is. So I would like to ask your opinion about a puzzle we came up with during yesterday's after-colloquium dinner with Erik Verlinde:

Imagine a long rail of a railroad track. It is uniformly heated to a temperature T and is in thermodynamic equilibrium (if you like a mathematical language: it is in a KMS state). On this railroad track travels Einstein's relativistic train at velocity v. From the perspective of the conductor, the track in front of the train is approaching the train with velocity v, so one might expect that the temperature T appears blue shifted while behind the train, the track is moving away with v and so the temperature appears red-shifted. 

Following this line of thought, one would conclude that the conductor thinks the rail has different temperatures in different places and thus is out of equilibrium.  

On the other hand, the question of equilibrium should be independent of the observer. So, is the assumption of the Doppler shift wrong? 

A few remarks: If you are worried that Doppler shifts should apply to radiation then you are free to assume that both in front and in the back, there are black bodies in thermal contact with the rail and thus exhibiting a photon gas at the same temperature as the rail.

You could probably also make the case for the temperature transforming like the time component of a four-vector (since it is essentially an energy). Then the transformed temperature would be independent of the sign of v. You could for example argue for this by assuming the temperature is so high that in your black body photon gas you also create electron-positron pairs, which would be heavier due to their relativistic speed relative to the train and would thus require more energy (and thus temperature) for their creation.
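Just to make the two competing guesses explicit (a rough sketch, writing $\beta=v/c$ and $\gamma=1/\sqrt{1-\beta^2}$): the Doppler argument would assign different temperatures to the rail ahead of and behind the train,
$$T_{front/back} = T\,\sqrt{\frac{1\pm\beta}{1\mp\beta}},$$
while the energy-like, four-vector argument would give the sign-independent $T' = \gamma T$.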

A final remark is about an operational definition of temperature at relativistic speeds: It might be difficult to bring a relativistic thermometer into equilibrium with a system if there is a large relative velocity (when we define temperature as the criterion for two systems in contact to be in equilibrium). Or to operate a heat engine between the front part of the rail and the back while moving along at relativistic speed and then argue about the efficiency (and define the temperature that way).

Update one day later:
Thanks for all your comments. We also had some further discussions here and I would like to share my conclusions:

1) It probably boils down to what exactly you mean when you say "temperature". Of course, you want that this, at least in familiar situations, agrees with what thermometers of one type or another measure. In the original text I had hinted at two possible definitions that I learned about from a very interesting paper by Buchholz and Solveen discussing the Unruh effect and what would actually be observed there: Either you define temperature as the property that characterises equilibrium states of systems, such that there is no heat exchange when you bring into contact two systems of the same temperature. This is for example close to what a mercury thermometer measures. Alternatively, you operate a perfect heat engine between two reservoirs and define your temperatures via
$$\eta = \frac{T_h - T_c}{T_h}.$$
This is for example hinted at in the Feynman lectures on physics.

One of the commentators suggested using the ratio of eigenvalues of the energy momentum tensor as definition of temperature. Even though this might give the usual thing for a perfect fluid I am not really convinced that this generalises in the right way.

2) I would rather define the temperature as the parameter in the Gibbs (or rather KMS) state (it should only exist in equilibrium, anyway). So if your state is described by density matrix $\rho$, and it can be written as
$$\rho = \frac{e^{-\beta H}}{tr(e^{-\beta H})}$$
then $1/\beta$ is the temperature. Obviously, this requires the a priori knowledge of what the Hamiltonian is.
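(For completeness, the KMS condition that generalises this to infinite systems says, roughly, that for the equilibrium state $\omega$ and time evolution $\alpha_t$
$$\omega\big(A\,\alpha_{t+i\beta}(B)\big) = \omega\big(\alpha_t(B)\,A\big)$$
for all observables $A$, $B$, after analytic continuation in $t$; for a finite system this reduces to the Gibbs formula above.)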

For such states, under mild assumptions, you can prove nice things: Energy-entropy inequalities ("minimisation of free energy"), stability, return to equilibrium and most important here: passivity, i.e. the fact you cannot extract mechanical work out of this state in a cyclic process.

3) I do not agree that it is out of the question to have a thermometer with a relative velocity in thermal equilibrium with a heat bath at rest. You could for example imagine a mirror fixed next to the track and in thermal equilibrium with the track. A second mirror is glued to the train (and again in thermal equilibrium, this time with a thermometer). Between the mirrors is a photon gas (black body) that you could imagine equilibrating with the mirrors on both ends. The question is if that is the case.

4) Maybe rails and trains are a bit too non-spherical cows, so let's better look at an infinitely extended free quantum gas (bosons or fermions, your pick). You put it in a thermal state at rest, i.e. up to normalisation, its density matrix is given by
$$\rho = e^{-\beta P^0}.$$
Here $P^0$ is the Poincaré generator of time translations.

Now, the question above can be rephrased as: Is there a $\beta'$ such that also
$$\rho = e^{-\beta' (\cosh\alpha P^0 + \sinh \alpha P^1)}?$$
And to the question formulated this way, the answer is pretty clearly "No". A thermal state singles out  a rest frame and that's it. It is not thermal in the moving frame and thus there is no temperature.

It's also pretty easy to see this state is not passive (in the above sense): You could operate a windmill in the slipstream of particles coming more likely from the front than the back. So in particular, this state is not KMS (this argument I learned from Sven Bachmann).

5) Another question would be about gravitational redshift: Let's take some curved space-time and for simplicity assume it has no horizons (for example, let the far field be Schwarzschild but in the center, far outside the Schwarzschild radius, you smooth it out, like the space-time created by the sun). Make it static, so it contains a timelike Killing vector (otherwise there is no hope for a thermal state). Now prepare a scalar field in the thermal state with temperature T. Couple to it a harmonic oscillator via
$$ H_{int}(r) = a^\dagger a + \phi(t, r) (a^\dagger + a).$$
You could now compute a "local temperature" by computing the probability that the harmonic oscillator is in the first excited state. Then, how does this depend on $r$?

by Robert Helling (noreply@blogger.com) at June 09, 2017 12:58 PM

Tommaso Dorigo - Scientificblogging

Physics World's Review Of "Anomaly!"
A new review of my book, "Anomaly! Collider Physics and the Quest for New Phenomena at Fermilab", has appeared in the June issue of "Physics World". It is authored by Gavin Hesketh, a lecturer at University College London, and you can read it here.

read more

by Tommaso Dorigo at June 09, 2017 12:18 PM

June 08, 2017

Clifford V. Johnson - Asymptotia

Arundhati Roy Interviews

Listening to interviews with Arundhati Roy always fills me with joy, admiration, hope, and renewed love of the great use of language in speech and writing. (If you are interested, I refer to interviews recently on Front Row, but even better with Phillip Dodd on Free Thinking. BBC Radio 4 … Click to continue reading this post

The post Arundhati Roy Interviews appeared first on Asymptotia.

by Clifford at June 08, 2017 08:06 PM

Clifford V. Johnson - Asymptotia

American Cinématheque Event

Tonight at the Aero Theatre in Santa Monica there's a special screening of the last two episodes of the current season of the National Geographic drama Genius, about the life and work of Albert Einstein. After the screening there'll be a panel discussion and Q&A with the show runner Ken Biller, the actor T.R. Knight, and me, in my capacity as the science advisor for the series (as I've discussed earlier here). The details are here, and admission is apparently free. It will be moderated by Corey Powell. (Image is from National Geographic publicity.)

Also, apparently if you arrive early enough you'll get a free Einstein mask. So there's that.

-cvj Click to continue reading this post

The post American Cinématheque Event appeared first on Asymptotia.

by Clifford at June 08, 2017 03:32 PM

Symmetrybreaking - Fermilab/SLAC

Another year wiser

In honor of Fermilab’s upcoming 50th birthday, Symmetry presents physics birthday cards.

Header: Another year wiser

Some say there are five fundamental interactions: gravitational, electromagnetic, strong, weak and the exchange of birthday greetings on Facebook. But even if you prefer paper to pixels, Symmetry is here to help you celebrate another year. Try these five physics birthday cards, available as both gifs and printable PDFs.


Like two beams of particles in the Large Hadron Collider, your lives intersected. Tell a friend you’re grateful:

Have a smashing birthday!
Artwork by Corinne Mucha

Like a neutrino, they may change over time, but you still appreciate their friendship:

You're basically unstoppable. Happy Birthday!
Artwork by Corinne Mucha

Whether it's dark energy or another force that pushes them forward, it’s an honor to see them grow: ​

You expand my horizons. Happy Birthday!
Artwork by Corinne Mucha

Let them know that, like dark matter, good friends can be hard to find:​

I'm glad you're part of the observable universe. Happy Birthday!
Artwork by Corinne Mucha

And you’re so glad that, like a long-sought gravitational wave or a Higgs boson, they finally appeared:​

I'm glad I discovered you. Happy Birthday!
Artwork by Corinne Mucha

Can’t wait to send your first card? We happen to know of a laboratory with a big day coming up on June 15.

Fermilab
PO Box 500, MS 206
Batavia, IL 60510-5011

(Or reach them on Facebook.)

Print setting recommendations:

Paper Size: Letter
Scale: 100 percent

How to fold your card:

Fold your 8.5 x 11 inch paper in half on the horizontal axis, then fold in half again on the vertical axis. Voilà!

Inline 6: Another year wiser
Artwork by Sandbox Studio, Chicago

by Kathryn Jepsen at June 08, 2017 02:00 PM

John Baez - Azimuth

The Mathematics of Open Reaction Networks

Next week, Blake Pollard and I will talk about our work on reaction networks. We’ll do this at Dynamics, Thermodynamics and Information Processing in Chemical Networks, a workshop at the University of Luxembourg organized by Massimiliano Esposito and Matteo Polettini. We’ll do it on Tuesday, 13 June 2017, from 11:00 to 13:00, in room BSC 3.03 of the Bâtiment des Sciences. If you’re around, please stop by and say hi!

Here are the slides for my talk:

The mathematics of open reaction networks.

Abstract. To describe systems composed of interacting parts, scientists and engineers draw diagrams of networks: flow charts, electrical circuit diagrams, signal-flow graphs, Feynman diagrams and the like. In principle all these different diagrams fit into a common framework: the mathematics of monoidal categories. This has been known for some time. However, the details are more challenging, and ultimately more rewarding, than this basic insight. Here we explain how various applications of reaction networks and Petri nets fit into this framework.

If you see typos or other problems please let me know now!

I hope to blog a bit about the workshop… it promises to be very interesting.


by John Baez at June 08, 2017 10:34 AM

June 06, 2017

Tommaso Dorigo - Scientificblogging

A Giant Fireball Thrown At Venice
In the evening of May 30 a giant fireball lit up the skies south of Venice, Italy. The object, which was traveling very slowly along a south-north trajectory, was captured by three video stations in the area and was also observed by countless bystanders and recorded in pictures. The video data allowed a precise measurement of the trajectory, which made it clear that the rock was headed straight toward the Venice metropolitan area, and that it would have landed there if it had not disintegrated in flight.

read more

by Tommaso Dorigo at June 06, 2017 07:26 PM

Tommaso Dorigo - Scientificblogging

Physics-Inspired Artwork In Venice: The Works By The Foscarini School Students

This article continues the series of postings in this blog on the results of artistic work by high-school students of three schools in Venice (out of five who took part initially) that participated in a contest and exposition connected to "Art and Science across Italy", an initiative of the network CREATIONS, funded by the Horizon 2020 programme of the EU.

read more

by Tommaso Dorigo at June 06, 2017 02:08 PM

Symmetrybreaking - Fermilab/SLAC

A tale of three cities

An enormous neutrino detector named ICARUS unites physics labs in Italy, Switzerland and the US.

Scientists work inside the empty ICARUS detector

Born in Italy, revitalized at CERN and bound for the US, the ICARUS detector is emblematic of modern particle physics experiments: international, collaborative and really, really big.

The ICARUS T600 (if you’re inclined to use the full name) was a pioneer in particle physics technology and is still the largest detector of its kind. When operational, the detector is filled with 760 tons of liquid argon, the same element that, in gas form, makes up about 1 percent of our atmosphere. Since its creation, the ICARUS detector has become a model for modern experiments in the worldwide quest to better understand hard-to-catch particles called neutrinos.

Neutrinos are incredibly small, neutral and rarely interact with other particles, making them difficult to study. Even now, 60 years after their discovery, neutrinos continue to surprise and confound scientists. That’s why this detector with a special talent for neutrino-hunting is undertaking a long journey across the Atlantic to a new home in the United States.

Breaking boundaries at INFN: L’Aquila, Italy

ICARUS got its start in Italy. A groundbreaking large-scale detector, it was the prototype of a sci-fi-sounding instrument called a liquid argon time projection chamber. It functions like four giant cameras, each taking separate 3D images of the signals from neutrinos interacting inside. The active section of the detector is about twice the height of a refrigerator, a couple of meters wider than that and about the length of a bowling lane.

The concept of a liquid argon time projection chamber was proposed in 1977 by physicist Carlo Rubbia, who would later win the Nobel Prize for the discovery of the massive, short-lived subatomic W and Z particles, the carriers of the so-called electroweak force. ICARUS came to life in 2010 at the Gran Sasso National Laboratory, run by Italy’s National Institute for Nuclear Physics (INFN) after decades of development to advance technology and construct the experiment.

At the heart of Gran Sasso Mountain, shielded from cosmic rays raining down from space beneath 1400 meters of rock, it gathered thousands of neutrino interactions during its lifetime. The detector measured neutrinos that traveled 450 miles (730 kilometers) from CERN, but it also saw neutrinos born through natural processes in our sun and our atmosphere. Thus its name: Imaging Cosmic and Rare Underground Signals.

The ICARUS collaboration studied various properties of neutrinos, including a surprising phenomenon called neutrino oscillation. Neutrinos come in three varieties, or flavors, and have the uncommon ability to change from one type to another as they travel. But the proof of technology was just as important as the knowledge the experiment gained about neutrinos. ICARUS showed that liquid argon technology was an efficient, reliable and precise way to study the elusive particles.

“Following its initial conception, the experimental development from a table-top device to the huge ICARUS detector has required a number of successive steps in an experimental journey that has lasted almost 20 years,” says Carlo Rubbia, spokesperson of the ICARUS collaboration. “The liquid argon, although initially coming from air, must reduce impurities to a few parts per trillion, a tiny amount in volume and free electron lifetimes of 20 milliseconds. Many truly remarkable collaborators have participated in Italy in the creation of such a novel technology.”

CERN shut down its neutrino beam in 2012, but ICARUS had more to offer. Scientists decided to move the detector to the US Department of Energy’s Fermi National Accelerator Laboratory, to make use of one of its intense neutrino beams.

To make the transition, ICARUS needed an upgrade. Workers maneuvered ICARUS out of the crowded Gran Sasso lab, packed it in two modules (drained of liquid argon) onto special transporters, and wound their way through the Alps to just the place to get an upgrade, the European particle physics laboratory CERN.

A rebirth at CERN: Geneva, Switzerland

After traversing the Mont Blanc tunnel and winding through small French villages toward Geneva, the two large ICARUS modules arrived at CERN in December 2014. After several years of operation at Gran Sasso, the detector was ready for a reboot. One of the main tasks was updating all the electronics and the read-out system.

“The detector itself is very modern and sophisticated, but the supporting technology has evolved over the last 20 years,” says Andrea Zani, a CERN researcher working on the ICARUS experiment. “For example, the original cables are not produced anymore, and the new data read-out system will be higher-performing, exploiting newer components that are far more compact.”

Zani and his colleagues started disassembling parts of the detector at Gran Sasso and then continued their work at CERN. They are replacing the old electronics with 50,000 new read-out channels, which streamline the data collection process and will improve the experiment’s performance overall. Other upgrades involved realigning components to improve the detector’s precision.

“The high-voltage cathode plates were slightly deformed in a few places, which was fine when the experiment first started operation,” Zani says, “but now we have the capability to make even more precise measurements. We had to heat and then press the plates until they were almost perfectly flat.”

The team also replaced a few dozen outdated light sensors with 360 new photomultiplier tubes, which are now nested behind the wires lining the inner walls of their detectors.

When neutrinos strike atoms of argon in one of the detectors, they release a flash of light and a cascade of charged particles. As these charged particles pass through the detector they ionize other argon atoms releasing electrons. An electric field across the detector causes these electrons to drift toward a plane of roughly 13,000 wires (52,000 in total, counting all four sections of the detector), which measure the incoming particles and enable scientists to reconstruct finely detailed images.

“In addition to the cascade of ionized particles, neutrinos produce a tiny flash of ultraviolet light when they interact with argon atoms,” Zani says. “We know the velocity of electrons as they travel through the liquid argon, and can calculate a particle’s distance from the wire detectors based on the time it takes for the electrical signal to arrive after this flash.”
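In other words, the reconstruction boils down to multiplying a drift velocity by a time difference. A toy sketch in Python, with an assumed, purely illustrative drift velocity (the real calibration is of course far more involved):

# Toy version of the drift-time reconstruction described above: the distance
# of an ionization track from the wire plane is the electron drift velocity
# times the delay between the prompt scintillation flash and the wire signal.
DRIFT_VELOCITY_MM_PER_US = 1.6   # assumed, illustrative value in mm per microsecond

def drift_distance_mm(t_flash_us, t_signal_us, v=DRIFT_VELOCITY_MM_PER_US):
    return v * (t_signal_us - t_flash_us)

print(drift_distance_mm(0.0, 500.0))   # a signal 500 microseconds after the flash: 800.0 mm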

These precise location measurements help scientists distinguish between interesting neutrino interactions and ordinary cosmic rays. Before their installation, all 360 new photomultiplier tubes had to be dusted with a fine powder that shifts the original UV light into a deep blue glow. Over the course of several months, a dedicated team of physicists and technicians completed the process of dusting, testing and finally installing the new light sensors.

In addition to refurbishing the detector, CERN’s engineering team designed and built two huge coolers that will eventually hold the two large ICARUS modules. These containers work much like a thermos and use a vacuum between their inner and outer walls. A layer of solid foam between them will prevent heat from seeping into the experiment. An international collaboration of scientists and engineers are also developing the supporting infrastructure that will enable ICARUS to integrate into its new home at Fermilab.

The final step was stress-testing the containers and packaging the detector for its long journey across the Atlantic.

“It’s been a lot of work,” Zani says, “and putting this all together has been a close collaboration between many different institutions. But we all have the common goal of preparing this detector for its second life at Fermilab.”

New horizons at Fermilab: Batavia, Illinois

While the ICARUS detector was getting an upgrade at CERN, teams of people at Fermilab were preparing for its arrival.

In July 2015, work began on the building that will house the detector 30 feet underground, precisely in the path of Fermilab’s neutrino beam. To keep the cryogenic vessels cold, a team of workers from CERN and INFN visited Fermilab in May 2017 to help install a steel structure that will hold a hefty amount of insulation.

“We couldn’t do this without our partners around the world, and it’s been very rewarding to see it all come together,” says Peter Wilson, the head of Fermilab’s short-baseline neutrino program. “The steel vessel was designed by CERN and manufactured in Poland. The electronic systems were designed by INFN. We’re working with CERN, INFN and other institutions on cosmic-ray taggers that will go above, around and below the detector.”

When the ICARUS detector arrives, it will spend a couple of months undergoing tests and final preparations before being lowered by crane into the building. Once there, it will take its place as the largest in a suite of three detectors on site at Fermilab with a common purpose: to search for a theorized, but never seen, fourth type of neutrino.

Scientists have observed three types of neutrinos: the muon, the electron and the tau neutrino. But they have also seen hints that those three types might be changing into another type they can’t detect. Two experiments in particular—the Liquid Scintillator Neutrino Detector (LSND) at Los Alamos National Lab and MiniBooNE at Fermilab—saw an excess of charged particles of unexplained origin. One theory is that they were produced by so-called “sterile” neutrinos, which would not interact in the same way as the other three neutrinos.

ICARUS will join the Short-Baseline Near Detector, currently under construction, and MiniBooNE’s successor, MicroBooNE, which has been taking data for nearly two years, on the hunt for sterile neutrinos. All three detectors use the same liquid-argon technology pioneered for ICARUS.

The journey of the ICARUS detector could have a destination beyond its new home at Fermilab. If evidence of a new kind of neutrino were discovered, it could travel all the way to a new understanding of the universe.  

by Lauren Biron, Sarah Charley, Andre Salles at June 06, 2017 01:00 PM

June 02, 2017

ZapperZ - Physics and Physicists

50 Years Of Fermilab
Don Lincoln takes you on a historical tour of Fermilab as it celebrates its 50th Anniversary this year.



Zz.

by ZapperZ (noreply@blogger.com) at June 02, 2017 02:59 PM

June 01, 2017

Symmetrybreaking - Fermilab/SLAC

Muon magnet’s moment has arrived

The Muon g-2 experiment has begun its search for phantom particles with its well-traveled electromagnet.

Overhead view of people working inside a room-sized blue ring, the Muon g-2 magnet

What do you get when you revive a beautiful 20-year-old physics machine, carefully transport it 3200 miles over land and sea to its new home, and then use it to probe strange happenings in a magnetic field? Hopefully you get new insights into the elementary particles that make up everything.

The Muon g-2 experiment, located at the US Department of Energy’s Fermi National Accelerator Laboratory, has begun its quest for those insights.

Take a 360-degree tour of the Muon g-2 experimental hall.

On May 31, the 50-foot-wide superconducting electromagnet at the center of the experiment saw its first beam of muon particles from Fermilab’s accelerators, kicking off a three-year effort to measure just what happens to those particles when placed in a stunningly precise magnetic field. The answer could rewrite scientists’ picture of the universe and how it works.

“The Muon g-2 experiment’s first beam truly signals the start of an important new research program at Fermilab, one that uses muon particles to look for rare and fascinating anomalies in nature,” says Fermilab Director Nigel Lockyer. “After years of preparation, I’m excited to see this experiment begin its search in earnest.”

Getting to this point was a long road for Muon g-2, both figuratively and literally. The first generation of this experiment took place at Brookhaven National Laboratory in New York State in the late 1990s and early 2000s. The goal of the experiment was to precisely measure one property of the muon—the particles’ precession, or wobble, in a magnetic field. The final results were surprising, hinting at the presence of previously unknown phantom particles or forces affecting the muon’s properties.
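Roughly speaking (and glossing over corrections from the focusing electric fields and the beam motion), the quantity at stake is the anomalous precession frequency
$$\omega_a = \frac{g-2}{2}\,\frac{eB}{m_\mu},$$
the difference between how fast the muon’s spin turns and how fast the muon itself circulates in the storage-ring field $B$; measuring $\omega_a$ and $B$ precisely pins down $g-2$.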

The new experiment at Fermilab will make use of the laboratory’s intense beam of muons to definitively answer the questions the Brookhaven experiment raised. And since it would have cost 10 times more to build a completely new machine at Brookhaven than to move the magnet to Fermilab, the Muon g-2 team transported that large, fragile superconducting magnet in one piece from Long Island to the suburbs of Chicago in the summer of 2013.

The magnet took a barge south around Florida, up the Tennessee-Tombigbee waterway and the Illinois River, and was then driven on a specially designed truck over three nights to Fermilab. And thanks to a GPS-powered map online, it collected thousands of fans over its journey, making it one of the most well-known electromagnets in the world.

“Getting the magnet here was only half the battle,” says Chris Polly, project manager of the Muon g-2 experiment. “Since it arrived, the team here at Fermilab has been working around the clock installing detectors, building a control room and, for the past year, adjusting the uniformity of the magnetic field, which must be precisely known to an unprecedented level to obtain any new physics. It’s been a lot of work, but we’re ready now to really get started.”

That work has included the creation of a new beamline to deliver a pure beam of muons to the ring, the installation of a host of instrumentation to measure both the magnetic field and the muons as they circulate within it, and a year-long process of “shimming” the magnet, inserting tiny pieces of metal by hand to shape the magnetic field. The field created by the magnet is now three times more uniform than the one it created at Brookhaven. 

Over the next few weeks the Muon g-2 team will test the equipment installed around the magnet, which will be storing and measuring muons for the first time in 16 years. Later this year, they will start taking science-quality data, and if their results confirm the anomaly first seen at Brookhaven, it will mean that the elegant picture of the universe that scientists have been working on for decades is incomplete, and that new particles or forces may be out there, waiting to be discovered.

“It’s an exciting time for the whole team, and for physics,” says David Hertzog of the University of Washington, co-spokesperson of the Muon g-2 collaboration. “The magnet has been working, and working fantastically well. It won’t be long until we have our first results, and a better view through the window that the Brookhaven experiment opened for us.”

Editor's note: This article is based on a Fermilab press release.

by Andre Salles at June 01, 2017 04:19 PM

Symmetrybreaking - Fermilab/SLAC

At LIGO, three’s a trend

The third detection of gravitational waves from merging black holes provides a new test of the theory of general relativity.

Artist's conception shows two merging black holes similar to those detected by LIGO

For the third time, the LIGO and Virgo collaborations have announced directly detecting the merger of black holes many times the mass of our sun. In the process, they put general relativity to the test.

On January 4, the twin detectors of the Laser Interferometer Gravitational-Wave Observatory stretched and squeezed ever so slightly, breaking the symmetry between the motions of two sets of laser beams. This barely perceptible shiver, lasting a fraction of a second, was the consequence of a catastrophic event: About 3 billion light-years away, a pair of spinning black holes with a combined mass about 49 times that of our sun sank together into a single entity.

The merger produced more power than is radiated as light by the entire contents of the universe at any given time. “These are the most powerful astronomical events witnessed by human beings,” says Caltech scientist Mike Landry, head of the LIGO Hanford Observatory.

When the black holes merged, about two times the mass of the sun was converted into energy and released in the form of ripples in the fabric of existence. These were gravitational waves, predicted by Albert Einstein’s theory of general relativity a century ago and first detected by LIGO in 2015.

“Gravitational waves are distortions in the medium that we live in,” Landry says. “Normally we don’t think of the nothing of space as having any properties at all. It’s counterintuitive to think it could expand or contract or vibrate.”

It was not a given that LIGO would be listening when the signal from the black holes arrived. “The machines don’t run 24-7,” says LIGO research engineer Brian Lantz of Stanford University. The list of distractions that can sabotage the stillness the detectors need includes earthquakes, wind, technical trouble, moving nitrogen tanks, mowing grass, harvesting trees and fires.

When the gravitational waves from the colliding black holes reached Earth in January, the LIGO detectors happened to be coming back online after a holiday break. The system that alerts scientists to possible detections wasn’t even fully back in service yet, but a scientist in Germany was poring over the data anyway.

“He woke us up in the middle of the night,” says MIT scientist David Shoemaker, newly elected spokesperson of the LIGO Scientific Collaboration, a body of more than 1000 scientists who perform LIGO research together with the European-based Virgo Collaboration.

The signal turned out to be worth getting out of bed for. “This clearly establishes a new population of black holes not known before LIGO discovered them,” says LIGO scientist Bangalore Sathyaprakash of Penn State and Cardiff University.

The merging black holes were more than twice as distant as the two pairs that LIGO previously detected, which were located 1.3 and 1.4 billion light-years away. This provided the best test yet of a second prediction of general relativity: that gravitons have no mass.

Gravitons are hypothetical particles that would mediate the force of gravity, just as photons mediate the force of electromagnetism. Photons are quanta of light; gravitons would be quanta of gravitational waves.

General relativity predicts that, like photons, gravitons should have no mass, which means they should travel at the speed of light. However, if gravitons did have mass, they would travel at different speeds, depending on their energy.

As merging black holes spiral closer and closer together, they move at a faster and faster pace. If gravitons had no mass, this change would not faze them; they would uniformly obey the same speed limit as they traveled away from the event. But if gravitons did have mass, some of the gravitons produced would travel faster than others. The gravitational waves that arrived at the LIGO detectors would be distorted.
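Put slightly more quantitatively (a sketch of the standard argument): a graviton of mass $m$ and energy $E$ would travel at
$$\frac{v}{c} = \sqrt{1-\left(\frac{mc^2}{E}\right)^2} \approx 1-\frac{1}{2}\left(\frac{mc^2}{E}\right)^2,$$
so the lower-frequency waves emitted early in the inspiral would lag the higher-frequency waves emitted just before merger, and the waveform arriving after billions of years of travel would be measurably stretched.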

“That would mean general relativity is wrong,” says Stanford University Professor Emeritus Bob Wagoner. “Any one observation can kill a theory.”

LIGO scientists’ observations matched the first scenario, putting a new upper limit on the mass of the graviton—and letting general relativity live another day. “I wouldn’t bet against it, frankly,” Wagoner says.

Like a pair of circling black holes, research at LIGO seems to be picking up speed. Collaboration members continue to make improvements to their detectors. Soon the complementary Virgo detector is expected to come online in Italy, and in 2024 another LIGO detector is scheduled to start up in India. Scientists hope to eventually see new events as often as once per day, accumulating a pool of data with which to make new discoveries about the goings-on of our universe.

by Kathryn Jepsen at June 01, 2017 03:09 PM

ZapperZ - Physics and Physicists

Planning For A Future Circular Collider
The future of the next circular collider to follow up the LHC is currently on the table. The Future Circular Collider (FCC) is envisioned to be 80-100 km in circumference (as compared to 27 km for the LHC) and reaching energy as high as 100 TeV (as compared to 13 TeV for the LHC).

Now you may think that this is way too early to think about such a thing, especially when the LHC is still in its prime and probably will be operating for a very long time. But planning and building one of these things takes decades. As stated at the end of the article, the LHC itself took about 30 years from its planning stage all the way to its first operation. So you can't simply decide to get one of these built and hope to have it ready in a couple of years. It is the ultimate in long-term planning. No instant gratification here.

In the meantime, the next big collider project in high-energy physics is a linear collider, some form of the International Linear Collider that has been tossed around for many years. China and Japan look to still be the most likely places where this will be built. I do not foresee the US being a leading candidate during the next 4 years for any of these big, international facilities requiring multinational effort.

Zz.

by ZapperZ (noreply@blogger.com) at June 01, 2017 01:30 PM

Andrew Jaffe - Leaves on the Line

Python Bug Hunting

This is a technical, nerdy post, mostly so I can find the information if I need it later, but possibly of interest to others using a Mac with the Python programming language, and also since I am looking for excuses to write more here. (See also updates below.)

It seems that there is a bug in the latest (mid-May 2017) release of Apple’s macOS Sierra 10.12.5 (ok, there are plenty of bugs, as there are in any sufficiently complex piece of software).

It first manifested itself (to me) as an error when I tried to load the jupyter notebook, a web-based graphical front end to Python (and other languages). When the command is run, it opens up a browser window. However, after updating macOS from 10.12.4 to 10.12.5, the browser didn’t open. Instead, I saw an error message:

    0:97: execution error: "http://localhost:8888/tree?token=<removed>" doesn't understand the "open location" message. (-1708)

A little googling found that other people had seen this error, too. I was able to figure out a workaround pretty quickly: this behaviour only happens when I wanted to use the “default” browser, which is set in the “General” tab of the “System Preferences” app on the Mac (I have it set to Apple’s own “Safari” browser, but you can use Firefox or Chrome or something else). Instead, there’s a text file you can edit to explicitly set the browser that you want jupyter to use, located at ~/.jupyter/jupyter_notebook_config.py, by including the line

c.NotebookApp.browser = u'Safari'

(although an unrelated bug in Python means that you can’t currently use “Chrome” in this slot).

But it turns out this isn’t the real problem. I went and looked at the code in jupyter that is run here, and it uses a Python module called webbrowser. Even outside of jupyter, trying to use this module to open the default browser fails, with exactly the same error message (though I’m picking a simpler URL at http://python.org instead of the jupyter-related one above):

>>> import webbrowser
>>> br = webbrowser.get()
>>> br.open("http://python.org")
0:33: execution error: "http://python.org" doesn't understand the "open location" message. (-1708)
False

So I reported this as an error in the Python bug-reporting system, and hoped that someone with more experience would look at it.

But it nagged at me, so I went and looked at the source code for the webbrowser module. There, it turns out that the programmers use a macOS command called “osascript” (which is a command-line interface to Apple’s macOS automation language “AppleScript”) to launch the browser, with a slightly different syntax for the default browser compared to explicitly picking one. Basically, the command is osascript -e 'open location "http://www.python.org/"'. And this fails with exactly the same error message. (The similar code osascript -e 'tell application "Safari" to open location "http://www.python.org/"' which picks a specific browser runs just fine, which is why explicitly setting “Safari” back in the jupyter file works.)
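Incidentally, that observation suggests a workaround from plain Python as well. A minimal sketch, assuming “safari” is among the browser names the webbrowser module registers on macOS (which the working jupyter setting above suggests it is):

>>> import webbrowser
>>> br = webbrowser.get("safari")   # ask for a named browser, not the default
>>> br.open("http://python.org")    # uses the 'tell application "Safari"' form, so it works
True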

But there is another way to run the exact same AppleScript command. Open the Mac app called “Script Editor”, type open location "http://python.org" into the window, and press the “run” button. From the experience with “osascript”, I expected it to fail, but it didn’t: it runs just fine.

So the bug is very specific, and very obscure: it depends on exactly how the offending command is run, so appears to be a proper bug, and not some sort of security patch from Apple (and it certainly doesn’t appear in the 10.12.5 release notes). I have filed a bug report with Apple, but these are not publicly accessible, and are purported to be something of a black hole, with little feedback from the still-secretive Apple development team.

Updates:

  • The bug has been marked as a duplicate, which means I won’t get any more direct information from Apple, but at least they acknowledge that it’s a bug of some sort.
  • It appears to have been corrected in the latest beta of macOS Sierra 10.12.6, and so should be fixed in the next official release.

by Andrew at June 01, 2017 11:15 AM

May 30, 2017

Symmetrybreaking - Fermilab/SLAC

A brief etymology of particle physics

How did the proton, photon and other particles get their names?

Header: A brief etymology of particle physics

Over the years, physicists have given names to the smallest constituents of our universe.

This pantheon of particles has grown alongside progress in physics. Anointing a particle with a name is not just convenient; it marks a leap forward in our understanding of the world around us. 

The etymology of particle physics contains a story that connects these sometimes outlandish names to a lineage of scientific thought and experiment.

So, without further ado, Symmetry presents a detailed guide to the etymology of particles—some we’ve found and others we have yet to discover.

Editor’s note: PIE, referenced throughout, refers to proto-Indo-European, one of the earliest known languages.

Discovered particles

Named by: William Whewell, 1834

Ions are atoms or molecules that are charged. The term “ion” was coined by 19th-century polymath William Whewell, who developed it for his contemporary Michael Faraday (see their correspondence), who made important discoveries in the realm of electromagnetism. “Ion" comes from the neuter present participle of Greek ienai, “go,” to describe the particle’s attraction, or tendency to move toward opposite charges. Ienai originates from the PIE ei, “to go, to walk.”

The suffix “-on” derives from “ion” and appears in the names of many particles.

Named by: Paul Dirac, 1945

Fermions (which include the proton and electron) were named for physicist Enrico Fermi. Fermi developed the first statistical formulas that govern fermions, particles that follow the Pauli exclusion principle, which states that certain particles can’t occupy the same quantum space.

Named by: Christian Møller and Abraham Pais, 1947

Leptons are a class of particles that includes the electron, muon, tau and neutrinos. The name “lepton” was suggested as a counterpart to the nucleon, a name for the particles that make up the atomic nucleus, according to a biography of Abraham Pais.

The first known lepton, the electron, is much lighter than a nucleon. Hence the root word for lepton: the Greek leptos, meaning “small, slight, slender, delicate, subtle,” which originates from PIE lep, meaning “peel” and “small shaving.” This root is also shared by the word “leprosy,” so named because it is a disease that causes scabbing and weakness.

According to a 1920 edition of Chemical Abstracts, chemists had suggested the name “lepton” for all electrons, atoms, ions and molecules, but it did not catch on.

Named by: George Stoney, 1891

Electrons are negatively charged leptons that orbit the nucleus of an atom. Late-19th-century physicist George Stoney came up with the term “electron” to describe what he called in a letter “this most remarkable fundamental unit of electricity.”

The word "electric” was first used to describe materials with an attractive force in the early 17th century. “Electric” itself derives from New Latin electricus, which was used in 1600 to characterize the magnetic attraction of amber when it was rubbed. Electricus was taken from Latin electrum, from Greek elektron, both of which refer specifically to amber.

Named by: Carl Anderson and Seth Neddermeyer, 1938

Muons are members of the lepton family and behave like heavier cousins to electrons.

The muon was originally called a “mesotron,” from the Greek word mesos, meaning “middle,” or “intermediate,” according to a letter published in Nature. That’s because its mass was considered to be in the middle, between those of an electron and a proton.

However, more particles with masses between that of electrons and protons were discovered, and “meson” became a general term to describe them, according to an article in Engineering and Science Monthly. Around 1949 the initial particle was renamed “mu-meson,” corresponding to the Greek letter mu (µ) (see article, subscription required).

Later, scientists discovered differences between the mu-meson and other mesons, which led to the mu-meson being reclassified as a lepton and having its name shortened to just “muon.”

Named by: Martin Perl, 1975

Known also as “the tau particle,” “tau lepton” and even “tauon,” this particle became the third charged lepton—after the electron and muon—when it was discovered in 1975. Because of its third-place finish, it was given the symbol tau (τ) for the Greek triton, meaning “third” (see paper). (Why they didn’t just name it a “triton” remains a mystery.)

Named by: Enrico Fermi, 1933

In 1930, physicist Wolfgang Pauli was studying the problem of energy going missing in a type of particle decay. He proposed that the energy was being carried away by a neutral particle that scientists could not detect. According to an article published by the American Institute of Physics (subscription required), he called this a “neutron,” a combination of the root of the word “neutral”—which derives from Latin neuter meaning “neither gender”—with the suffix “-on.”

However, in 1932, another neutral particle was discovered and also called a “neutron.” This second neutron was heavy and existed in the nucleus. In 1933, physicist Enrico Fermi built the particle Pauli had been describing into his theory of beta decay. To distinguish it from the second neutron, which was more massive, he added to the name the Italian diminutive suffix “-ino.”

Neutrinos come in three flavors that correspond to their charged-lepton cousins: electron, muon and tau.

Named by: Murray Gell-Mann, 1963

Cheese Curds

Quarks are elementary particles that form hadrons such as protons and neutrons, as well as more exotic particles and states of matter like quark-gluon plasma. They were proposed simultaneously by Murray Gell-Mann and George Zweig (who wanted to call them “aces”), and different types of quarks were discovered throughout the rest of the 20th century by multiple different teams of physicists.

Gell-Mann wrote about the name in his popular science book The Quark and the Jaguar:

In 1963, when I assigned the name “quark” to the fundamental constituents of the nucleon, I had the sound first, without the spelling, which could have been “kwork.” Then, in one of my occasional perusals of Finnegans Wake, by James Joyce, I came across the word “quark” in the phrase “Three quarks for Muster Mark”.

Since “quark” (meaning, for one thing, the cry of the gull) was clearly intended to rhyme with “Mark,” as well as “bark” and other such words, I had to find an excuse to pronounce it as “kwork.” But the book represents the dream of a publican named Humphrey Chimpden Earwicker. Words in the text are typically drawn from several sources at once, like the “portmanteau” words in Through the Looking-Glass. From time to time, phrases occur in the book that are partially determined by calls for drinks at the bar.

I argued, therefore, that perhaps one of the multiple sources of the cry “Three quarks for Muster Mark” might be “Three quarts for Mister Mark,” in which case the pronunciation “kwork” would not be totally unjustified. In any case, the number three fitted perfectly the way quarks occur in nature.

Some scholars suspect that the quark in Joyce’s epic derives from the German quark, which is a type of cheese curd. The German quark is likely taken from West Slavic words meaning “to form”—potentially a reference to milk solidifying and becoming curd. Serendipitously, “to form” is also the non-dairy quark’s role as the main constituent of matter.

Physicists have discovered six types of quarks, named “up,” “down,” “strange,” “charm,” “top” and “bottom.”

up and down quarks: Gell-Mann named these quarks in 1964 for their upward and downward isospin, which is a quantum property of particles related to the strong nuclear force.

strange: Unlike up and down quarks, strange quarks were observed before the quark model was developed, as constituents of composite particles called kaons. These particles were deemed "strange" because they had unusually long lifetimes, due to some of their decays occurring through the weak force. Gell-Mann called them “strange” quarks in 1964.

charm: The charm quark was predicted in a paper by two physicists, Sheldon Glashow and James Bjorken, in 1964. As they explained in a New York Times article: “We called our construct the ‘charmed quark,’ for we were fascinated and pleased by the symmetry it brought to the subnuclear world.” “Charm,” meaning “pleasing quality,” is derived from the Latin carmen, “song, verse, enchantment.”

top and bottom: Physicists Makoto Kobayashi and Toshihide Maskawa predicted the existence of the last two quarks in 1973, but they did not assign names to the new particles. Many scientists unofficially called them “truth” and “beauty.”

In a 1975 paper, physicist Haim Harari gave them names that stuck. To preserve the initials “t” and “b” and create a fitting counterpart for up and down quarks, Harari called them “top” and “bottom” quarks.

Named by: Paul Dirac, 1945

Bosons were named for physicist Satyendra Nath Bose. Along with Albert Einstein, Bose developed a theory explaining this type of particle, which had integer spins and therefore did not obey the Pauli exclusion principle. Because bosons don’t obey the exclusion principle, they can essentially exist on top of one another, or in superposition. Bose’s work developing a theory for bosons, a class that includes force-carriers such as photons and gluons, is an integral part of the Standard Model.

Named by: unclear

Photons are sometimes called particles of light. Although the concept of a particle of light (as opposed to a light wave) had been around since Einstein’s seminal 1905 paper on the photoelectric effect, it took more than two decades for a widely accepted name for the phenomenon to emerge, according to a paper by historian of science Helge Kragh. The term “photon” became accepted in 1927 after Arthur Compton won the Nobel Prize for the discovery of Compton scattering, a phenomenon that demonstrated unquestionably that light was quantized.

The modern origins of the idea of light as a particle date back to 1901. Physicist Max Planck wrote about “packets of energy” as quanta, from the Latin quantum, meaning “how much.”

This was adapted by Albert Einstein, who referred to discrete “wave packets” of light as das Lichtquant or “the light quantum” (see paper, in German).

The first known use of the word “photon” was by physicist and psychologist Leonard Troland, who used it in 1916 to describe a unit of illumination for the retina. Photon derives from the Greek phos, “light,” from PIE bha “to shine.”

Five years later, Irish physicist John Joly used the word to describe the “unit of luminous sensation” created by the cerebral cortex in his effort to create a “quantum theory of vision.”

In 1924, a French biochemist used the word, and in 1926, a French physicist picked it up as well. But the word did not catch on among the physics community until a few months later, when American physical chemist Gilbert Lewis (famous for discovering the covalent bond) began using it.

As described in Progress in Optics, Lewis’ concept of a photon was fundamentally different from Einstein’s—for one, Lewis incorrectly posited that the number of photons was a conserved quantity. Still, the term finally stuck, and has been used ever since.

Named by: unclear

The Higgs boson is the particle associated with the field that gives some elementary particles their mass. It is called the “Higgs” in honor of British theorist Peter Higgs, who predicted its existence in 1964.

However, Higgs wasn’t the only theorist to contribute to the theory of the particle. Others credited with its prediction include Robert Brout, Francois Englert, Philip Anderson, Gerald Guralnik, Carl Hagen, Tom Kibble and Gerard ’t Hooft.

The particle has also been called the “Brout-Englert-Higgs” particle, the “Anderson-Higgs” particle, or even the “Englert-Brout-Higgs-Guralnik-Hagen-Kibble” or “ABEGHHK’tH” particle.

According to an article in Nature, this extensive list of names was pared down by theorists such as Benjamin Lee, who referred to it as the “Higgs,” and by Steven Weinberg, who (mistakenly) cited Higgs in a paper (subscription required) as having provided the first theory to explain why some particles have mass.

In an effort to drive popular support for the search for the Higgs boson, physicist Leon Lederman gave it the moniker “The God Particle.” For his part, Higgs the theorist often refers to the “scalar boson” or “so-called Higgs particle.”

Named by: T.D. Lee and C.N. Yang, 1960

Carriers of the weak nuclear force in charged current interactions, W bosons were first predicted and named in a paper (subscription required) in 1960. W bosons likely draw their name from the weak nuclear force, so called because its field strength over a given distance is much weaker than the strong and electromagnetic forces. The word weak comes from Old Norse veikr “weak” with potential origins tracing back to PIE weik, “to bend, wind.”

Named by: Sheldon Glashow, 1961

Desert

Like W bosons, Z bosons are mediators for the weak force. Unlike W bosons, though, Z bosons have no charge, so exchanges of Z bosons are called “neutral current interactions.”

When Sheldon Glashow theorized them in a paper in 1961, he did not provide an explanation. Some theories allege that Z stands for “zero” because of the neutral current’s lack of charge. Zero has its roots in Italian zero, which comes from Medieval Latin zephirum. Italian mathematician Leonardo Fibonacci coined zephirum, meaning “zero,” from Arabic sifr, “nothing.” Sifr is likely a translation of Sanskrit sunya-m, “empty place, desert.”

Named by: Murray Gell-Mann, 1962

Glue Bottle

Gluons are mediators of the strong force, which is what holds the nucleus together. Interactions through the strong force can be thought of as exchanges of gluons.

Gluons were ostensibly named for their glue-like properties and ability to keep the nucleus together (see paper, subscription required). “Glue” derives from Early French glu and has its roots in Latin gluten “to glue,” which is also the origin of gluten, the “nitrogenous part of grain.” However, there are no foods that are gluon-free.

Named by: Lev Okun, 1962

The term “hadron” was coined at the 1962 International Conference on High Energy Physics (see report) to refer to heavier partner particles to leptons. Hadron comes from the Greek hadros, meaning “thick, bulky, massive.” It was later discovered that hadrons were composite particles made up of quarks surrounded by a haze of gluons.

Named by: Abraham Pais, 1953

Baryons are a kind of hadron that is made of three quarks held together by gluons. Protons and neutrons, which make up the nucleus of atoms, are both baryons.

The use of the word “baryon” appeared in 1953, when physicist Abraham Pais proposed it in an article as a name for nucleons and other heavy particles. It draws from barys, the Greek word for “heavy.”

Named by: Ernest Rutherford, 1920

The proton is one of the three constituents of an atom, along with neutrons and electrons.

According to an article published in the American Journal of Physics (subscription required), physicist Ernest Rutherford proposed the name in honor of 19th century scientist William Prout. In 1816 Prout proposed calling the hydrogen atom a “protyle,” from the Greek protos, “first,” and húlē, “material.” Prout believed hydrogen was the constituent atom for all elements.

Prout was later proven wrong, but Rutherford suggested calling the particle he discovered either “proton”—after Prout’s hypothetical particle—or “prouton”—after Prout himself. Rutherford and other scientists eventually settled on proton, whose root was also the Greek protos.

Named by: unclear

Neutrons are particles made of up and down quarks. According to a letter published in Nature, it is unclear whether physicist William Harkins or physicist Ernest Rutherford referred to the electrically neutral nucleon as a “neutron” first. What is clear is that both came up with the same name for the same particle in 1921, likely drawing on the same etymology of the root word “neutral.”

Named by: Homi J. Bhabha, 1939

Mesons are particles made of both a quark and an anti-quark.

Mesons were originally referred to as “heavy electrons,” as their masses were between the electron and the proton, or as “U-particles” for their unknown nature, or as “Yukawa particles” after physicist Hideki Yukawa, who first theorized them in 1935. In the past, the term “meson” was also used inaccurately to refer to bosons.

Carl Anderson and Seth Neddermeyer, co-discoverers of the muon, suggested calling the particle a “mesotron,” derived from the Greek word mesos, meaning “middle,” for their intermediate masses. Physicist Homi J. Bhabha, considered the father of nuclear physics in India, suggested in an article (subscription required) the shorter name “meson” in 1939.

Many mesons, such as kaons and pions, are simply contractions named after the letters used to represent them (K-meson, Pi-meson).

Named by: unclear

Particles of matter have partner particles of antimatter, which share the same mass, but have opposite electrical charge and spin. When a matter-antimatter pair meets, the particles annihilate one another.

In 1928, theorist Paul Dirac theorized in a paper what he called the “anti-electron,” the first hypothetical particle of antimatter. However, when Carl Anderson discovered the particle in 1932, he called it a “positron” because of its positive charge. (According to an article by theoretical physicist Cecilia Jarlskog, an international group of physicists suggested in 1948 that the positron should be called a “positon” and the electron should be renamed a “negaton,” but the effort never quite caught on.)

Around 1937, Dirac’s original “anti-” prefix came back into use to describe particles like the positron (see article, subscription required).

Possibly the first reference to modern antimatter came in 1948 (see article, subscription required). It’s likely that it took so long to come up with a generic term due to the limited number of particles and antiparticles that had been discovered at that time.

The actual first use of the term occurred in 1898 as part of a somewhat whimsical letter published in Nature (subscription required) proposing the existence of matter with “negative gravity.”

The prefix “anti-” originates from Greek anti, meaning “against, opposed to, opposite of, instead.” The word “matter,” meaning “physical substance,” is a 14th-century construction that comes from materie, “subject of thought, speech, or expression,” itself deriving from Latin materia, “substance from which something is made.” This in turn comes from Latin mater, “origin, source, mother.”

Hypothetical particles

Named by: Frank Wilczek, 1978

Axion Laundry Box

Axions are hypothetical particles and candidates for the dark matter that is thought to potentially make up most of the mass in the universe. Frank Wilczek said in a Nobel lecture that he “named them after a laundry detergent, since they clean up a problem.”

Said problem is known as “the Strong CP problem,” which is an unsolved question of why quark interactions and anti-quark interactions seem to follow the same rules.

Named by: Justin Khoury and Amanda Weltman, 2003

Chameleon

The chameleon particle is a hypothetical particle of dark energy.

The word “chameleon” comes from the Greek khamaileon, whose root khamai means “on the ground.” Its other root, leon, means “lion”; thus “ground lion.” The particle, though, takes its name from the defining trait of the lizard: its ability to adapt to its surroundings. In a 2003 paper (subscription required), physicists Justin Khoury and Amanda Weltman proposed and named the particle, the physical characteristics of which would depend on its environment.

Named by: Dmitri Blochinzew and F. M. Gal’perin, 1934

The graviton, an undiscovered particle associated with the force of gravity, is one of the oldest hypothetical particles (see paper, in Russian). It takes its name from the English “gravity,” which itself comes from Old French gravité meaning “seriousness, thoughtfulness.” The Latin root, gravis “heavy,” was repurposed as gravity for scientific use in the 17th century to mean “weight.”

Perhaps the earliest use of the word comes from the 1644 philosophical text Two Treatises: of Bodies and of Man’s Soul. It would be another 40-odd years until Isaac Newton made gravity mathematically rigorous in his Principia.

Named by: Y. Chikashige, Rabindra Mohapatra, and Roberto Peccei, 1980

In particle physics, “lepton number” is the number of leptons in a particle reaction minus the number of antileptons. As far as we know, lepton number must be conserved from the beginning to the end of an interaction.

A majoron is a hypothetical type of boson proposed to solve problems with the conservation of lepton number thought to arise in some high-energy collisions (see paper, subscription required). Majorons were named after Majorana fermions, named in turn after physicist Ettore Majorana, who hypothesized the existence of particles that are their own antiparticles. “Majorana,” a variant of Maiorana, an Italian surname popular in Sicily, owes its roots to the herb marjoram, which is common in that area.

Named by: Gerald Feinberg, 1967

Proposed in a 1967 paper (subscription required) as a name for hypothetical faster-than-light particles, tachyons take their name from the Greek takhys for “swift.”

Named by: Abdus Salam, J. Strathdee, 1974

Supersymmetry is a theory that roughly doubles the number of particles in the Standard Model of particle physics. It states that every particle has a (usually more massive) “super” partner.

Although supersymmetry comes in many forms and flavors and took many years to develop, it owes the name “supersymmetry” to a 1974 paper (subscription required). “Super” comes from “supergauge,” used to describe the high power of the gauge operator, and “symmetry” because the theory is global rather than local (see paper, subscription required).

The nomenclature for supersymmetric particles was put forward in 1982 in a paper by physicists Ian Hinchliffe and Laurence Littenberg.

To identify the supersymmetric partner particle of a boson, add the suffix “-ino.” (For example, the supersymmetric partner of a photon would be called a photino.) And to identify the partner of a fermion, add the prefix “s-.” (For example, the partner of a muon would be a smuon.)

by Daniel Garisto at May 30, 2017 01:00 PM

Tommaso Dorigo - Scientificblogging

Meeting Matts Roos
Today I gave a seminar at the Physics Department of the University of Helsinki, to talk of "Controversial Phenomena in Collider Data and the 5-Sigma Criterion in HEP", invited by Juska Pekkanen and Mikko Voutilanen, two CMS colleagues. 
The seminar is more or less the same I have given several times in the past year around Europe and the US. It contains some statistics, some HEP history, and some material taken from my recent book, "Anomaly!".


by Tommaso Dorigo at May 30, 2017 11:18 AM

May 26, 2017

Symmetrybreaking - Fermilab/SLAC

First results from search for a dark light

The Heavy Photon Search at Jefferson Lab is looking for a hypothetical particle from a hidden “dark sector.”

HPS Silicon Vertex Tracker being assembled

In 2015, a group of researchers installed a particle detector just half of a millimeter away from an extremely powerful electron beam. The detector could either start them on a new search for a hidden world of particles and forces called the “dark sector”—or its sensitive parts could burn up in the beam.

Earlier this month, scientists presented the results from that very first test run at the Heavy Photon Search collaboration meeting at the US Department of Energy’s Thomas Jefferson National Accelerator Facility. To the scientists’ delight, the experiment is working flawlessly.

Dark sector particles could be the long-sought components of dark matter, the mysterious form of matter thought to be five times more abundant in the universe than regular matter. To be specific, HPS is looking for a dark-sector version of the photon, the elementary “particle of light” that carries the fundamental electromagnetic force in the Standard Model of particle physics.

Analogously, the dark photon would be the carrier of a force between dark-sector particles. But unlike the regular photon, the dark photon would have mass. That’s why it’s also called the heavy photon.

To search for dark photons, the HPS experiment uses a very intense, nearly continuous beam of highly energetic electrons from Jefferson Lab’s CEBAF accelerator. When slammed into a tungsten target, the electrons radiate energy that could potentially produce the mystery particles. Dark photons are believed to quickly decay into pairs of electrons and their antiparticles, positrons, which leave tracks in the HPS detector.

“Dark photons would show up as an anomaly in our data—a very narrow bump on a smooth background from other processes that produce electron-positron pairs,” says Omar Moreno from SLAC National Accelerator Laboratory, who led the analysis of the first data and presented the results at the collaboration meeting.
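(Purely to illustrate what “a narrow bump on a smooth background” means, here is a toy sketch with made-up numbers; it is not the HPS analysis, and the chosen mass, yields and background shape are arbitrary assumptions.)

import numpy as np

rng = np.random.default_rng(7)

# Toy e+e- invariant-mass spectrum: a smooth, falling background...
background = rng.exponential(scale=0.05, size=200_000)
# ...plus a hypothetical narrow resonance at an arbitrary mass of 0.09 GeV.
signal = rng.normal(loc=0.090, scale=0.001, size=600)
masses = np.concatenate([background, signal])

counts, edges = np.histogram(masses, bins=400, range=(0.0, 0.2))
centers = 0.5 * (edges[:-1] + edges[1:])

# Crude bump hunt: compare each bin with the average of nearby sideband bins.
sideband = 0.5 * (np.roll(counts, 5) + np.roll(counts, -5))
local_excess = (counts - sideband) / np.sqrt(np.maximum(sideband, 1.0))
print("largest local excess near m = %.3f GeV" % centers[np.argmax(local_excess)])

A real search has to do much more (model the background shape, scan over masses, account for the look-elsewhere effect), but the basic idea is the same: a dark photon would stand out as a localized excess of electron-positron pairs at one mass.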

The challenge is that, due to the large beam energy, the decay products are compressed very narrowly in the beam direction. To catch them, the detector must be very close to the electron beam. But not too close—the smallest beam movements could make the beam swerve into the detector. Even if the beam doesn’t directly hit the HPS apparatus, electrons interacting in the target can scatter into the detector and cause unwanted signals.

The HPS team implemented a number of precautions to make sure their detector could handle the potentially destructive beam conditions. They installed and carefully aligned a system to intercept any large beam motions, made the detector’s support structure movable to bring the detector close to the beam and measure the exact beam position, and installed a feedback system that would shut the beam down if its motions were too large. They also placed their whole setup in vacuum because interactions of the beam with gas molecules would create too much background. Finally, they cooled the detector to negative 30 degrees Fahrenheit to reduce the effects of radiation damage. These measures allowed the team to operate their experiment so close to the beam.

“That’s maybe as close as anyone has ever come to such a particle beam,” says John Jaros, head of the HPS group at SLAC, which built the innermost part of the HPS detector, the Silicon Vertex Tracker. “So, it was fairly exciting when we gradually decreased the distance between the detector and the beam for the first time and saw that everything worked as planned. A large part of that success lies with the beautiful beams Jefferson Lab provided.” 

SLAC’s Mathew Graham, who oversees the HPS analysis group, says, “In addition to figuring out if we can actually do the experiment, the first run also helped us understand the background signals in the experiment and develop the data analysis tools we need for our search for dark photons.”

So far, the team has seen no signs of dark photons. But to be fair, the data they analyzed came from just 1.7 days of accumulated running time. HPS collects data in short spurts when the CLAS experiment, which studies protons and neutrons using the same beam line, is not in use.

A second part of the analysis is still ongoing: The researchers are also closely inspecting the exact location, or vertex, from which an electron-positron pair emerges.

“If a dark photon lives long enough, it might make it out of the tungsten target where it was produced and travel some distance through the detector before it decays into an electron-positron pair,” Moreno says. The detector was specifically designed to observe such a signal.

Jefferson Lab has approved the HPS project for a total of 180 days of experimental time. Slowly but surely, HPS scientists are finding chances to use it.

by Manuel Gnida at May 26, 2017 01:00 PM

Geraint Lewis - Cosmic Horizons

Falling into a black hole: Just what do you see?
Everyone loves black holes. Immense gravity, a one-way space-time membrane, the possibility of links to other universes. All lovely stuff.

A little trawl of the internets reveals an awful lot of web pages discussing black holes, and discussions about spaghettification, firewalls, lost information, and many other things. Actually, a lot of the stuff out there on the web is nonsense, hand-waving, partly informed guesswork. And one of the questions that gets asked is "What would you see looking out into the universe?"

Some (incorrectly) say that you would never cross the event horizon, a significant misunderstanding of the coordinates of relativity. Others (incorrectly) conclude from this that you actually see the entire future history of the universe play out in front of your eyes.

What we have to remember, of course, is that relativity is a mathematical theory, and instead of hand waving, we can use mathematics to work out what we will see. And that's what I did.

I won't go through the details here, but it is based upon correctly calculating redshifts in relativity and conservation laws embodied in Killing vectors. But the result is an equation, an equation that looks like this


Here, r_s is the radius from which you start to fall, r_e is the radius at which the photon was emitted, and r_o is the radius at which you receive the photon. On the left-hand side is the ratio of the frequency of the photon at the time of observation to its frequency at emission. If this is bigger than one, the photon is observed to have more energy than when it was emitted, and the photon is blueshifted. If it is less than one, it has less energy, and the photon is redshifted. Oh, and m is the mass of the black hole.

One can throw this lovely equation into python and plot it up. What do you get?
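Since the equation image hasn't survived the trip into this feed, here is a minimal sketch of the kind of calculation described. It assumes the standard Schwarzschild result for an observer falling from rest at r_s who receives radially ingoing photons emitted by static sources at r_e, so the formula in freq_ratio below is my reconstruction rather than a copy of the post's equation (geometric units, G = c = 1).

import numpy as np
import matplotlib.pyplot as plt

def freq_ratio(r_o, r_e, r_s, m=1.0):
    # Ratio of observed to emitted photon frequency (assumed form): observer falls
    # freely from rest at r_s; photon emitted by a static source at r_e is received,
    # travelling radially inward, when the observer is at r_o.
    E = np.sqrt(1.0 - 2.0 * m / r_s)          # conserved energy per unit mass of the faller
    g_o = 1.0 - 2.0 * m / r_o                 # metric factor at the reception radius
    g_e = 1.0 - 2.0 * m / r_e                 # metric factor at the emission radius
    return np.sqrt(g_e) * (E - np.sqrt(E**2 - g_o)) / g_o

m = 1.0
r_s = 5.0 * m                                 # start the fall from r = 5m
r_o = np.linspace(0.05 * m, r_s, 1000)        # radii passed through on the way down

for r_e in (10.0 * m, 50.0 * m, 1000.0 * m):  # photons emitted from different distances
    plt.plot(r_o / m, freq_ratio(r_o, r_e, r_s, m), label="emitted at r = %g m" % r_e)

plt.axvline(2.0, linestyle="--", color="grey")  # the event horizon at r = 2m
plt.axhline(1.0, linestyle=":", color="grey")   # above this line: blueshift; below: redshift
plt.xlabel("radius of the falling observer, r_o / m")
plt.ylabel("observed frequency / emitted frequency")
plt.legend()
plt.show()

Under these assumptions the ratio stays finite as you cross r = 2m and drops towards zero as you approach the singularity, which is the qualitative behaviour described below.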

So, falling from a radius of 2.1m, we get
And falling from 3m
And from 5m
And 10m
and finally at 50m

In each of these, each line is a photon starting from a different distance.

The key conclusion is that within the event horizon (r=2m) photons are generally seen to be redshifted, irrespective of where you start falling from. In fact, in the last moment before you meet your ultimate end in the central singularity, the energy of the observed photon goes to zero and the outside universe is infinitely redshifted and vanishes from view.

How cool is that?

by Cusp (noreply@blogger.com) at May 26, 2017 08:01 AM

May 25, 2017

The n-Category Cafe

A Type Theory for Synthetic ∞-Categories

One of the observations that launched homotopy type theory is that the rule of identity-elimination in Martin-Löf’s identity types automatically generates the structure of an $\infty$-groupoid. In this way, homotopy type theory can be viewed as a “synthetic theory of $\infty$-groupoids.”

It is natural to ask whether there is a similar directed type theory that describes a “synthetic theory of $(\infty,1)$-categories” (or even higher categories). Interpreting types directly as (higher) categories runs into various problems, such as the fact that not all maps between categories are exponentiable (so that not all $\prod$-types exist), and that there are numerous different kinds of “fibrations” given the various possible functorialities and dimensions of categories appearing as fibers. The 2-dimensional directed type theory of Licata and Harper has semantics in 1-categories, with a syntax that distinguishes between co- and contra-variant dependencies; but since the 1-categorical structure is “put in by hand”, it’s not especially synthetic and doesn’t generalize well to higher categories.

An alternative approach was independently suggested by Mike and by Joyal, motivated by the model of homotopy type theory in the category of Reedy fibrant simplicial spaces, which contains as a full subcategory the $\infty$-cosmos of complete Segal spaces, which we call Rezk spaces. It is not possible to model ordinary homotopy type theory directly in the Rezk model structure, which is not right proper, but we can model it in the Reedy model structure and then identify internally some “types with composition,” which correspond to Segal spaces, and “types with composition and univalence,” which correspond to the Rezk spaces.

Almost five years later, we are finally developing this approach in more detail. In a new paper now available on the arXiv, Mike and I give definitions of Segal and Rezk types motivated by these semantics, and demonstrate that these simple definitions suffice to develop the synthetic theory of $(\infty,1)$-categories. So far this includes functors, natural transformations, co- and contravariant type families with discrete fibers ($\infty$-groupoids), the Yoneda lemma (including a “dependent” Yoneda lemma that looks like “directed identity-elimination”), and the theory of coherent adjunctions.

Cofibrations and extension types

One of the reasons this took so long to happen is that it required a technical innovation to become feasible. To develop the synthetic theory of Segal and Rezk types, we need to detect the semantic structure of the simplicial spaces model internally, and it seems that the best way to do this is to axiomatize the presence of a strict interval $2$ (a totally ordered set with distinct top and bottom elements). This is the geometric theory of which simplicial sets are the classifying topos (and of which simplicial spaces are the classifying $(\infty,1)$-topos). We can then define an arrow in a type $A$ to be a function $2\to A$.

However, often we want to talk about arrows with specified source and target. We can of course define the type $\hom_A(x,y)$ of such arrows to be $\sum_{f:2\to A} (f(0)=x)\times (f(1)=y)$, but since we are in homotopy type theory, the equalities $f(0)=x$ and $f(1)=y$ are data, i.e. homotopical paths, that have to be carried around everywhere. When we start talking about 2-simplices and 3-simplices with specified boundaries as well, the complexity becomes unmanageable.

The innovation that solves this problem is to introduce a notion of cofibration in type theory, with a corresponding type of extensions. If $i:A\to B$ is a cofibration and $X:B\to \mathcal{U}$ is a type family dependent on $B$, while $f:\prod_{a:A} X(i(a))$ is a section of $X$ over $i$, then we introduce an extension type $\langle \prod_{b:B} X(b) \mid^i_f \rangle$ consisting of “those dependent functions $g:\prod_{b:B} X(b)$ such that $g(i(a)) \equiv f(a)$ — note the strict judgmental equality! — for any $a:A$”. This is modeled semantically by a “Leibniz” or “pullback-corner” map. In particular, we can define $\hom_A(x,y) = \langle \prod_{t:2} A \mid^{0,1}_{[x,y]} \rangle$, the type of functions $f:2\to A$ such that $f(0)\equiv x$ and $f(1)\equiv y$ strictly, and so on for higher simplices.

General extension types along cofibrations were first considered by Mike and Peter Lumsdaine for a different purpose. In addition to the pullback-corner semantics, they are inspired by the path-types of cubical type theory, which replace the inductively specified identity types of ordinary homotopy type theory with a similar sort of restricted function-type out of the cubical interval. Our paper introduces a general notion of “type theory with shapes” and extension types that includes the basic setup of cubical type theory as well as our simplicial type theory, along with potential generalizations to Joyal’s “disks” for a synthetic theory of $(\infty,n)$-categories.

Simplices in the theory of a strict interval

In simplicial type theory, the cofibrations are the “inclusions of shapes” generated by the coherent theory of a strict interval, which is axiomatized by the interval $2$, top and bottom elements $0,1 : 2$, and an inequality relation $\le$ satisfying the strict interval axioms.

Simplices can then be defined as

$$\Delta^n := \{ \langle t_1,\ldots, t_n\rangle \mid t_n \leq t_{n-1} \leq \cdots \leq t_2 \leq t_1 \}$$

Note that the 1-simplex $\Delta^1$ agrees with the interval $2$.
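For concreteness (this unfolding is not in the paper; it just instantiates the definition above for small $n$), the next two cases read

$$\Delta^2 = \{ \langle t_1,t_2 \rangle \mid t_2 \leq t_1 \}, \qquad \Delta^3 = \{ \langle t_1,t_2,t_3 \rangle \mid t_3 \leq t_2 \leq t_1 \},$$

so $\Delta^2$ is the triangle of ordered pairs, which is exactly the shape mapped into a type to witness composition below.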

Boundaries, e.g. of the 2-simplex, can be defined similarly
$$\partial\Delta^2 := \{ \langle t_1,t_2 \rangle : 2 \times 2 \mid (0 \equiv t_2 \leq t_1) \vee (t_2 \equiv t_1) \vee (t_2 \leq t_1 \equiv 1)\}$$
making the inclusion of the boundary of a 2-simplex into a cofibration.

Segal types

For any type $A$ with terms $x,y : A$ define

$$\hom_A(x,y) := \langle 2 \to A \mid^{\partial\Delta^1}_{[x,y]} \rangle$$

That is, a term $f : \hom_A(x,y)$, which we call an arrow from $x$ to $y$ in $A$, is a function $f: 2 \to A$ so that $f(0) \equiv x$ and $f(1) \equiv y$. For $f : \hom_A(x,y)$, $g : \hom_A(y,z)$, and $h : \hom_A(x,z)$, a similar extension type

$$\hom_A(x,y,z,f,g,h) := \langle \Delta^2 \to A \mid^{\partial\Delta^2}_{[x,y,z,f,g,h]} \rangle$$

has terms that we interpret as witnesses that $h$ is the composite of $f$ and $g$. We define a Segal type to be a type in which any composable pair of arrows admits a unique (composite, witness) pair. In homotopy type theory, this may be expressed by saying that $A$ is Segal if and only if for all $f : \hom_A(x,y)$ and $g : \hom_A(y,z)$ the type

$$\sum_{h : \hom_A(x,z)} \hom_A(x,y,z,f,g,h)$$

is contractible. A contractible type is in particular inhabited, and an inhabitant in this case defines a term $g \circ f : \hom_A(x,z)$ that we refer to as the composite of $f$ and $g$, together with a 2-simplex witness $comp(g,f) : \hom_A(x,y,z,f,g,g \circ f)$.

Somewhat surprisingly, this single contractibility condition characterizing Segal types in fact ensures coherent categorical structure at all dimensions. The reason is that if $A$ is Segal, then the type $X \to A$ is also Segal for any type or shape $X$. For instance, applying this result in the case $X=2$ allows us to prove that the composition operation in any Segal type is associative. In an appendix we prove a conjecture of Joyal that in the simplicial spaces model this condition really does characterize exactly the Segal spaces, as usually defined.

Discrete types

An example of a Segal type is a discrete type, which is one for which the map

$$idtoarr : \prod_{x,y: A} (x=_A y) \to \hom_A(x,y)$$

defined by identity elimination by sending the reflexivity term to the identity arrow, is an equivalence. In a discrete type, the $\infty$-groupoid structure encoded by the identity types is equivalent to the $(\infty,1)$-category structure encoded by the hom types. More precisely, a type $A$ is discrete if and only if it is Segal, as well as Rezk-complete (in the sense to be defined later on), and moreover “every arrow is an isomorphism”.

The dependent Yoneda lemma

If $A$ and $B$ are Segal types, then any function $f:A\to B$ is automatically a “functor”, since by composition it preserves 2-simplices and hence witnesses of composition. However, not every type family $C:A\to \mathcal{U}$ is necessarily functorial; in particular, the universe $\mathcal{U}$ is not Segal — its hom-types $\hom_{\mathcal{U}}(X,Y)$ consist intuitively of “spans and higher spans”. We say that $C:A\to \mathcal{U}$ is covariant if for any $f:\hom_A(x,y)$ and $u:C(x)$, the type

$$\sum_{v:C(y)} \langle \prod_{t:2} C(f(t)) \mid^{\partial\Delta^1}_{[u,v]} \rangle$$

of “liftings of $f$ starting at $u$” is contractible. An inhabitant of this type consists of a point $f_\ast(u):C(y)$, which we call the (covariant) transport of $u$ along $f$, along with a witness $trans(f,u)$. As with Segal types, this single contractibility condition suffices to ensure that this action is coherently functorial. It also ensures that the fibers $C(x)$ are discrete, and that the total space $\sum_{x:A} C(x)$ is Segal.

In particular, for any Segal type $A$ and any $a:A$, the hom-functor $\hom_A(a,-) : A \to \mathcal{U}$ is covariant. The Yoneda lemma says that for any covariant $C:A\to \mathcal{U}$, evaluation at $(a,id_a)$ defines an equivalence

$$\Big(\prod_{x:A} \hom_A(a,x) \to C(x)\Big) \simeq C(a)$$

The usual proof of the Yoneda lemma applies, except that it’s simpler since we don’t need to check naturality or functoriality; in the “synthetic” world all of that comes for free.

More generally, we have a dependent Yoneda lemma, which says that for any covariant $C : \Big(\sum_{x:A} \hom_A(a,x)\Big) \to \mathcal{U}$, we have a similar equivalence

$$\Big(\prod_{x:A} \prod_{f:\hom_A(a,x)} C(x,f)\Big) \simeq C(a,id_a).$$

This should be compared with the universal property of identity-elimination (path induction) in ordinary homotopy type theory, which says that for any type family $C : \Big(\sum_{x:A} (a=x)\Big) \to \mathcal{U}$, evaluation at $(a,refl_a)$ defines an equivalence

$$\Big(\prod_{x:A} \prod_{f:a=x} C(x,f)\Big) \simeq C(a,refl_a).$$

In other words, the dependent Yoneda lemma really is a “directed” generalization of path induction.

Rezk types

When is an arrow $f : \hom_A(x,y)$ in a Segal type an isomorphism? Classically, $f$ is an isomorphism just when it has a two-sided inverse, but in homotopy type theory more care is needed, for the same reason that we have to be careful when defining what it means for a function to be an equivalence: we want the notion of “being an isomorphism” to be a mere proposition. We could use analogues of any of the equivalent notions of equivalence in Chapter 4 of the HoTT Book, but the simplest is the following:

$$isiso(f) := \left(\sum_{g : \hom_A(y,x)} g \circ f = id_x\right) \times \left(\sum_{h : \hom_A(y,x)} f \circ h = id_y \right)$$

An element of this type consists of a left inverse and a right inverse together with witnesses that the respective composites with $f$ define identities. It is easy to prove that $g = h$, so that $f$ is an isomorphism if and only if it admits a two-sided inverse, but the point is that any pair of terms in the type $isiso(f)$ are equal (i.e., $isiso(f)$ is a mere proposition), which would not be the case for the more naive definition.

The type of isomorphisms from $x$ to $y$ in $A$ is then defined to be

$$(x \cong_A y) := \sum_{f : \hom_A(x,y)} isiso(f).$$

Identity arrows are in particular isomorphisms, so by identity-elimination there is a map

$$\prod_{x,y: A} (x =_A y) \to (x \cong_A y)$$

and we say that a Segal type $A$ is Rezk complete if this map is an equivalence, in which case $A$ is a Rezk type.

Coherent adjunctions

Similarly, it is somewhat delicate to define homotopy-correct types of adjunction data that are contractible when they are inhabited. In the final section of our paper, we compare transposing adjunctions, by which we mean functors $f : A \to B$ and $u : B \to A$ (i.e. functions between Segal types) together with a fiberwise equivalence

$$\prod_{a :A,\, b: B} \hom_A(a,u b) \simeq \hom_B(f a,b)$$

with various notions of diagrammatic adjunctions, specified in terms of units and counits and higher coherence data.

The simplest of these, which we refer to as a quasi-diagrammatic adjunction, is specified by a pair of functors as above, natural transformations $\eta: Id_A \to u f$ and $\epsilon : f u \to Id_B$ (a “natural transformation” is just an arrow in a function-type between Segal types), and witnesses $\alpha$ and $\beta$ to both of the triangle identities. The incoherence of this type of data has been observed in bicategory theory (it is not cofibrant as a 2-category) and in $(\infty,1)$-category theory (as a subcomputad of the free homotopy coherent adjunction it is not parental). One homotopically correct type of adjunction data is a half-adjoint diagrammatic adjunction, which has additionally a witness that $f \alpha : \epsilon \circ f u \epsilon \circ f \eta u \to \epsilon$ and $\beta u : \epsilon \circ \epsilon f u \circ f \eta u$ commute with the naturality isomorphism for $\epsilon$.

We prove that given Segal types $A$ and $B$ and functors $f : A \to B$ and $u : B \to A$, the type of half-adjoint diagrammatic adjunctions between them is equivalent to the type of transposing adjunctions. More precisely, if in the notion of transposing adjunction we interpret “equivalence” as a “half-adjoint equivalence”, i.e. a pair of maps $\phi$ and $\psi$ with homotopies $\phi \psi = 1$ and $\psi \phi = 1$ and a witness to one of the triangle identities for an adjoint equivalence (this is another of the coherent notions of equivalence from the HoTT Book), then these data correspond exactly under the Yoneda lemma to those of a half-adjoint diagrammatic adjunction.

This suggests similar correspondences for other kinds of coherent equivalences. For instance, if we interpret transposing adjunctions using the “bi-invertibility” notion of coherent equivalence (specification of a separate left and right inverse, as we used above to define isomorphisms in a Segal type), we obtain upon Yoneda-fication a new notion of coherent diagrammatic adjunction, consisting of a unit $\eta$ and two counits $\epsilon,\epsilon'$, together with witnesses that $\eta,\epsilon$ satisfy one triangle identity and $\eta,\epsilon'$ satisfy the other triangle identity.

Finally, if the types $A$ and $B$ are not just Segal but Rezk, we can show that adjoints are literally unique, not just “unique up to isomorphism”. That is, given a functor $u:B\to A$ between Rezk types, the “type of left adjoints to $u$” is a mere proposition.

by riehl (eriehl@math.jhu.edu) at May 25, 2017 12:29 AM

May 23, 2017

Andrew Jaffe - Leaves on the Line

Not-quite hacked

This week, the New York Times, The Wall Street Journal and Twitter, along with several other news organizations, have all announced that they were attacked by (most likely) Chinese hackers.

I am not quite happy to join their ranks: for the last few months, the traffic on this blog has been vastly dominated by attempts to get into the various back-end scripts that run this site, either by direct password hacks or just denial-of-service attacks. In fact, I only noticed it because the hackers exceeded my bandwidth allowance by a factor of a few (costing me a few hundred bucks in over-usage charges from my host in the process, unfortunately).

I’ve since attempted to block the attacks by denying access to the IP addresses which have been the most active (mostly from domains that look like 163data.com.cn, for what it’s worth). So, my apologies if any of this results in any problems for anyone else trying to access the blog.

by Andrew at May 23, 2017 11:31 PM

Andrew Jaffe - Leaves on the Line

JSONfeed

More technical stuff, but I’m trying to re-train myself to actually write on this blog, so here goes…

For no good reason other than it was easy, I have added a JSONfeed to this blog. It can be found at http://andrewjaffe.net/blog/feed.json, and accessed from the bottom of the right-hand sidebar if you’re actually reading this at andrewjaffe.net.

What does this mean? JSONfeed is an idea for a sort-of successor to something called RSS, which may stand for really simple syndication, a format for encapsulating the contents of a blog like this one so it can be indexed, consumed, and read in a variety of ways without explicitly going to my web page. RSS was created by developer, writer, and all around web-and-software guru Dave Winer, who also arguably invented — and was certainly part of the creation of — blogs and podcasting. Five or ten years ago, so-called RSS readers were starting to become a common way to consume news online. NetNewsWire was my old favourite on the Mac, although its original versions by Brent Simmons were much better than the current incarnation by a different software company; I now use something called Reeder. But the most famous one was Google Reader, which Google discontinued in 2013, thereby killing off most of the RSS-reader ecosystem.

But RSS is not dead: RSS readers still exist, and it is still used to store and transfer information between web pages. Perhaps most importantly, it is the format behind subscriptions to podcasts, whether you get them through Apple or Android or almost anyone else.

But RSS is kind of clunky, because it’s built on something called XML, an ugly but readable format for structuring information in files (HTML, used for the web, with all of its < and > “tags”, is a close cousin). Nowadays, people use a simpler family of formats called JSON for many of the same purposes as XML, but it is quite a bit easier for humans to read and write, and (not coincidentally) quite a bit easier to create computer programs to read and write.

So, finally, two more web-and-software developers/gurus, Brent Simmons and Manton Reece, realised they could use JSON for the same purposes as RSS. Simmons is behind NetNewsWire and Reece’s most recent project is an “indie microblogging” platform (think Twitter without the giant company behind it), so they both have an interest in these things. And because JSON is so comparatively easy to use, there is already code that I could easily add to this blog so it would have its own JSONfeed. So I did it.

So it’s easy to create a JSONfeed. What there isn’t — so far — are any newsreaders like NetNewsWire or Reeder that can ingest them. (In fact, Maxime Vaillancourt apparently wrote a web-based reader in about an hour, but it may already be overloaded…). Still, looking forward to seeing what happens.
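(For what it’s worth, consuming such a feed really does take only a few lines. Here is a minimal sketch, not part of the original post, using just the Python standard library and the JSON Feed field names “title”, “items”, “url” and “date_published”.)

import json
from urllib.request import urlopen

# Fetch the blog's JSON feed and list its items.
with urlopen("http://andrewjaffe.net/blog/feed.json") as response:
    feed = json.load(response)

print(feed.get("title", "(untitled feed)"))
for item in feed.get("items", []):
    print("-", item.get("date_published", "????"), item.get("title", "(no title)"))
    print("  ", item.get("url", ""))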

by Andrew at May 23, 2017 11:27 PM

Sean Carroll - Preposterous Universe

Congratulations to Grant and Jason!

Advising graduate students as they make the journey from learners to working scientists is one of the great pleasures and privileges of academic life. Last week featured the Ph.D. thesis defenses of not one, but two students I’ve been working with, Grant Remmen (who was co-advised by Cliff Cheung) and Jason Pollack. It will be tough to see them go — both got great postdocs, Grant accepting a Miller Fellowship at Berkeley, and Jason heading to the University of British Columbia — but it’s all part of the cycle of life.

Jason Pollack (L), Grant Remmen (R), and their proud advisor.

Of course we advisors love all of our students precisely equally, but it’s been a special pleasure to have Jason and Grant around for these past five years. They’ve helped me enormously in many ways, as we worked to establish a research program in the foundations of quantum gravity and the emergence of spacetime. And along the way they tallied up truly impressive publication records (GR, JP). I especially enjoy that they didn’t just write papers with me, but also with other faculty, and with fellow students without involving professors at all.

I’m very much looking forward to seeing how Jason and Grant both continue to progress and grow as theoretical physicists. In the meantime, two more champagne bottles get added to my bookshelf, one for each Ph.D. student — Mark, Eugene, Jennifer, Ignacy, Lotty, Heywood, Chien-Yao, Kim, and now Grant and Jason.

Congrats!

by Sean Carroll at May 23, 2017 08:41 PM

Symmetrybreaking - Fermilab/SLAC

LHC swings back into action

Protons are colliding once again in the Large Hadron Collider.

Overhead view of people sitting in front of two rows of computer screens

This morning at CERN, operators nudged two high-energy beams of protons into a collision course inside the world’s largest and most energetic particle accelerator, the Large Hadron Collider. These first stable beams inside the LHC since the extended winter shutdown usher in another season of particle hunting.

The LHC’s 2017 run is scheduled to last until December 10. The improvements made during the winter break will ensure that scientists can continue to search for new physics and study rare subatomic phenomena. The machine exploits Albert Einstein’s principle that energy and matter are equivalent and enables physicists to transform ordinary protons into the rare massive particles that existed when our universe was still in its infancy.

“Every time the protons collide, it’s like panning for gold,” says Richard Ruiz, a theorist at Durham University. “That’s why we need so much data. It’s very rare that the LHC produces something interesting like a Higgs boson, the subatomic equivalent of a huge gold nugget. We need to find lots of these rare particles so that we can measure their properties and be confident in our results.”

During the LHC’s four-month winter shutdown, engineers replaced one of its main dipole magnets and carried out essential upgrades and maintenance work. Meanwhile, the LHC experiments installed new hardware and revamped their detectors. Over the last several weeks, scientists and engineers have been performing the final checks and preparations for the first “stable beams” collisions.

“There’s no switch for the LHC that instantly turns it on,” says Guy Crockford, an LHC operator. “It’s a long process, and even if it’s all working perfectly, we still need to check and calibrate everything. There’s a lot of power stored in the beam and it can easily damage the machine if we’re not careful.”

In preparation for data-taking, the LHC operations team first did a cold checkout of the circuits and systems without beam and then performed a series of dress rehearsals with only a handful of protons racing around the machine.

“We set up the machine with low intensity beams that are safe enough that we could relax the safety interlocks and make all the necessary tweaks and adjustments,” Crockford says. “We then deliberately made the proton beams unstable to check that all the loose particles were caught cleanly. It’s a long and painstaking process, but we need complete confidence in our settings before ramping up the beam intensity to levels that could easily do damage to the machine.”

The LHC started collisions for physics with only three proton bunches per beam. Over the course of the next month, the operations team will gradually increase the number of proton bunches until they have 2760 per beam. The higher proton intensity greatly increases the rate of collisions, enabling the experiments to collect valuable data at a much faster rate.

“We’re always trying to improve the machine and increase the number of collisions we deliver to the experiments,” Crockford says. “It’s a personal challenge to do a little better every year.”

by Sarah Charley at May 23, 2017 05:21 PM

Andrew Jaffe - Leaves on the Line

SDSS 1416+13B

It’s not that often that I can find a reason to write about both astrophysics and music — my obsessions, vocations and avocations — at the same time. But the recent release of Scott Walker’s (certainly weird, possibly wonderful) new record Bish Bosch has given me just such an excuse: Track 4 is a 21-minute opus of sorts, entitled “SDSS1416+13B (Zercon, A Flagpole Sitter)”. The title seems a random collection of letters, numbers and words, but that’s not what it is: SDSS1416+13B is the (very slightly mangled) identification of an object in the Sloan Digital Sky Survey (SDSS) catalog — 1416+13B means that it is located at Right Ascension 14h16m and Declination 13° (actually, its full name is SDSS J141624.08+134826.7 which gives the location more precisely) and “B” denotes that it’s actually the second of two objects (the other one is unsurprisingly called “A”).

In fact it’s a pretty interesting object: it was actually discovered not by SDSS alone, but by cross-matching with another survey, the UK Infrared Deep Sky Survey (UKIDSS), and looking at the images by eye. It turns out that the two components are a binary system made up of two brown dwarfs — objects that aren’t massive enough to burn hydrogen via nuclear fusion, but are more massive than even the heaviest planets, often big enough to form at the centre of their own stellar systems, and heavy enough to have some nuclear reactions in their core. In fact, the UKIDSS survey has been one of the best ways to find such comparatively cool objects; my colleagues Daniel Mortlock and Steve Warren found one of the coolest known brown dwarfs in UKIDSS in 2007, using techniques very similar to those they also used to find the most distant quasar yet known, recounted by Daniel in a guest-post here. Like that object, SDSS1416+13B is one of the coolest such objects ever found.

What does all this have to do with Scott Walker? I have no idea. Since he started singing as a member of the Walker Brothers in the 60s — and even more so since his 70s solo records — Walker has been known for his classical-sounding baritone, though with his mannered, massive vibrato, he always sounds a bit like a rocker’s caricature of a classical singer. I’ve always thought it was more force of personality than actual skill that drew people — especially here in the UK — to him.

His latest, Bish Bosch, the third in a purported trilogy of records he’s made since resurfacing in the mid-1990s, veers between mannered art-songs and rock’n’roll, silences punctuated with electric guitars, fart-sounds and trumpets.

The song “SDSS1416” itself is an (I assume intentionally funny?) screed, alternating sophomoric insults (my favourite is “don’t go to a mind reader, go to a palmist; I know you’ve got a palm”) with recitations of Roman numerals and, finally, the only link to observations of a brown dwarf I can find, “Infrared, infrared/ I could drop/ into the darkness.” Your guess is as good as mine. It’s compelling, but I can’t tell if that’s as an epic or a train wreck.

by Andrew at May 23, 2017 03:22 PM

Andrew Jaffe - Leaves on the Line

Science as metaphor

In further pop-culture crossover news, I was pleased to see this paragraph in John Keane’s review of Alan Ryan’s “On Politics” in this weekend’s Financial Times:

Ryan sees this period [the 1940s] as the point of triumph of liberal democracy against its Fascist and Stalinist opponents. Closer attention shows this decade was instead a moment of what physicists call dark energy: the universe of meaning of democracy underwent a dramatic expansion, in defiance of the cosmic gravity of contemporary events. The ideal of monitory democracy was born.

Not a bad metaphor. Nice to see that the author, a professor of Politics from Sydney, is paying attention to the stuff that really matters.

by Andrew at May 23, 2017 03:21 PM

May 20, 2017

Geraint Lewis - Cosmic Horizons

The Chick Peck Problem
So, my plans for my blog through 2017 have not quite gone to plan, but things have been horrendously busy, and it seems like the rest of the year is likely to continue this way.

But I did get a chance to do some recreational mathematics, spurred on by a story in the news. It's to do with a problem presented at the 2017 Raytheon MATHCOUNTS® National Competition and reported in the New York Times. Here's the question as presented in the press:

"In a barn, 100 chicks sit peacefully in a circle. Suddenly, each chick randomly pecks the chick immediately to its left or right. What is the expected number of unpecked chicks?"

Kudos to 13-year-old Texan Luke Robitialle, who got this right.

With a little thought, you should be able to realise that the answer is 25. For any particular chick, there are four potential outcomes, each with equal probability. Either the chick is
  • pecked from the left
  • pecked from the right
  • pecked from left and right
  • not pecked at all
Only one of these options results in the chick being unpecked, and so the expected number of unpecked chicks in a circle of 100 is one quarter of 100, or 25.

ABC journalist and presenter extraordinaire, Leigh Sales, tweeted 
But it's the kind of maths that makes me ask more questions. 

While 25 is the expected number of unpecked chicks, what is the distribution of unpecked chicks? What I mean by this is that, because the chicks peck left or right at random, there might be 24 unpecked chicks for one group of a hundred chicks, 25 for the next, and 23 for the next. So, the question is, given a large number of 100-chick experiments, what's the distribution of unpecked chicks?

I tried to find an analytic solution, but my brain is old, and so I resorted to good old numerical methods: generate lots of experiments on a computer and see what the outcomes look like. But there is an issue we have to think about first, namely how many different configurations of left and right pecks there can be.

Well, left and right are two options, and for 100 chicks the number of possible left and right configurations is

2^100 ≈ 1.3 × 10^30
That's a lot of possibilities! How are we going to sample these? 

Well, if we treat a 0 as "chick pecks to the left" and a 1 as "chick pecks to the right", then if we choose a random integer between 0 and 2^100 − 1 and represent it as a binary number, that will be a random sampling of the pecking order (pecking order, get it!). As an example, all chicks pecking to the left would be 100 0s in binary, whereas all chicks pecking to the right would be 100 1s in binary.

Let's try a randomly drawn integer in the range. We get (in base 10) 333483444300232384702347234. In binary this is


0000000000010001001111011001110111011011110101101010111100001100011100100110100100000111011111100010
So, the first bunch of chicks peck to the left, and then we have a mix of right and left pecks.
But how many of these chicks are unpecked (remembering the original question)? Well, for any particular chick, it will be unpecked if the chick to its left pecks to the left, and the chick to its right pecks to the right. So, we're looking for sequences of '001' and '011', with the middle digit representing the chick we are interested in.
So, we can put this into a little python code (had to learn it, all the cool kids are using it these days), and this is what I have:
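(The original code block hasn't survived into this feed, so what follows is my reconstruction of the sort of thing described. Rather than drawing one giant integer and reading off its bits, it draws the 100 left/right choices directly, which samples exactly the same configurations; the wrap-around is handled with np.roll so the chicks really do sit in a circle.)

import numpy as np
import matplotlib.pyplot as plt

def unpecked_count(n_chicks, rng):
    # 0 = "chick pecks to the left", 1 = "chick pecks to the right"
    pecks = rng.integers(0, 2, size=n_chicks)
    left_neighbour = np.roll(pecks, 1)    # the chick to the left, wrapping around the circle
    right_neighbour = np.roll(pecks, -1)  # the chick to the right, wrapping around the circle
    # A chick is unpecked when its left neighbour pecks left (0) and its right
    # neighbour pecks right (1) -- the '001'/'011' patterns mentioned above.
    return np.sum((left_neighbour == 0) & (right_neighbour == 1))

rng = np.random.default_rng(42)
n_chicks, n_trials = 100, 100_000
counts = [unpecked_count(n_chicks, rng) for _ in range(n_trials)]

print("mean number of unpecked chicks:", np.mean(counts))   # close to 25 for 100 chicks
plt.hist(counts, bins=np.arange(min(counts), max(counts) + 2) - 0.5)
plt.xlabel("number of unpecked chicks")
plt.ylabel("number of trials")
plt.show()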
There is a little extra in there to account for the fact that the chicks are sitting in a circle, but as you can see, the code is quite compact.
OK. Let's run it for the 100 chicks in the question. What do we get?
Yay! The number of unpecked chicks peaks at 25, but there is a nice distribution (which, I am sure, must have an analytic solution somewhere).
But given the simplicity of the code, I can easily change the number of chicks. What about 10 chicks in circle?
Hmmm. Interesting. What about 1000 chicks?
And 33 chicks?
The most likely number of unpecked chicks is 8, but again, a nice distribution.
Now, you might be sitting there wondering why the heck I am doing this. Well, firstly, because it is fun! And interesting! It's a question, and it is fun to find the answer.
Secondly, it is not obvious what the distribution would be, how complex it would be to derive, or even whether a closed form exists, and so a numerical approach lets us find an answer.
Finally, I can easily generalize this to questions like "what if left pecks are more likely than right pecks by a factor of two; what would the distribution be like?" It would just take a couple of lines of code and I would have an answer.
And if you can't see how such curiosity-led examinations are integral to science, then you don't know what science is.

by Cusp (noreply@blogger.com) at May 20, 2017 03:18 AM
