Particle Physics Planet


June 24, 2018

Peter Coles - In the Dark

A Lamentation of Swans

Walking to my Cardiff residence from the bus stop after travelling from the airport this evening, I saw this collection of swans on the Taff. I don’t know how many there should be to justify invoking the proper collective noun, Lamentation, but they certainly looked nice.

In Maynooth you are more likely to come across a Murder of Crows than a Lamentation of Swans but if you’re interested in other terms of venery see here.

I wonder what the collective noun is for a collection of collective nouns?

by telescoper at June 24, 2018 10:22 PM

Christian P. Robert - xi'an's og

ABC²DE

A recent arXival proposes a new version of ABC based on kernel estimators (but one could argue that all ABC versions are based on kernel estimators, one way or another). In this ABC-CDE version, Izbicki, Lee and Pospisil [from CMU, hence the picture!] argue that past attempts failed to exploit the full advantages of kernel methods, including the 2016 ABCDE method (from Edinburgh) briefly covered on this blog. (As an aside, CDE stands for conditional density estimation.) They also criticise these attempts for selecting summary statistics and hence losing sufficiency, which seems a non-issue to me, as already discussed numerous times on the ‘Og. One point of particular interest in the long list of drawbacks found in the paper is the inability to compare several estimates of the posterior density, since this is not directly ingrained in the Bayesian construct. Unless one moves to higher ground by calling for Bayesian non-parametrics within the ABC algorithm, a perspective which I am not aware has been pursued so far…

The selling point of ABC-CDE is that the true focus is on estimating a conditional density at the observable x⁰ rather than everywhere: simulations from the reference table are rejected if the pseudo-observations are too far from x⁰ (which implies using a relevant distance and/or choosing adequate summary statistics), and a conditional density estimator is then built from this subsample (which makes me wonder at a double use of the data).
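For concreteness, here is a minimal sketch of that selection step in Python, with a made-up toy prior, toy simulator and observed value x0 standing in for the paper's setup: simulate from the prior, keep the draws whose pseudo-data land closest to x⁰, and hand the accepted parameter values to a conditional density estimator.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy prior and toy simulator (placeholders, not the paper's examples)
    def prior(n):
        return rng.normal(0.0, 2.0, size=n)      # theta ~ N(0, 2^2)

    def simulate(theta):
        return rng.normal(theta, 1.0)            # x | theta ~ N(theta, 1)

    x0 = 1.3                                     # "observed" data (made up)
    N = 100_000
    theta = prior(N)
    x = simulate(theta)

    # Reference-table rejection: keep the epsilon-fraction of simulations
    # whose pseudo-observations fall closest to x0
    dist = np.abs(x - x0)
    eps = np.quantile(dist, 0.01)
    accepted = theta[dist <= eps]                # subsample for the density estimator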

The specific density estimation approach adopted for this is called FlexCode and relates to an earlier if recent paper from Izbicki and Lee that I did not read. As in many other density estimation approaches, they use an orthonormal basis (including wavelets) in low dimension to estimate the marginal of the posterior for one or a few components of the parameter θ, exploiting the fact that the posterior marginal is a weighted average of the terms in the basis, where the weights are the posterior expectations of the basis functions themselves. All fine! The next step is to compare [posterior] estimators through an integrated squared error loss that does not integrate the prior or posterior and does not tell much about the quality of the approximation for Bayesian inference, in my opinion. It is furthermore approximated by a doubly integrated [over parameter and pseudo-observation] squared error loss, using the ABC(ε) sample from the prior predictive. And the approximation error only depends on the regularity of the error, that is, the difference between posterior and approximated posterior. Which strikes me as odd, since the Monte Carlo error should take over but does not appear at all. I am thus unclear as to whether or not the convergence results are that relevant. (A difficulty with this paper is its strong dependence on the earlier one, as it keeps referencing one version or another of FlexCode. Without reading the original one, I spotted a mention made of the use of random forests for selecting summary statistics of interest, without detailing the difference with our own ABC random forest papers, for both model selection and estimation. For instance, the remark that “nuisance statistics do not affect the performance of FlexCode-RF much” reproduces what we observed with ABC-RF.)
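And a minimal sketch of the basis-expansion idea just described, reusing the accepted sample from the sketch above: each coefficient is the posterior expectation of a basis function, estimated by its empirical mean over the ABC subsample. The cosine basis and the interval [a, b] are arbitrary illustrative choices, not the FlexCode implementation of Izbicki and Lee.

    # (continues from the previous snippet: numpy as np and the `accepted` array)
    # Estimate the posterior marginal as sum_j beta_j * phi_j(theta), with
    # beta_j approximated by the average of phi_j over the accepted thetas
    a, b = -6.0, 6.0                      # assumed support for theta
    u = (accepted - a) / (b - a)          # rescale accepted sample to [0, 1]

    def phi(j, u):
        # orthonormal cosine basis on [0, 1]
        return np.ones_like(u) if j == 0 else np.sqrt(2.0) * np.cos(np.pi * j * u)

    J = 15
    beta = [phi(j, u).mean() for j in range(J)]

    grid = np.linspace(0.0, 1.0, 400)
    # approximate posterior density of theta at the points a + grid * (b - a)
    post = sum(beta[j] * phi(j, grid) for j in range(J)) / (b - a)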

The long experiment section always relates to the most standard rejection ABC algorithm, without accounting for the many alternatives produced in the literature (like Li and Fearnhead, 2018, which uses Beaumont et al.'s 2002 scheme along with importance sampling improvements, or ours). In the case of real cosmological data, used twice, I am uncertain of the comparison as I presume the truth is unknown. Furthermore, from having worked on similar data a dozen years ago, it is unclear why ABC is necessary in such a context (although I remember us running a test about ABC in the Paris astrophysics institute once).

by xi'an at June 24, 2018 10:18 PM

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

8th Robert Boyle Summer School

This weekend saw the 8th Robert Boyle Summer School, an annual 3-day science festival in Lismore, Co. Waterford in Ireland. It’s one of my favourite conferences – a small number of talks on the history and philosophy of science, aimed at curious academics and the public alike, with lots of time for questions and discussion after each presentation.


The Irish-born scientist and aristocrat Robert Boyle   


Lismore Castle in Co. Waterford, the birthplace of Robert Boyle

Born in Lismore into a wealthy landowning family, Robert Boyle became one of the most important figures in the Scientific Revolution. A contemporary of Isaac Newton and Robert Hooke, he is recognized the world over for his scientific discoveries, his role in the rise of the Royal Society and his influence in promoting the new ‘experimental philosophy’ in science.

This year, the theme of the conference was ‘What do we know – and how do we know it?’. There were many interesting talks, such as Boyle’s Theory of Knowledge by Dr William Eaton, Associate Professor of Early Modern Philosophy at Georgia Southern University; The How, Who & What of Scientific Discovery by Paul Strathern, author of a great many books on scientists and philosophers such as the well-known Philosophers in 90 Minutes book series; Scientific Enquiry and Brain State: Understanding the Nature of Knowledge by Professor William T. O’Connor, Head of Teaching and Research in Physiology at the University of Limerick Graduate Entry Medical School; and The Promise and Peril of Big Data by Timandra Harkness, well-known media presenter, comedian and writer. For physicists, there was a welcome opportunity to hear the well-known American philosopher of physics Robert P. Crease present the talk Science Denial: will any knowledge do? The full programme for the conference can be found here.

All in all, a hugely enjoyable summer school, culminating in a garden party in the grounds of Lismore Castle, Boyle’s ancestral home. My own contribution was to provide the music for the garden party – a flute, violin and cello trio, playing the music of Boyle’s contemporaries, from Johann Sebastian Bach to Turlough O’Carolan. In my view, the latter was a baroque composer of great importance whose music should be much better known outside Ireland.

by cormac at June 24, 2018 08:19 PM

Christian P. Robert - xi'an's og

Warwick summer school on computational statistics [last call]

In case ‘Og’s readers are not aware, tomorrow, Monday 25 June, is the final day for registration in our incoming LMS/CRiSM summer school on computational statistics taking place at the University of Warwick, two weeks from now, 9-13 July, 2018. There is still room available till midday tomorrow. Greenwich Mean Time. And no later!

by xi'an at June 24, 2018 10:00 AM

June 23, 2018

Clifford V. Johnson - Asymptotia

Google Talk!

I think that I forgot to post this link when it came out some time ago. I gave a talk at Google when I passed through London last Spring. There was a great Q & A session too - the Google employees were really interested and asked great questions. I talked in some detail about the book (The Dialogues), why I made it, how I made it, and what I was trying to do with the whole project. For a field that is supposed to be quite innovative (and usually is), I think that, although there are many really great non-fiction science books by Theoretical Physicists, we offer a rather narrow range of books to the general public, and I'm trying to broaden the spectrum with The Dialogues. In the months since the book came out, people have been responding really positively to it, so that's very encouraging (and thank you!). It's notable that it is a wide range of people, from habitual science book readers to people who say they've never picked up a science book before... That's a really great sign!

Here's the talk on YouTube:

Direct link here. Embed below: [...] Click to continue reading this post

The post Google Talk! appeared first on Asymptotia.

by Clifford at June 23, 2018 10:51 PM

Peter Coles - In the Dark

Chaotic Clouds on Jupiter

I’ve been too busy today with Open Day business to do a proper post so I thought I’d just share this marvellous image from NASA’s Juno Mission.

The picture is like an extraordinary work of abstract art, but it’s scientifically fascinating too:

The region seen here is somewhat chaotic and turbulent, given the various swirling cloud formations. In general, the darker cloud material is deeper in Jupiter’s atmosphere, while bright cloud material is high. The bright clouds are most likely ammonia or ammonia and water, mixed with a sprinkling of unknown chemical ingredients.

by telescoper at June 23, 2018 07:48 PM

ZapperZ - Physics and Physicists

Super Kamiokande and Extremely Pure Water
This is a rather nice overview of Super Kamiokande, a neutrino detector in Japan. It has produced numerous ground-breaking discoveries, including the confirmation of neutrino oscillation many years ago. Unfortunately, the article omits an important incident at Super-K several years ago, when there was a massive implosion of the phototubes.

The article contains an interesting piece of information that many people might not know about extremely pure water, the type that is used to fill up the detector tank.

In order for the light from these shockwaves to reach the sensors, the water has to be cleaner than you can possibly imagine. Super-K is constantly filtering and re-purifying it, and even blasts it with UV light to kill off any bacteria.

Which actually makes it pretty creepy.

"Water that's ultra-pure is waiting to dissolve stuff into it," said Dr Uchida. "Pure water is very, very nasty stuff. It has the features of an acid and an alkaline."
.
.
.
Another tale comes from Dr Wascko, who heard that in 2000 when the tank had been fully drained, researchers found the outline of a wrench at the bottom of it. "Apparently somebody had left a wrench there when they filled it in 1995," he said. "When they drained it in 2000 the wrench had dissolved." 

In other words, such pure, deionized water is not something that you want to drink.

And this leads me to comment on this silly commercial for the PUR drinking water filter. It shows an ignorant member of the public complaining about lead in his drinking water, even though he was told that the amount is below the safety level.



Drinking water contains a lot of other dissolved minerals, any one of which, above a certain limit, can be dangerous. Even that PUR commercial can only claim that it REDUCES the amount of lead in the drinking water, not that it removes it completely. It will not be zero. So that guy should continue complaining about lead even with a PUR filter.

If this person in the commercial represents the general public, then the general public needs to be told that (i) you'll never be able to get rid of all contaminants in drinking water completely and (ii) pure water will dissolve your guts! This is why we set safety levels for many things (360 mrem of radiation per year, for example, is the acceptable, normal background radiation dose that we receive).

Zz.

by ZapperZ (noreply@blogger.com) at June 23, 2018 01:30 PM

Lubos Motl - string vacua and pheno

Slow bottom-up HEP research is neither intellectually challenging, nor justified by the null LHC data
Ben Allanach has been a well-known supersymmetry researcher in Cambridge, England, whose name has appeared a dozen times on this blog and who wrote a guest blog on ambulance chasing.



Because of his seemingly bullish personality, I was surprised by an essay he wrote for Aeon.Co a few days ago,
Going nowhere fast: has the quest for top-down unification of physics stalled?
The most nontrivial statement in the essay is
Now I’ve all but dropped it [SUSY at the LHC] as a research topic.
He wants to do things that are more bottom-up, such as bottom mesons (a different bottom; academia is full of bottoms). I find this description bizarre because SUSY at the LHC is a good example of bottom-up physics in my eyes – and the bottom mesons seem really, really boring.




Allanach wrote that other colleagues have left SUSY-like research before him, that everyone has his own calibration for when he should give up, and that Allanach gave up now. One theoretical reason he cites is that SUSY probably doesn't solve the naturalness problem – aside from the absence of superpartners at the LHC, it also seems that SUSY is incapable of solving other hierarchy problems such as the cosmological constant problem. So if SUSY doesn't solve that one, why should it explain the lightness of the Higgs?




So he attributes all the null data – and the disappointment – to the top-down, "reductive" thinking, the thinking whose current flagship is string theory. He wants to pursue the bottom mesons and perhaps a few other "humble" topics like that. I think that I have compressed his essay by several orders of magnitude and nothing substantial is missing.

OK, his attribution is 100% irrational and the rest of his ideas are half-right, half-wrong. Where should I start?

In April 2007, I quantified dozens of (my subjective) probabilities of statements beyond the established level of particle physics. The probabilities go from 0.000001% to 99.9999% – and the items are more likely to be found near 0% or 100% because there are still many things I find "almost certain". But there's one item that was sitting exactly at 50%:
50% - Supersymmetry will be found at the LHC
Many bullish particle physicists were surely boasting a much higher degree of certainty. And I surely wanted the probability to be higher. But that would quantify my wishful thinking. The post above captured what I really believed about the discovery of SUSY at the LHC and that was associated with 50%, a maximum uncertainty.

By the way, with the knowledge of the absence of any SUSY at the LHC so far, and with some ideas about the future of the LHC, I would quantify the probability of a SUSY discovery at the LHC (High-Luminosity LHC is allowed for that discovery) to be 25% now.

String theory in no way implies that SUSY was obliged to be discovered at the LHC. Such a claim about a low-energy experiment doesn't follow from the equations of string theory, or from anything that is "characteristically stringy", i.e. connected (more or less directly) with the conformal field theory of two-dimensional world sheets. Someone might envision a non-stringy argument – a slightly rational one or a mostly irrational one – and attribute it to string theory because it sounds better when your ideas are linked to string theory. But that's deceitful. Various ideas about how naturalness should be applied to effective field theories have nothing to do with string theory per se – on the contrary, string theory is very likely to heavily revolutionize the rules for how naturalness should be applied, and it's already doing so.

So Allanach's statement that the null LHC data mean something bad for string theory and similar top-down thinking etc. is just absolutely wrong.

A correct proposition is Allanach's thesis that, for a person who believes in naturalness and is interested in supersymmetry because, in combination with naturalness, it seems to predict accessible superpartners at colliders, the absence of such superpartners reduces the probability that this package of ideas is correct – and people who have pursued this bunch of ideas are likely to gradually give up at some point.

It's correct but mostly irrelevant for me – the main reason why I am confident that supersymmetry is realized in Nature (at some scale, possibly one that is inaccessible in practice) is that it seems to be a part of the realistic string vacua. This is an actual example of the top-down thinking because I am actually starting near the Planck scale. Allanach has presented no top-down argumentation – all his argumentation is bottom-up. Any reasoning based on the naturalness of parameters in effective field theories is unavoidably bottom-up reasoning.

Mostly wrong is his statement that the null LHC data reduce the probability of supersymmetry. But this statement is justifiable to the extent to which the existence of supersymmetry is tied to naturalness – the extent to which the superpartners are "required" to be light. If you connect SUSY with the ideas implying that the superpartners must be light, its probability goes down. But more general SUSY models either don't assume the lightness at all, or have various additional – never fully explored – tricks that allow the superpartners to be much heavier or less visible, while still addressing naturalness equally satisfactorily. So in this broader realm, the probability of SUSY hasn't dropped (at least not much) even if you incorporate the naturalness thinking.

You know, the SUSY GUT is still just as compatible with the experiments as the Standard Model, up to the GUT scale. The null LHC data say that some parameters in the SUSY GUT have to be fine-tuned more than previously thought – but the Standard Model still has to be fine-tuned even more than that. So as long as you choose any consistent rules for the evaluation of the theories, the ratio of probabilities of a "SUSY framework" over a "non-SUSY framework" has remained the same or slightly increased. The absence of evidence isn't the evidence of absence.

I think he's also presenting pure speculation as a fact when he says that SUSY has nothing to do with the right explanation of the smallness of the cosmological constant. I think it's still reasonably motivated to assume that some argument based on a SUSY starting point (including some SUSY non-renormalization theorems) and small corrections following from SUSY breaking is a promising sketch of an explanation why the cosmological constant is small. We don't know the right explanation with any certainty. So the answer to this question is "we don't know" rather than "SUSY can't do it".

But again, the most far-reaching incorrect idea of Allanach's is his idea that the "surprisingly null LHC data", relatively to an average researcher, should strengthen the bottom-up thinking relatively to the top-down thinking. His conclusion is completely upside down!

The very point of the bottom-up thinking was to expect new physics "really" around the corner – something that I have always criticized (partly because it is always partly driven by the desire to get prizes soon if one is lucky – and that's an ethically problematic driver in science, I think; the impartial passion for the truth should be the motivation). An assumption that was always made by all bottom-up phenomenologists in recent decades was that there can't be any big deserts – wide intervals on the energy log scale where nothing new happens. Well, the null LHC data surely do weaken these theses, don't they? Deserts are possible (yes, that's why I posted the particular image at the top of the blog post, along with a supersymmetric man or superman for short) which also invalidates the claim that by adding small energy gains, you're guaranteed to see new interesting things.

So I think it's obvious that the right way to adjust one's research focus in the light of the null LHC data is to make the research more theoretical, more top-down – and less bound to immediate wishful thinking about the experiment, to be less bottom-up in this sense! SUSY people posting to hep-ph may want to join the Nima Arkani-Hamed-style subfield of amplitudes and amplituhedrons (which still has SUSY almost everywhere because it seems very useful or unavoidable for technical reasons now, SUSY is easier than non-SUSY, for sure) or something else that is posted to hep-th or that is in between hep-ph and hep-th. Allanach's conclusion is precisely wrong.

You know, the bottom-up thinking expects something interesting (although, perhaps, a bit modest) around the corner. That is what I would also call incrementalism. But given this understanding of "incrementalism" (which is basically the same as "bottom-up", indeed), I am shocked by Allanach's statement
This doesn’t mean we need to give up on the unification paradigm. It just means that incrementalism is to be preferred to absolutism
Holy cow. It's exactly the other way around! It's incrementalism that has failed. The addition of new light particles to the Standard Model, to turn it into the MSSM or something else – so that the additions are linked to the ongoing experiment – is both incrementalism and what has failed in the recent decade, because nothing beyond the Higgs was seen.

So a particle physics thinker simply has to look beyond incrementalism. She has to be interested in absolutism at least a little bit, if you wish. She must be ready for big deserts – a somewhat big desert was just seen. And she must "zoom out", if I borrow a verb from the Bitcoin hodling kids who want to train their eyes and other people's eyes to overlook the 70% drop of the Bitcoin price since December ;-). (For the hodlers, the word "she" would be even more comical than for particle physicists!)

But in particle physics, you really need to zoom out because the research on the small interval of energies around the LHC energy scale wasn't fruitful! Allanach also wrote:
But none of our top-down efforts seem to be yielding fruit.
This is complete nonsense – Allanach is writing this nonsense as a layman who has been away for decades, or for his whole previous life so far. The top-down research in string theory has yielded amazing fruits. In the recent 10 years, as well as the recent 20 or 30 years, it has yielded many more fruits, and much more valuable fruits, than the bottom-up research has. Allanach is probably completely unfamiliar with all of this – but this ignorance doesn't change anything about the fact that the quote above places him in the category of crackpots.

Ben, you should learn at least some basics about what has been learned from the top-down approach – about dualities, new transitions, new types of vacua, new realizations of well-known low-energy physical concepts within string theory, integrable structures in QFTs, new auxiliary spaces, the solution to the information loss paradox, links between entanglement and wormholes, and many others. Unlike the papers presenting possible explanations for the \(750\GeV\) diphoton excess, those aren't going away!

There have been various positive and negative expectations about new physics at the LHC. Things would have been more fun if there had been new physics by now. People may feel vindicated or frustrated because their wishes came true or didn't come true. Their love for the field or its subfields has changed and they may adjust their career plans and other things. But in the end, scientists should think rationally and produce justifiable statements about the natural world, including questions that aren't quite settled yet. I think that most of Allanach's thinking is just plain irrational and the conclusions are upside down. And he's still one of the reasonable people.

Also, Allanach seems to be willing to switch to things like "chasing hopes surrounding B-mesons, \(g-2\) anomalies, sterile neutrinos", and so on. Well, it seems rather likely to me that all these emerging anomalies result from errors in the experiments. But even if they're not errors in the experiment, I don't see much value in theorists' preemptive bottom-up thinking about these matters. If the experiments force us to add a new neutrino species, great. But immediately, it will be just a straightforward experimental fact. The theory explaining the data, if such an anomaly (or the other ones) is confirmed, will be a straightforward ugly expansion of the Standard Model that will be almost directly extracted from the reliable experiment.

My point is that the experimenters could almost do it themselves – they're the crucial players in this particular enterprise – and Allanach wants himself and lots of colleagues to be hired as theoretical assistants to these experimenters. But these experimenters simply don't need too many assistants, especially not very expensive ones.

Why should a theorist spend much time doing these things in advance? What is the point of it? If such new and surprising anomalies are found by the experiments, the experimenters represent a big fraction of the big discovery. The only big role for a theorist is to actually find an explanation of why this new addition to the Standard Model is sensible or could have been expected – if the theorist finds some top-down explanation! A theorist may find out that the existence of some new particle species follows from some principle that looks sensible or unifying at the GUT scale or a string scale; that's a top-down contribution. Without such a contribution, there's almost no useful role for a theorist here. A theorist may preemptively analyze the consequences of 10 possible outcomes of a B-meson experiment. But isn't it better to simply wait for the outcome and make a simple analysis of the actual one outcome afterwards? The bottom-up analyses of possible outcomes just aren't too interesting for anybody.

More generally, I would find detailed research on B-mesons and the aforementioned anomalies utterly boring and insufficiently intellectually stimulating. I have always been bored by these papers – equivalent to some homework exercises in a QFT course – and it's close to the truth if I say that I have never read a "paper like that" in its entirety. I think that if most high-energy physicists abandon the big picture and the big ambitions, the field will rightfully cease to attract mankind's best minds and it will be in the process of dying.

If most of the people in the field were looking at some dirty structure of B-mesons, the field would become comparable to climatology or another inferior scientific discipline which is messy, likely to remain imprecise for decades or forever, and connected with no really deep mathematics (because deep mathematics has little to say about messy, complex patterns with huge error margins). B-mesons are bound states similar to atoms or molecules – except that atoms and molecules have far more precisely measurable and predictable spectra. So if I had to do some of these things, I would choose atomic or molecular physics or quantum chemistry instead of B-meson engineering! Like nuclear physics, subnuclear physics really isn't intellectually deeper than the atomic and molecular physics of the 1930s.

Fundamental physics is the emperor of sciences and the ambitious goals are a necessary condition underlying that fact. The experimental data should help the fundamental physicists to adjust their ideas what the ambitious goals should look like – but the experimental data should never be used as evidence against the ambitious goals in general! Experimental data really cannot ever justify the suppression of ambitions such as the search for a theory of everything. Everyone who claims that they can is being demagogic or irrational.

And that's the memo.

by Luboš Motl (noreply@blogger.com) at June 23, 2018 08:27 AM

June 22, 2018

Jester - Resonaances

Both g-2 anomalies
Two months ago an experiment in Berkeley announced a new ultra-precise measurement of the fine structure constant α using interferometry techniques. This wasn't much noticed because the paper is not on arXiv, and moreover this kind of research is filed under metrology, which is easily confused with meteorology. So it's worth commenting on why precision measurements of α could be interesting for particle physics. What the Berkeley group really did was to measure the mass of the cesium-133 atom, achieving a relative accuracy of 4*10^-10, that is 0.4 parts per billion (ppb). With that result in hand, α can be determined after a cavalier rewriting of the high-school formula for the Rydberg constant:
\[ Ry = \frac{\alpha^2 m_e c^2}{2} \quad\Rightarrow\quad \alpha^2 = \frac{2\,Ry}{c^2}\,\frac{m_{Cs}}{m_e}\,\frac{1}{m_{Cs}}. \]
Everybody knows the first 3 digits of the Rydberg constant, Ry≈13.6 eV, but actually it is experimentally known with the fantastic accuracy of 0.006 ppb, and the electron-to-atom mass ratio has also been determined precisely. Thus the measurement of the cesium mass can be translated into a 0.2 ppb measurement of the fine structure constant: 1/α=137.035999046(27).
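As a quick sanity check of this chain of inputs, here is the arithmetic with rounded, approximate values (these are CODATA-style placeholders, not the exact numbers used by the Berkeley group):

    import math

    Ry    = 13.605693 * 1.602176634e-19   # Rydberg energy [J] (approximate)
    c     = 299792458.0                   # speed of light [m/s]
    m_Cs  = 2.2069470e-25                 # cesium-133 atom mass [kg] (the measured input, approximate)
    ratio = 242271.8                      # m_Cs / m_e mass ratio (known independently, approximate)

    # alpha^2 = (2 Ry / c^2) * (m_Cs / m_e) * (1 / m_Cs)
    alpha = math.sqrt(2 * Ry / c**2 * ratio / m_Cs)
    print(1 / alpha)                      # ~137.036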

You may think that this kind of result could appeal only to a Pythonesque chartered accountant. But you would be wrong. First of all, the new result excludes α = 1/137 at 1 million sigma, dealing a mortal blow to the field of epistemological numerology. Perhaps more importantly, the result is relevant for testing the Standard Model. One place where precise knowledge of α is essential is in the calculation of the magnetic moment of the electron. Recall that the g-factor is defined as the proportionality constant between the magnetic moment and the angular momentum. For the electron we have
\[ \frac{g_e}{2} = 1 + a_e = 1 + \frac{\alpha}{2\pi} + \dots \]
Experimentally, ge is one of the most precisely determined quantities in physics, with the most recent measurement quoting ae = 0.00115965218073(28), that is 0.0001 ppb accuracy on ge, or 0.2 ppb accuracy on ae. In the Standard Model, ge is calculable as a function of α and other parameters. In the classical approximation ge=2, while the one-loop correction proportional to the first power of α was already known in prehistoric times thanks to Schwinger. The dots above summarize decades of subsequent calculations, which now include O(α^5) terms, that is 5-loop QED contributions! Thanks to these heroic efforts (depicted in the film For a Few Diagrams More - a sequel to Kurosawa's Seven Samurai), the main theoretical uncertainty for the Standard Model prediction of ge is due to the experimental error on the value of α. The Berkeley measurement allows one to reduce the relative theoretical error on ae down to 0.2 ppb: ae = 0.00115965218161(23), which matches in magnitude the experimental error and improves by a factor of 3 the previous prediction based on the α measurement with rubidium atoms.
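To get a feel for the numbers, here is the leading (Schwinger) term alone against the measured value quoted above; the 2- to 5-loop QED terms, plus small hadronic and electroweak pieces, are what account for the remaining gap:

    import math

    alpha = 1 / 137.035999046        # Berkeley-based value quoted above
    a_e_measured = 0.00115965218073  # measured anomaly quoted above

    a_e_one_loop = alpha / (2 * math.pi)   # Schwinger's alpha/2pi
    print(a_e_one_loop)                    # ~0.00116141, already right to ~0.2%
    print(a_e_one_loop - a_e_measured)     # ~1.8e-6, left for the higher-order terms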

At the spiritual level, the comparison between the theory and experiment provides an impressive validation of quantum field theory techniques up to the 13th significant digit - an unimaginable  theoretical accuracy in other branches of science. More practically, it also provides a powerful test of the Standard Model. New particles coupled to the electron may contribute to the same loop diagrams from which ge is calculated, and could shift the observed value of ae away from the Standard Model predictions. In many models, corrections to the electron and muon magnetic moments are correlated. The latter famously deviates from the Standard Model prediction by 3.5 to 4 sigma, depending on who counts the uncertainties. Actually, if you bother to eye carefully the experimental and theoretical values of ae beyond the 10th significant digit you can see that they are also discrepant, this time at the 2.5 sigma level. So now we have two g-2 anomalies! In a picture, the situation can be summarized as follows:

If you're a member of the Holy Church of Five Sigma you can almost preach an unambiguous discovery of physics beyond the Standard Model. However, for most of us this is not the case yet. First, there is still some debate about the theoretical uncertainties entering the muon g-2 prediction. Second, while it is quite easy to fit each of the two anomalies separately, there seems to be no appealing model to fit both of them at the same time. Take for example the very popular toy model with a new massive spin-1 Z' boson (aka the dark photon) kinetically mixed with the ordinary photon. In this case Z' has, much like the ordinary photon, vector-like and universal couplings to electrons and muons. But this leads to a positive contribution to g-2, and it does not fit the ae measurement well, which favors a new negative contribution. In fact, the ae measurement provides the most stringent constraint in part of the parameter space of the dark photon model. Conversely, a Z' boson with purely axial couplings to matter does not fit the data, as it gives a negative contribution to g-2, thus making the muon g-2 anomaly worse. What might work is a hybrid model with a light Z' boson having lepton-flavor violating interactions: a vector coupling to muons and a somewhat smaller axial coupling to electrons. But constructing a consistent and realistic model along these lines is a challenge because of other experimental constraints (e.g. from the lack of observation of μ→eγ decays). Some food for thought can be found in this paper, but I'm not sure if a sensible model exists at the moment. If you know one you are welcome to drop a comment here or a paper on arXiv.

More excitement on this front is in store. The muon g-2 experiment at Fermilab should soon deliver its first results, which may confirm or disprove the muon anomaly. Further progress with the electron g-2 and fine-structure constant measurements is also expected in the near future. The biggest worry is that, if the accuracy improves by another two orders of magnitude, we will need to calculate six-loop QED corrections...

by Mad Hatter (noreply@blogger.com) at June 22, 2018 11:04 PM

ZapperZ - Physics and Physicists

General Relativity Passes Its First Galactic Test
Ethan Siegel is reporting the latest result of a test of General Relativity at the galactic scale.[1]

This effect of gravitational lensing, which occurs in both strong and weak variants, represents the greatest hope we have of testing General Relativity on scales larger than the Solar System. For the first time, a team of scientists led by Tom Collett performed a precise extragalactic test of General Relativity, and Einstein's theory passed with flying colors.

This new result also puts a strong damper on alternative theories of gravity, such as MOND.

For the first time, we've been able to perform a direct test of General Relativity outside of our Solar System and get solid, informative results. The ratio of the Newtonian potential to the curvature potential, which relativity demands be equal to one but where alternatives differ, confirms what General Relativity predicts. Large deviations from Einstein's gravity, therefore, cannot happen on scales smaller than a few thousand light years, or for masses the scale of an individual galaxy. If you want to explain the accelerated expansion of the Universe, you can't simply say you don't like dark energy and throw Einstein's gravity away. For the first time, if we want to modify Einstein's gravity on galactic-or-larger scales, we have an important constraint to reckon with.

This is definitely a big deal of a result.

Zz.

[1] T.E. Collett et al., Science v.360, p.1342 (2018).

by ZapperZ (noreply@blogger.com) at June 22, 2018 02:11 PM

Peter Coles - In the Dark

Summer Open Day in Maynooth

It seems I have volunteered to represent the Department of Theoretical Physics at Maynooth University’s Open Day tomorrow, so I’ll be giving a talk as well as answering questions, handing out leaflets etc on the Theoretical Physics stall (with the aid of some current students). It’s a bit of a flashback to Sussex days, actually, when I used to have to do this sort of thing quite regularly on Saturdays throughout the year. At least the weather looks like it’s going to be nice, even if the post-solstitial nights are now drawing in.

If you’re planning to come tomorrow, the event starts at 10.30 am and the first talk is at 11.15, but I’m not on until 13.35. Lots of information is available here. Please come and say hello if you’ve read this here blog post!

Anyway, Maynooth University has made a nice little video about the Open Day so I thought I’d share it here, mainly to give readers a look at the lovely campus, which is bathed in sunshine as I write this!

by telescoper at June 22, 2018 09:17 AM

June 21, 2018

Emily Lakdawalla - The Planetary Society Blog

Hayabusa2 update: New views of Ryugu and corkscrew course adjustments
Ryugu has continued to grow in Hayabusa2's forward view, resolving into a diamond-shaped body with visible bumps and craters! They've done hazard searches, optical navigation imaging, and measured the rotation rate at 7.6 hours.

June 21, 2018 05:50 PM

Peter Coles - In the Dark

Music for the Solstice

Well, in case you didn’t realize, the summer solstice (when the Sun reaches its most northerly point in the sky and is directly overhead on the Tropic of Cancer) occurred at 10.07 UT (11.07 British Summer Time, or rather Daylight Saving Time in Ireland) today. I guess that means it’s all downhill from here. Anyway, this gives me some sort of excuse for posting a piece of music I’ve loved ever since I was a young child for its energy and wit. It’s the Overture to A Midsummer Night’s Dream by Felix Mendelssohn, which he started to compose when he was just 16 years old but didn’t complete until later, so it’s his Opus 21. This performance is by the Leipzig Gewandhausorchester conducted by Kurt Masur. Enjoy!

Incidentally, I listened to a very nice performance of Shakespeare’s A Midsummer Night’s Dream on BBC Radio 3 on Sunday evening when I was still in Cardiff. It reminded me of when we performed that play when I was at school, and my Bottom received a warm hand.

by telescoper at June 21, 2018 10:18 AM

June 20, 2018

Emily Lakdawalla - The Planetary Society Blog

New report explores threat from near-Earth asteroids
How dangerous are near-Earth asteroids, and what will we do if we find one headed toward Earth?

June 20, 2018 09:15 PM

Clifford V. Johnson - Asymptotia

News from the Front, XV: Nicely Entangled

This is one of my more technical posts about research activity. It is not written with wide readability in mind, but you may still get a lot out of it since the first part especially talks about about research life.

Some years ago (you'll perhaps recall), I came up with an interesting construction that I called a "Holographic Heat Engine". Basically, it arises as a natural concept when you work in what I call "extended" gravitational thermodynamics, where you allow the spacetime cosmological constant to be dynamical. It is natural to associate the cosmological constant with a dynamical pressure (in the usual way it appears as a pressure in Einstein's equations) and if you work it through it turns out that there's a natural conjugate quantity playing the role of volume, etc. Black hole thermodynamics (that you get when you turn on quantum effects, giving entropy and temperature) then gets enhanced to include pressure and volume, something that was not present for most of the history of the subject. It was all worked out nicely in a paper by Kastor et al. in 2009. So...anyway, once you have black holes in that setup it seemed to me (when I encountered this extended framework in 2014) that it would be wilful neglect to not define heat engines: closed cycles in the p-V plane that take in heat, output heat, and do mechanical work. So I defined them. See three old posts of mine, here, here, and here, and there are others if you search.
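To make the definition concrete, here is a toy rectangular cycle in the p-V plane for one mole of an ordinary monatomic ideal gas (just a placeholder equation of state, not the black hole one): the work is the enclosed area, the heat intake is summed over the two legs where heat flows in, and the resulting efficiency sits below the Carnot bound set by the coldest and hottest corners of the cycle.

    R, Cv, Cp = 8.314, 1.5 * 8.314, 2.5 * 8.314   # one mole of a monatomic ideal gas

    p1, p2 = 1.0e5, 2.0e5      # pressures [Pa]  (arbitrary toy values)
    V1, V2 = 1.0e-3, 2.0e-3    # volumes  [m^3]

    T = lambda p, V: p * V / R                     # ideal-gas temperature at a corner

    W = (p2 - p1) * (V2 - V1)                      # work done = area of the rectangle
    # heat taken in: isochoric heating at V1, then isobaric expansion at p2
    Q_H = Cv * (T(p2, V1) - T(p1, V1)) + Cp * (T(p2, V2) - T(p2, V1))

    eta = W / Q_H                                  # ~0.15 for these numbers
    eta_carnot = 1.0 - T(p1, V1) / T(p2, V2)       # ~0.75, the Carnot bound
    assert eta < eta_carnot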

Well, one of the things that has been a major quest of mine since 2014 is to see if I can make sense of the extended thermodynamics for quantum field theory, and then go further and translate the heat engines and their properties into field theory terms. This seemed possible to me all the way back then since for positive pressure, the cosmological constant is negative, and when you have gravity with negative cosmological constant you've got duality to strongly coupled field theories. So those heat engines must be some kind of special tour in the field theories. The efficiency of an engine must mean something about the tour. Moreover, since the efficiency of the engine is bounded by the Carnot efficiency, it seems we have a naturally defined dimensionless number that has a fundamental bound... Alarm bells ringing! - Opportunity knocking to learn something new and powerful! Maybe even important!

So I chipped away at this for some time, over years, doing various projects that [...] Click to continue reading this post

The post News from the Front, XV: Nicely Entangled appeared first on Asymptotia.

by Clifford at June 20, 2018 05:06 PM

Peter Coles - In the Dark

Big News for Big Data in Cardiff

I know I’m currently in Maynooth, but I am still employed part-time by Cardiff University, and specifically by the Data Innovation Research Institute there. When I started there a couple of years ago, I moved into a big empty office that looked like this:

Over the last two years the DIRI office has gradually filled up. It is now home to an Administrative Officer (Clare), two Research Software Engineers (Ian & Unai), Ben and Owain from Supercomputing Wales, and the newest arrival, a Manager for the Centre for Doctoral Training in Data-Intensive Physics (Rosemary). That doesn’t include myself, the Director of DIRI (Steve Fairhurst), DIRI Board member Bernard Schutz and a number of occasional users of various `hot desks’. And there’s another Research Software Engineer on the way.

Now the latest news is of a huge injection of cash (£3.5M) for a new Data Innovation Accelerator, funded by the Welsh Government and the European Regional Development Fund. The Welsh Government has joined forces with Cardiff University to develop the project, which has the aim of transferring data science and analytics knowledge from Cardiff University to Small to Medium Sized Enterprises (SMEs) in Wales so they can develop and grow their businesses. The funding will enable researchers to work on collaborative projects with companies specialising in things like cyber security, advanced materials, energy and eco-innovation. For more information, see here.

Among other things, the project will involve the recruitment of no fewer than eight data scientists to kick-start it, and it will probably launch in November 2018. With another eight people to be based in the Data Innovation Research Institute by the end of the year, the office promises to be a really crowded place. My departure next month will release one desk space, but it will still be a crush! That’s what you call being a victim of your own success.

Anyway, it’s exciting times for Data Science at Cardiff University and it has been nice to have played a small part in building up the DIRI activity over the last two years. I’m sure it will go on developing and expanding for a very long time indeed.

by telescoper at June 20, 2018 11:25 AM

June 19, 2018

John Baez - Azimuth

The Behavioral Approach to Systems Theory

 

Two more students in the Applied Category Theory 2018 school wrote a blog article about something they read:

• Eliana Lorch and Joshua Tan, The behavioral approach to systems theory, 15 June 2018.

Eliana Lorch is a mathematician based in San Francisco. Joshua Tan is a grad student in computer science at the University of Oxford and one of the organizers of Applied Category Theory 2018.

They wrote a great summary of this paper, which has been an inspiration to me and many others:

• Jan Willems, The behavioral approach to open and interconnected systems, IEEE Control Systems 27 (2007), 46–99.

They also list many papers influenced by it, and raise a couple of interesting problems with Willems’ idea, which can probably be handled by generalizing it.

by John Baez at June 19, 2018 01:00 AM

June 18, 2018

Emily Lakdawalla - The Planetary Society Blog

Rotatin' Ryugu!
Hayabusa2 continues to approach asteroid Ryugu, revealing the 900-meter-wide world in all its glory.

June 18, 2018 06:30 PM

The n-Category Cafe

∞-Atomic Geometric Morphisms

Today’s installment in the ongoing project to sketch the \(\infty\)-elephant: atomic geometric morphisms.

Chapter C3 of Sketches of an Elephant studies various classes of geometric morphisms between toposes. Pretty much all of this chapter has been categorified, except for section C3.5 about atomic geometric morphisms. To briefly summarize the picture:

  • Sections C3.1 (open geometric morphisms) and C3.3 (locally connected geometric morphisms) are steps \(n=-1\) and \(n=0\) on an infinite ladder of locally n-connected geometric morphisms, for \(-1 \le n \le \infty\). A geometric morphism between \((n+1,1)\)-toposes is locally \(n\)-connected if its inverse image functor is locally cartesian closed and has a left adjoint. More generally, a geometric morphism between \((m,1)\)-toposes is locally \(n\)-connected, for \(n \lt m\), if it is “locally” locally \(n\)-connected on \(n\)-truncated maps.

  • Sections C3.2 (proper geometric morphisms) and C3.4 (tidy geometric morphisms) are likewise steps \(n=-1\) and \(n=0\) on an infinite ladder of n-proper geometric morphisms.

  • Section C3.6 (local geometric morphisms) is also step \(n=0\) on an infinite ladder: a geometric morphism between \((n+1,1)\)-toposes is \(n\)-local if its direct image functor has an indexed right adjoint. Cohesive toposes, which have attracted a lot of attention around here, are both locally \(\infty\)-connected and \(\infty\)-local. (Curiously, the \(n=-1\) case of locality doesn’t seem to be mentioned in the 1-Elephant; has anyone seen it before?)

So what about C3.5? An atomic geometric morphism between elementary 1-toposes is usually defined as one whose inverse image functor is logical. This is an intriguing prospect to categorify, because it appears to mix the “elementary” and “Grothendieck” aspects of topos theory: geometric morphisms are arguably the natural morphisms between Grothendieck toposes, while logical functors are more natural for the elementary sort (where “natural” means “preserves all the structure in the definition”). So now that we’re starting to see some progress on elementary higher toposes (my post last year has now been followed by a preprint by Rasekh), we might hope to be able to make some progress on it.

Unfortunately, the definitions of elementary \((\infty,1)\)-topos currently under consideration have a problem when it comes to defining logical functors. A logical functor between 1-toposes can be defined as a cartesian closed functor that preserves the subobject classifier, i.e. \(F(\Omega) \cong \Omega\). The higher analogue of the subobject classifier is an object classifier — but note the switch from definite to indefinite article! For Russellian size reasons, we can’t expect to have one object classifier that classifies all objects, only a tower of “universes” each of which classifies some subcollection of “small” objects.

What does it mean for a functor to “preserve” the tower of object classifiers? If an \((\infty,1)\)-topos came equipped with a specified tower of object classifiers (indexed by \(\mathbb{N}\), say, or maybe by the ordinal numbers), then we could ask a logical functor to preserve them one by one. This would probably be the relevant kind of “logical functor” when discussing categorical semantics of homotopy type theory: since type theory does have a specified tower of universe types \(U_0 : U_1 : U_2 : \cdots\), the initiality conjecture for HoTT should probably say that the syntactic category is an elementary \((\infty,1)\)-topos that’s initial among logical functors of this sort.

However, Grothendieck \((\infty,1)\)-topoi don’t really come equipped with such a tower. And even if they did, preserving it level by level doesn’t seem like the right sort of “logical functor” to use in defining atomic geometric morphisms; there’s no reason to expect such a functor to “preserve size” exactly.

What do we want of a logical functor? Well, glancing through some of the theorems about logical functors in the 1-Elephant, one result that stands out to me is the following: if \(F:\mathbf{S}\to \mathbf{E}\) is a logical functor with a left adjoint \(L\), then \(L\) induces isomorphisms of subobject lattices \(Sub_{\mathbf{E}}(A) \cong Sub_{\mathbf{S}}(L A)\). This is easy to prove using the adjointness \(L\dashv F\) and the fact that \(F\) preserves the subobject classifier:

\[ Sub_{\mathbf{E}}(A) \cong \mathbf{E}(A,\Omega_{\mathbf{E}}) \cong \mathbf{E}(A,F \Omega_{\mathbf{S}}) \cong \mathbf{S}(L A,\Omega_{\mathbf{S}}) \cong Sub_{\mathbf{S}}(L A). \]

What would be the analogue for \((\infty,1)\)-topoi? Well, if we imagine hypothetically that we had a classifier \(U\) for all objects, then the same argument would show that \(L\) induces an equivalence between entire slice categories \(\mathbf{E}/A \simeq \mathbf{S}/L A\). (Actually, I’m glossing over something here: the direct arguments with \(\Omega\) and \(U\) show only an equivalence between sets of subobjects and cores of slice categories. The rest comes from the fact that \(F\) preserves local cartesian closure as well as the (sub)object classifier, so that we can enhance \(\Omega\) to an internal poset and \(U\) to an internal full subcategory and both of these are preserved by \(F\) as well.)
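To make “the same argument” explicit: under the hypothetical assumption that each topos had a single classifier \(U\) of all objects, preserved by \(F\) in the sense that \(F U_{\mathbf{S}} \simeq U_{\mathbf{E}}\), the chain of equivalences would read (directly only at the level of cores, as the parenthetical above notes)

\[ core(\mathbf{E}/A) \simeq \mathbf{E}(A, U_{\mathbf{E}}) \simeq \mathbf{E}(A, F U_{\mathbf{S}}) \simeq \mathbf{S}(L A, U_{\mathbf{S}}) \simeq core(\mathbf{S}/L A). \]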

In fact, the converse is true too: reversing the above argument shows that \(F\) preserves \(\Omega\) if and only if \(L\) induces isomorphisms of subobject lattices, and similarly \(F\) preserves \(U\) if and only if \(L\) induces equivalences of slice categories. The latter condition, however, is something that can be said without reference to the nonexistent \(U\). So if we have a functor \(F:\mathbf{S}\to \mathbf{E}\) between \((\infty,1)\)-toposes that has a left adjoint \(L\), then I think it’s reasonable to define \(F\) to be logical if it is locally cartesian closed and \(L\) induces equivalences \(\mathbf{E}/A \simeq \mathbf{S}/L A\).

Furthermore, a logical functor between 1-toposes has a left adjoint if and only if it has a right adjoint. (This follows from the monadicity of the powerset functor \(P : \mathbf{E}^{op} \to \mathbf{E}\) for 1-toposes, which we don’t have an analogue of (yet) in the \(\infty\)-case.) In particular, if the inverse image functor in a geometric morphism is logical, then it automatically has a left adjoint, so that the above characterization of logical-ness applies. And since a logical functor is locally cartesian closed, this geometric morphism is automatically locally connected as well. This suggests the following:

Definition: A geometric morphism \(p:\mathbf{E}\to \mathbf{S}\) between \((\infty,1)\)-topoi is \(\infty\)-atomic if

  1. It is locally \(\infty\)-connected, i.e. \(p^\ast\) is locally cartesian closed and has a left adjoint \(p_!\), and
  2. \(p_!\) induces equivalences of slice categories \(\mathbf{E}/A \simeq \mathbf{S}/p_! A\) for all \(A\in \mathbf{E}\).

This seems natural to me, but it’s very strong! In particular, taking \(A=1\) we get an equivalence \(\mathbf{E}\simeq \mathbf{E}/1 \simeq \mathbf{S}/p_! 1\), so that \(\mathbf{E}\) is equivalent to a slice category of \(\mathbf{S}\). In other words, \(\infty\)-atomic geometric morphisms coincide with local homeomorphisms!

Is that really reasonable? Actually, I think it is. Consider the simplest example of an atomic geometric morphism of 1-topoi that is not a local homeomorphism: $[G,Set] \to Set$ for a group $G$. The corresponding geometric morphism of $(\infty,1)$-topoi $[G,\infty Gpd] \to \infty Gpd$ is a local homeomorphism! Specifically, we have $[G,\infty Gpd] \simeq \infty Gpd / B G$. So in a sense, the difference between atomic and locally-homeomorphic vanishes in the limit $n\to \infty$.
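
One way to see this equivalence (a quick sketch, not spelled out above): a $G$-action on an $\infty$-groupoid is a functor out of the one-object groupoid $B G$, and straightening/unstraightening identifies such functors with fibrations over $B G$; over an $\infty$-groupoid every map is (equivalent to) such a fibration, so
$$[G,\infty Gpd] \;=\; Fun(B G, \infty Gpd) \;\simeq\; \infty Gpd/B G,$$
with an action $X$ going to its homotopy quotient $X/\!\!/G \to B G$.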

To be sure, there are other atomic geometric morphisms of 1-topoi that do not extend to local homeomorphisms of $(\infty,1)$-topoi, such as $Cont(G) \to Set$ for a topological group $G$. But it seems reasonable to me to regard these as “1-atomic morphisms that are not $\infty$-atomic” — a thing which we should certainly expect to exist, just as there are locally 0-connected morphisms that are not locally $\infty$-connected, and 0-proper morphisms that are not $\infty$-proper.

We can also “see” how the difference gets “pushed off to $\infty$” to vanish, in terms of sites of definition. In C3.5.8 of the 1-Elephant it is shown that every atomic Grothendieck topos has a site of definition in which (among other properties) all morphisms are effective epimorphisms. If we trace through the proof, we see that this effective-epi condition comes about as the “dual” class to the monomorphisms that the left adjoint of a logical functor induces an equivalence on. Since an $(n+1,1)$-topos has classifiers for $n$-truncated objects, we would expect an atomic one to have a site of definition in which all morphisms belong to the dual class of the $n$-truncated morphisms, i.e. the $n$-connected morphisms. So as $n\to \infty$, we get stronger and stronger conditions on the morphisms in our site, until in the limit we have a classifier for all morphisms, and the morphisms in our site are all required to be equivalences. In other words, the site is itself an $\infty$-groupoid, and thus the topos of (pre)sheaves on it is a slice of $\infty Gpd$.

However, it could be that I’m missing something and this is not the best categorification of atomic geometric morphisms. Any thoughts from readers?

by shulman (viritrilbia@gmail.com) at June 18, 2018 05:06 AM

June 17, 2018

John Baez - Azimuth

Dynamical Systems and Their Steady States

 

As part of the Applied Category Theory 2018 school, Maru Sarazola wrote a blog article on open dynamical systems and their steady states. Check it out:

• Maru Sarazola, Dynamical systems and their steady states, 2 April 2018.

She compares two papers:

• David Spivak, The steady states of coupled dynamical systems compose according to matrix arithmetic.

• John Baez and Blake Pollard, A compositional framework for reaction networks, Reviews in Mathematical Physics 29 (2017), 1750028.
(Blog article here.)

It’s great, because I’d never really gotten around to understanding the precise relationship between these two approaches. I wish I knew the answers to the questions she raises at the end!

by John Baez at June 17, 2018 01:00 AM

June 16, 2018

Tommaso Dorigo - Scientificblogging

On The Residual Brightness Of Eclipsed Jovian Moons
While preparing for another evening of observation of Jupiter's atmosphere with my faithful 16" Dobsonian scope, I found out that the satellite Io will disappear behind the Jovian shadow tonight. This is quite a common phenomenon and not a very spectacular one, but still quite interesting to look forward to during a visual observation - the moon takes some time to fully disappear, so it is fun to follow the event.
This however got me thinking. A fully eclipsed Jovian moon should still be able to reflect back some light picked up from the other, still-lit satellites - so it should not, after all, appear completely dark. Can a calculation be made of the effect? Of course - and it's not that difficult.


by Tommaso Dorigo at June 16, 2018 04:47 PM

June 15, 2018

Emily Lakdawalla - The Planetary Society Blog

The Mars Exploration Rovers Update Special Report: Opportunity Pummeled by Massive Dust Storm, Hunkers Down to Sleep
Entrenched in the west rim of Endeavour Crater, veteran robot field geologist Opportunity is hunkered down in Perseverance Valley in a kind of hibernation mode.

June 15, 2018 11:46 PM

Emily Lakdawalla - The Planetary Society Blog

Dawn Journal: Spiralling Down
Propelled by the perfect combination of xenon ions, hydrazine rocket propellant and adrenaline, Dawn is on the verge of its most ambitious exploits yet.

June 15, 2018 11:23 PM

ZapperZ - Physics and Physicists

Is Theoretical Physics Wasting Our Best Minds?
Before you continue reading this, let me be very clear right off the bat that there are TWO separate issues here that I will be discussing, and they are thinly connected simply by the over-general reference of "theoretical physics" made by the author of the article that I will be citing.

In this Forbes article, Ethan Siegel highlights the main point made by Sabine Hossenfelder in her book "Lost In Math". Siegel not only points this out, but also gives an in-depth description of the developments leading up to the "naturalness" philosophy that is prevalent in esoteric fields of physics such as string theory.

If you are a theoretical particle physicist, a string theorist, or a phenomenologist — particularly if you suffer from cognitive dissonance — you will not like this book. If you are a true believer in naturalness as the guiding light of theoretical physics, this book will irritate you tremendously. But if you're someone who isn't afraid to ask that big question of "are we doing it all wrong," the answer might be a big, uncomfortable "yes." Those of us who are intellectually honest physicists have been living with this discomfort for many decades now. In Sabine's book, Lost In Math, this discomfort is now made accessible to the rest of us.

Certainly this is thought-provoking, and it isn't something I disagree with. For science to give up on empirical evidence and simply pursue something that looks "natural" or "beautiful" is dangerous and verges on religion. So my feelings are consistent with what has been said in the article.

Now comes the other part of the issue. It has always been my pet peeve when someone over-generalizes physics as being predominantly "high-energy physics, astrophysics, string theory, etc.", i.e. the esoteric fields of study. In this case, "theoretical physics" certainly is NOT dominated by those fields. There are theoretical studies in condensed matter physics, atomic/molecular physics, medical physics, accelerator physics, and so on, i.e. fields of study that are certainly not esoteric and that have lots of practical applications.

In fact, I would argue that the esoteric fields of physics represent the MINORITY in terms of the number of practicing physicists we have around the world. As a zeroth-order check of this claim, I decided to look at the membership of the APS. The APS Divisions reflect the number of members who declared themselves to be in a certain field within physics. Note that not all members made such a declaration, and it is also not uncommon for a member to declare more than one division.


First of all, 79% of APS members are accounted for in this chart of the 2018 membership. Now, what is the percentage of members within the so-called esoteric fields of Astrophysics, Gravitation, and Particles and Fields? 14.9%. Even if you include Nuclear Physics, it will only come up to 19.8%.

Now, forget about theoretical or experimental. Can 19.8% represent ALL of physics? The fields of study that a lot of people associate with physics are done by ONLY 19.8% of physicists! Using them, one gets a severely inaccurate representation of physics and physicists.

In fact, if you look at the fields more commonly associated with the physics of materials (Condensed Matter Physics and Materials Physics), you get 18.2%, almost as big as Astrophysics, Gravitation, Particles and Fields, and Nuclear Physics combined! Condensed matter physics alone dwarfs other fields, being almost twice as big as the next division, which is Particles and Fields.

But what is more important here is that outside of the 19.8% of physicists in these esoteric fields, an overwhelming percentage of physicists (59.2%) are in fields of study that are associated with practical applications of physics. So if you were to bump randomly into a physicist, chances are you will find someone who works in a field related to something of practical importance and NOT a high-energy physicist, a nuclear physicist, etc.
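
A quick sanity check of the arithmetic in Python (using only the aggregate percentages quoted above, not the division-by-division numbers):

esoteric_plus_nuclear = 19.8   # Astrophysics + Gravitation + Particles and Fields + Nuclear, in %
applied_and_other = 59.2       # everything else among members who declared a division
accounted_for = 79.0           # share of the 2018 APS membership that declared a division

print(round(esoteric_plus_nuclear + applied_and_other, 1))   # 79.0: the two groups exhaust the declared members
print(round(accounted_for - esoteric_plus_nuclear, 1))       # 59.2: the application-oriented share quoted above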

This is my round-about way of complaining that Ethan Siegel's article should not be a damnation of "theoretical physics" in general, because the overwhelming majority of theoretical physics is NOT about the esoteric topics mentioned in his article. Rather, theories in other parts of physics rely very heavily on empirical observations and verification, i.e. the good and tested way of doing science. In those areas, we are definitely NOT wasting our best minds!

A while back, I said that physics is not just the LHC. It is also your iPhone. Even that requires modification. We should say that physics is predominantly your iPhone, with only a smidgen of LHC added as garnishing. That is a more accurate representation of the field as a whole.

Zz.

by ZapperZ (noreply@blogger.com) at June 15, 2018 06:15 PM

John Baez - Azimuth

Applied Category Theory 2018/2019

A lot happened at Applied Category Theory 2018. Even as it’s still winding down, we’re already starting to plan a followup in 2019, to be held in Oxford. Here are some notes Joshua Tan sent out:

  1. Discussions: Minutes from the discussions can be found here.
  2. Photos: Ross Duncan took some very glamorous photos of the conference, which you can find here.

  3. Videos: Videos of talks are online here, courtesy of Jelle Herold and Fabrizio Genovese.

  4. Next year’s workshop: Bob Coecke will be organizing ACT 2019, to be hosted in Oxford sometime spring/summer. There will be a call for papers.

  5. Next year’s school: Daniel Cicala is helping organize next year’s ACT school. Please contact him if you would like to get involved.

  6. Look forward to the official call for submissions, coming soon, for the first issue of Compositionality!

The minutes mentioned above contain interesting thoughts on these topics:

• Day 1: Causality
• Day 2: AI & Cognition
• Day 3: Dynamical Systems
• Day 4: Systems Biology
• Day 5: Closing

by John Baez at June 15, 2018 04:51 PM

CERN Bulletin

CERN Running Club – Sale of Items

The CERN Running Club is organising a sale of items on 26 June from 11:30 – 13:00 in the entry area of Restaurant 2 (504 R-202). The items for sale are souvenir prizes of past Relay Races and comprise:

Backpacks, thermoses, towels, gloves & caps, lamps, long-sleeve winter shirts and windproof vests. All items will be sold for 5 CHF.

June 15, 2018 03:06 PM

John Baez - Azimuth

Cognition, Convexity, and Category Theory

Two more students in the Applied Category Theory 2018 school wrote a blog article about a paper they read:

• Tai-Danae Bradley and Brad Theilman, Cognition, convexity and category theory, The n-Category Café, 10 March 2018.

Tai-Danae Bradley is a mathematics PhD student at the CUNY Graduate Center and well-known math blogger. Brad Theilman is a grad student in neuroscience at the Gentner Lab at U. C. San Diego. I was happy to get to know both of them when the school met in Leiden.

In their blog article, they explain this paper:

• Joe Bolt, Bob Coecke, Fabrizio Genovese, Martha Lewis, Dan Marsden, and Robin Piedeleu, Interacting conceptual spaces I.

Fans of convex sets will enjoy this!

by John Baez at June 15, 2018 04:52 AM

The n-Category Cafe

The Behavioral Approach to Systems Theory

guest post by Eliana Lorch and Joshua Tan

As part of the Applied Category Theory seminar, we discussed an article commonly cited as an inspiration by many papers1 taking a categorical approach to systems theory, The Behavioral Approach to Open and Interconnected Systems. In this sprawling monograph for the IEEE Control Systems Magazine, legendary control theorist Jan Willems poses and answers foundational questions like how to define the very concept of mathematical model, gives fully-worked examples of his approach to modeling from physical first principles, provides various arguments in favor of his framework versus others, and finally proves several theorems about the special case of linear time-invariant differential systems.

In this post, we’ll summarize the behavioral approach, Willems’ core definitions, and his “systematic procedure” for creating behavioral models; we’ll also examine the limitations of Willems’ framework, and conclude with a partial reference list of Willems-inspired categorical approaches to understanding systems.

The behavioral approach

Here’s the view from 10,000 feet of the behavioral approach in contrast with the traditional signal-flow approach:

Image: Comparison table of traditional, functional approach vs Willems' relational approach

Willems’ approach breaks down into: (1) considering a dynamical system as a ‘behavior,’ and (2) defining interconnection as variable sharing.

Dynamical system as behavior

Willems goes so far as to claim: “It is remarkable that the idea of viewing a system in terms of inputs and outputs, in terms of cause and effect, kept its central place in systems and control theory throughout the 20th century. Input/output thinking is not an appropriate starting point in a field that has modeling of physical systems as one of its main concerns.”

To get a sense of the inappropriateness of input/output-based modeling: consider a freely swinging pendulum in 2-dimensional space with a finite-sized bob. Now consider adding to this system a model representing the right-hand half-plane being filled with cement. With soft-contact mechanics, we could determine what force the cement exerts on the pendulum when it bounces against it — that is, when the pendulum bob’s center of mass comes within its radius of the right half-plane.

Traditionally, we might define a function that takes the pendulum’s position as input and produces a force as output. But this is insufficient to model the effect of the wall, which also prevents the pendulum bob’s center of mass from ever being in the right-hand half-plane; the wall imposes a constraint on the possible states of the world. How can we capture this kind of constraint? In this case, we can extend the state model with inequalities delineating the feasible region of the state space.2
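
For instance, the feasible region can be written directly as a predicate on states rather than as anything functional. A toy Python sketch (illustrative numbers and names only, with the angle measured from the downward vertical):

import math

def feasible(theta, length=1.0):
    # horizontal position of the bob's centre for a pendulum pivoted at the origin
    x = length * math.sin(theta)
    return x <= 0.0   # the cement-filled right-hand half-plane is excluded

# the behavior of the coupled system then only contains trajectories theta(t)
# with feasible(theta(t)) at every time t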

Willems’ insight is that the entire modeling framework can be subsumed by a sufficiently broad notion of “feasible region.” A dynamical system is simply a relation on the variables, forming a subset of all conceivable trajectories — a ‘behavior.’

Interconnection as variable sharing

The signal-flow approach to systems modeling requires labeling the terminals of a system as inputs or outputs before a model can be formulated; Willems argues this does not respect the actual physics. Most physical terminals — electrical wires, mechanical linkages or gears, thermal couplings, etc. — do not have an intrinsic, a priori directionality, and may permit “signals” to “flow” in either or both directions. Rather, physical interconnections constrain certain variables on either side to be equal (or equal-and-opposite). After having modeled a system, one may be able to prove that it obeys a certain partitioning of variables into inputs and outputs, but assuming this up front obscures the underlying physical reality. This paradigm shift amounts to moving from functional composition (given $a$ we compute $b=f(a)$, then $c=g(b)$) to relational composition: $(a,c)\in(R;S)$ iff $\exists ((a,b_0),(b_1,c))\in(R\times S).\; b_0=b_1$, which can be read as “variable sharing” between $b_0$ and $b_1$. This is a way of restoring symmetry to composition — giving no precedence either between the two entities being composed, nor between each entity’s domain and codomain.
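
A minimal Python sketch of relational composition by variable sharing (illustrative names only): relations are sets of pairs, and composition just insists that the shared middle variable match, with no function being "applied".

def compose(R, S):
    # (a, c) is in R;S iff some shared value b has (a, b) in R and (b, c) in S
    return {(a, c) for (a, b0) in R for (b1, c) in S if b0 == b1}

# functional composition is the special case where R and S are graphs of functions
R = {(1, 10), (2, 20)}                 # graph of a |-> 10*a on {1, 2}
S = {(10, 'x'), (20, 'y'), (30, 'z')}  # a lookup table
print(compose(R, S))                   # {(1, 'x'), (2, 'y')}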

Image: Examples of systems to model in various domains
Examples of systems to model within the behavioral framework.

Core definitions

Given some natural phenomenon we wish to model mathematically, the first step is to establish the universum, the set of all a priori feasible outcomes, notated $\mathbb{V}$. Then, Willems asserts a mathematical model to be a restriction of possible outcomes to a subset of $\mathbb{V}$.3 This subset itself is called the behavior of the model, and is written $\mathcal{B}$. This concept is, as the name suggests, at the center of Willems’ “behavioral approach”: he asserts that “equivalence of models, properties of models, model representations, and system identification must refer to the behavior.”

A dynamical system is a model in which elements of the universum $\mathbb{V}$ are functions of time, that is, a triple

$$\Sigma = \left(\mathbb{T}, \mathbb{W}, \mathcal{B}\right)$$

in which $\mathcal{B} \subseteq \mathbb{V} := \mathbb{W}^\mathbb{T}$. $\mathbb{T}$ is referred to as the time set (which may be discrete or continuous), and $\mathbb{W}$ is referred to as the signal space. The elements of $\mathcal{B}$ are trajectories $w: \mathbb{T}\rightarrow\mathbb{W}$.
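
As a toy discrete-time illustration in Python (the names and the particular law here are made up, not Willems'): take $\mathbb{T}$ to be the naturals, $\mathbb{W}$ the reals, and let $\mathcal{B}$ consist of the trajectories of a first-order linear law, checked on a finite window.

def in_behavior(w, horizon=20):
    # membership in B = { w : w(t+1) = 0.5 * w(t) for all t }, checked up to `horizon`
    return all(abs(w(t + 1) - 0.5 * w(t)) < 1e-9 for t in range(horizon))

print(in_behavior(lambda t: 3.0 * 0.5 ** t))   # True: a genuine trajectory of the law
print(in_behavior(lambda t: float(t)))         # False: excluded from the behavior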

A dynamical system with latent variables is one whose signal space is a Cartesian product of manifest variables and latent variables: the “full” system is a tuple

$$\Sigma_{full} = \left(\mathbb{T}, \mathbb{M}, \mathbb{L}, \mathcal{B}_{full}\right)$$

where the behavior $\mathcal{B}_{full} \subseteq \mathbb{V} := \left(\mathbb{M} \times \mathbb{L}\right)^\mathbb{T}$. Here $\mathbb{M}$ is the set of manifest values and $\mathbb{L}$ is the set of latent values.

A full behavior $\Sigma_{full}$ is said to induce or represent a manifest dynamical system $\Sigma = \left(\mathbb{T}, \mathbb{M}, \mathcal{B}\right)$, with the manifest behavior $\mathcal{B}$ defined by

$$\mathcal{B} := \left\{\, m: \mathbb{T} \rightarrow \mathbb{M} \;\middle|\; \exists \ell: \mathbb{T} \rightarrow \mathbb{L}.\; \left\langle m,\ell\right\rangle \in \mathcal{B}_{full} \,\right\}$$
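
A small finite sketch of this projection in Python (with a toy law chosen purely for illustration): the manifest behavior is what survives after existentially quantifying over the latent trajectories.

from itertools import product

T = range(3)       # three time steps
M = [0, 1]         # manifest values
Lat = [0, 1]       # latent values

def in_full_behavior(m, l):
    # toy law: the latent trajectory copies the manifest one and must alternate in time
    copies = all(l[t] == m[t] for t in T)
    alternates = all(l[t + 1] != l[t] for t in range(len(T) - 1))
    return copies and alternates

manifest_behavior = {
    m for m in product(M, repeat=len(T))
    if any(in_full_behavior(m, l) for l in product(Lat, repeat=len(T)))
}
print(sorted(manifest_behavior))   # [(0, 1, 0), (1, 0, 1)]: only alternating manifest trajectories remain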

The behavior of all the variables is determined by the equations specifying the first-principles physical laws, together with the equations expressing all the constraints from interconnection. Willems treats interconnection as simply “variable sharing,” that is, restricting behavior such that the trajectories assigned to the interconnected variables are constrained to be equal (or sometimes “equal and opposite,” depending on sign conventions).

An interconnection architecture is a sort of wiring diagram (analogous to operadic wiring diagrams) that describes the way in which a collection of systems is interconnected. Willems formalises this as a graph with leaves: a set $V$ of vertices (which are systems or modules), a set $E$ of edges (terminals), and a set $L$ of leaves (open wires), with an assignment map $\mathcal{A}$: to each edge an unordered pair of vertices and to each leaf a single vertex. A leaf is depicted as an open half-edge emanating from the graph, like a wire sticking out of a circuit — the “open” part of “open and interconnected systems.” (Note that this interpretation of a “leaf” differs from the usual graph-theoretic “vertex with degree one”; here, a leaf is like an “edge with degree one.”) There’s a type-checking condition that the set of leaves and internal half-edges emanating from a vertex must be completely matched to the set of terminals of the associated module, so that you don’t have hidden dangling wires.

Finally, the interconnection architecture requires a module embedding that specifies how each vertex is interpreted as a module: either a primitive model made of physical laws, or a sub-model within which there’s a further module embedding. Here we get a sense of the “zooming” nature of the modeling procedure.
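
One possible Python encoding of this data (the field names, and which terminals are left open in the example, are guesses for illustration): a graph with leaves together with a module embedding.

from dataclasses import dataclass, field

@dataclass
class Architecture:
    vertices: set                                  # subsystems / modules
    edges: dict = field(default_factory=dict)      # edge name -> unordered pair of vertices
    leaves: dict = field(default_factory=dict)     # leaf name -> the single vertex it hangs off
    modules: dict = field(default_factory=dict)    # module embedding: vertex -> model or nested Architecture

# the two-tanks-and-a-pipe example discussed below
arch = Architecture(
    vertices={"tank1", "pipe", "tank2"},
    edges={"tank1-pipe": frozenset({"tank1", "pipe"}),
           "pipe-tank2": frozenset({"pipe", "tank2"})},
    leaves={"inlet": "tank1", "outlet": "tank2"},
    modules={"tank1": "tank law", "pipe": "pipe law", "tank2": "tank law"},
)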

Tearing, Zooming, Linking

Willems proposes a “systematic procedure” for generating models in this behavioral form: first decompose (tear) the system under investigation into smaller subsystems, then recursively apply the modeling process (zoom in) to each subsystem, and finally compose (link) the resulting submodels together into an overall system model. Rendered in pseudocode, it looks like this:

define makeModel(System system) => Model {
  if (system is directly governed by known physics) {
    return knownModel(system)                         // base case: a first-principles model
  } else {
    WiringDiagram<System> decomposition := tear(system)                       // tear: break the system into subsystems
    List<Model> submodels := decomposition.listSubsystems().fmap(makeModel)   // zoom: model each subsystem recursively
    return decomposition.link(submodels)                                      // link: compose along the wiring diagram
  }
}

As an example, Willems analyzes an open hydraulic system made of two tanks interconnected by a pipe:

Image: Two tanks interconnected by a pipe

In the “tear” step, he breaks the system apart into three subsystems: the two tanks, (1) and (3) in the figure, and the pipe (2). In the “zoom” step, each of the three subsystems is “simple enough to be modeled using first-principles physical laws,” so he fills in the known model for each one (reaching the recursive base case, rather than starting again from “tear”). For the pipe, flows on each end are equal and opposite, and the difference in pressure is proportional to the flow; for each of the tanks, conservation of mass and Bernoulli’s laws relate the pressures, flows, and height of water in the tank.

Then, in the “link” step, he starts with, for each subsystem, a copy of the corresponding model (initially each relating a completely separate set of variables), then combines the models according to the links between subsystems, using the appropriate “interconnection laws” for each pair of connected terminals. In this example, the interconnection laws consist of setting connected pressures to be equal and connected flows to be equal and opposite.
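
A hedged sketch of the "link" step using sympy, with crude static stand-ins for the tank and pipe laws (these are not Willems' exact equations): each submodel is a list of equations over its own variables, and interconnection just adds equality and equal-and-opposite constraints on the connected terminals.

import sympy as sp

# pipe: flows equal and opposite, pressure difference proportional to the flow
pA, pB, fA, fB, R = sp.symbols('pA pB fA fB R')
pipe = [sp.Eq(fA + fB, 0), sp.Eq(pA - pB, R * fA)]

# crude static tank stand-ins: port pressure set by the water height
p1, f1, p2, f2, h1, h2, rho, g = sp.symbols('p1 f1 p2 f2 h1 h2 rho g')
tank1 = [sp.Eq(p1, rho * g * h1)]
tank2 = [sp.Eq(p2, rho * g * h2)]

# interconnection laws: connected pressures equal, connected flows equal and opposite
links = [sp.Eq(p1, pA), sp.Eq(f1, -fA), sp.Eq(p2, pB), sp.Eq(f2, -fB)]

model = pipe + tank1 + tank2 + links
sol = sp.solve(model, [pA, pB, fA, fB, p1, p2, f1, f2], dict=True)[0]
print(sol[fA])   # the pipe flow comes out proportional to rho*g*(h1 - h2)/R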

An essential claim of Willems’ philosophy is that for physical systems for which we can use his modeling procedure of breaking systems into subsystems with interconnections, the hierarchical structure of our model will match reality closely enough that there will be straightforward physical principles governing the interactions at the interfaces (which tend to correspond to partitions of physical space). Many engineered systems deliberately have their important interfaces clearly delineated, but he explicitly disclaims that there are forms of interaction, such as gravitational interaction and colliding objects, which do not perfectly fit this framework.

Limitations of Willems’ framework

Not all interconnections fit. This framework assumes that the interface of a module can be specified—as a finite set of terminals—prior to composition with other modules, and Willems identifies three situations where this assumption fails:

  • $n$-body problems exhibiting “virtual terminals” which are not so much a property of each module, but of each pair of modules. The classic example of this phenomenon is the $n$-body problem in gravitational (or electrostatic, etc.) dynamics: an $n$-body system has $O(n^2)$ interactions, but the combination of an $n$-body system and an $m$-body system has more than $O(n^2+m^2)$ interactions.

  • “Distributed interconnections” in which a terminal has continuous spatial extent (e.g. heat conduction along a surface), calling for partial differential equations involving coordinate variables.

  • Contact mechanics such as rolling, sliding, bouncing, collisions, etc., in which interconnections appear and vanish depending on the values of certain position variables, as objects come into and out of contact.

Directional components and systems. In contrast to an a posteriori partitioning of variables into inputs and outputs (meaning that any setting of the “inputs” uniquely determines the trajectories of the “outputs”), some components fundamentally exhibit a priori input/output behavior (that is, they cannot be back-driven), and Willems’ framework can’t accommodate these.

  • Ideal amplifier. The behavior of an ideal amplifier with gain $K$, input $x$ and output $y$ would be $\{(x,y) \mid y = K x\}$ (constant-gain model), yet Willems’ approach here would make the incorrect prediction that we could back-drive terminal $y$ by interconnecting it to a signal source and expect to observe the signal scaled by $1/K$ at terminal $x$. However, an “ideal amplifier” is not a first-principles physical law; the modeling procedure might suggest we “tear” an amplifier further into its component parts, and then “tear” the constituent transistors with a deconstruction (such as the Gummel-Poon model) into passive circuit primitives. This might result in a more realistic model of actual amplifier behavior, though it would have at least an order of magnitude more components than the constant-gain model.

  • Humans, etc. Willems lists additional signal-flow systems for which the behavioral approach is not quite adequate: actuator inputs and sensor outputs interconnected via a controller, reactions of humans or animals, devices that respond to external commands, or switches and other logical devices.

Cartesian latent/manifest partitioning. Among Willems’ arguments against mandatory input/output partitioning is the simple and compelling example of a particle moving on a sphere, whose position is truly an output and whose velocity is truly an input—yet even in this seemingly favorable setup, the full state space (the tangent bundle of the sphere) cannot be decomposed as a Cartesian product of positions on the sphere with any vector space. However, Willems uses exactly the same hidden assumption in his definition of a dynamical system with latent variables. If a dynamical system’s full state space can’t be written as a Cartesian product, then its behavior can’t be represented in the way Willems defines.

Probability. Willems’ non-deterministic approach to behaviors is a kind of unquantified uncertainty; it doesn’t natively give us a way of associating probabilities with elements of a “behavior” (although behaviors could be considered as always having an implicitly uniform distribution). Nontrivial distributions could also be modeled by defining the universum $\mathbb{V} := P\!\left(\mathbb{W}^\mathbb{T}\right)$ (where $P$ is probability, namely the Giry monad), but the non-determinism of $\mathcal{B}\subseteq \mathbb{V}$ introduces “Knightian uncertainty”, in that models are now sets of distributions, with no probabilities specified at the top level—and it’s unclear how such models should compose with non-stochastic models.

Categorical approaches to systems

As mentioned earlier, Willems has been an inspiration for many papers in applied category theory. One common feature is that many take a relational approach to semantics, providing a functor into (some subcategory of) $\mathbf{Rel}_\times$. Here is a (non-exhaustive!) reference list of these and related works.

Applications to specific domains:

  • Passive linear networks: Baez and Fong, 2015. Constructs a “black-boxing” functor from a decorated-cospan category of passive linear circuits (composed of resistors, inductors and capacitors) to a behavior category of Lagrangian relations.

  • Generalized circuit networks: Baez, Coya and Rebro, 2018. Generalizes the black-boxing functor to potentially nonlinear components/circuits.

  • Reversible Markov processes: Baez, Fong and Pollard, 2016. Constructs a “black-boxing” functor from a decorated-cospan category of reversible Markov processes to a category of linear relations describing steady states.

  • Petri nets / reaction networks: Baez and Pollard, 2017. Constructs a “black-boxing” functor from a decorated-cospan category of Petri nets to a category of semi-algebraic relations describing steady states, with an intermediate stop at a “grey-box” category of algebraic vector fields.

  • Digital circuits: Ghica, Jung and Lopez, 2017. Defines a symmetric monoidal theory of circuits including a discrete-delay operator and feedback, with operational semantics.

  • Discrete linear time-invariant dynamical systems (LTIDS): Fong, Sobocinski, and Rapisarda, 2016. Constructs a full and faithful functor from a freely generated symmetric monoidal theory into the PROP of LTIDSs and characterizes controllability in terms of spans, among other things—not only using Willems’ definitions and philosophy, but even some of his theorems.

General frameworks:

  • Algebra of Open and Interconnected Systems: Brendan Fong’s 2016 doctoral thesis. Covers his technique of decorated cospans as well as more recent work on decorated corelations, both of which are especially useful to construct syntactic categories for various kinds of non-categorical diagrams.

  • Topos of behavior types: Schultz and Spivak, 2017. Constructs a temporal type theory, as the internal logic of the category of sheaves on an interval domain, in which every object represents a behavior that seems to be essentially in Willems’ sense of the word.

  • Operad of wiring diagrams: Vagner, Spivak and Lerman, 2015. Formalises construction of systems of differential equations on manifolds using Spivak’s “operad of wiring diagrams” approach to composition, which is conceptually similar to Willems’ notion of hierarchical zooming into modules.

  • Signal flow graphs: Bonchi, Sobocinski and Zanasi, 2017. Sound and complete axiomatization of signal flow graphs, arguably the primary incumbent against which Willems’ behavioral approach contends.

  • Bond graphs: Brandon Coya, 2017. Defines a category of bond graphs (an older general modeling framework which Willems acknowledges as a step in the right direction toward a behavioral approach) with functorial semantics as Lagrangian relations.

  • Cospan/Span(Graph): Gianola, Kasangian and Sabadini in 2017 review a line of work mostly done in the 90’s by Sabadini, Walters and collaborators on what are essentially open and interconnected labeled-transition systems.

Many thanks to Pawel Sobocinski and Brendan Fong for feedback on this post, and to Sophie Raynor and other members of the seminar for thoughts and discussions.


1 see e.g. Spivak and Schultz’s Temporal type theory; Fong, Sobocinski, and Rapisarda’s Categorical approach to open and interconnected dynamical systems; Bonchi, Sobocinski, and Zanasi’s Categorical semantics of signal flow graphs

2 Depending on the formalisation, a system of differential equations could technically contain equality constraints without any actual derivatives, such as $x^2 + y^2 - 1 = 0$, which can restrict the feasible region without augmenting the modeling framework to include inequalities. We could even impose the constraint $x \leq 0$ by using a non-analytic function: $0 = \begin{cases} e^{-1/x} & \text{if } x > 0 \\ 0 & \text{if } x \leq 0 \end{cases}$

3 Note: From a computer science perspective, this says that any “mathematical model” must be a non-deterministic model, as opposed to, on the one hand, a deterministic model (which would pick out an element of $\mathbb{V}$), or, on the other hand, a probabilistic model (which would give a distribution over $\mathbb{V}$). If we are given free choice of $\mathbb{V}$, any of these kinds of model is encodable as any other, but the choice is significant when it comes to composition.

by john (baez@math.ucr.edu) at June 15, 2018 02:18 AM

June 14, 2018

Lubos Motl - string vacua and pheno

If you have trouble with string theory, it simply proves you're not too smart
The author of a new embarrassing anti-physics book that was released today is finally receiving the expected affirmative action from the political activists who pretend to be science journalists and who pretend that the author of the book is a physicist who is worth the name – she is definitely not one.

One of the uncritical reviews was published in Nature. She has a vagina so she must surely be right in her disagreements with Wilczek, Weinberg, Polchinski, and Arkani-Hamed – to suggest otherwise would be an example of sexism. But I had to laugh when I saw the title and the punch line of a Forbes text by Ethan Siegel:
Is Theoretical Physics Wasting Our Best Living Minds On Nonsense?
That's a nice question! Siegel must be applauded for having confronted an actual question that all other members of the organized crackpot movement have so far ignored:
What is your standing? Why do you think you have the right to question the legitimacy of the research voluntarily chosen by a few hundred or at most a few thousand people in the world who think that they're doing something important?
You know, this question is a very important one. When one of these crackpots spends much of his time fighting against modern physics, it's hard to justify this jihad by financial considerations. Why? Fewer than 1,000 people in the world are actually being paid as string theorists or something "really close" right now, and even if you realistically assume that the average string theorist is paid more than the average person, the fraction of mankind's money that goes to string theory is some "one millionth" or so. Or 1/100,000 of the money that goes to porn or any other big industry. Moreover, the funds are allocated by specialized institutions or donors – these are decisions too technical for the taxpayer to make directly.

So taxpayer money is unlikely to be a good justification for the frantic, hateful efforts by which scumbags of the W*it and Sm*lin type are trying to hurt the image of physics in the eyes of the public (and, if possible, to outlaw string theory research), right?




So what is the actual justification of these attacks? Siegel has articulated his idea about the ultimate sin that is taking place:
By allowing string theory (etc.) research to exist, mankind is wasting its best living minds on nonsense.
Again, that's a really cute accusation.




Now, if your IQ is below 80, pretend that it is above 80 for a while. And whatever it is, try to answer the following question:
Imagine that in a room, there is BLM, one of the world's 1,000 Best Living Minds (well, surely not a member of Black Lives Matter), and the average Joe, AJ. They disagree about the kind of theory or methodology that should be used to advance cutting-edge theoretical physics.

Based on the aforementioned data and nothing else, decide which of the two answers is more likely or true:
  1. The average Joe is more likely to be right than one of the world's best living minds. He knows which theory is good to advance theoretical physics. The best living mind should be employed by the average Joe and do what the average Joe tells him.
  2. One of the world's best living minds is more likely to be right about the right theory or methodology – because he's smarter than the average Joe.
Now, if you can't really understand why the second answer is right and the first is not, your IQ must be below 80. You're mentally closer to a chimp than to me. Call my attitude elitism or bragging or anything you want, but it is really pure common sense, and if you don't have it, you're not only stupider than the average "best living mind" among the top 1,000 and dumber than the average string theorist. You're also dumber than the average human – by a lot.

You know, the best living minds may or should really be defined as those that have the potential to produce correct answers to difficult enough questions that are too hard for most people. So if he weren't better at answering questions such as "is string theory a far more promising way to go beyond effective quantum field theories than other ideas that were proposed as alternatives?", he just wouldn't be one of the best living minds! It's as simple as that. The statement is a tautology. I don't need to collect any additional evidence.

So you either assume that the 1,000 or so string theorists are among the best living minds – but then it follows that they're far more likely to be right about the choice of string theory than you are when you feel uncomfortable with it; or you decide that they're not actually the best living minds, in which case string theory research isn't wasting the best intellectual capacities – it's wasting at most 1/1,000,000 of mankind's money, which is utterly negligible.

In reality, the number of string critics who systematically buy W*it-like books may be counted at least in the tens of thousands – it vastly exceeds the number of string theorists in the world. So why do they care about the existence of string theorists, a tiny minority in this "combined community"? Well, it's because the critics know that the minority actually matters in finding insights about physics; almost everyone knows that, because the knowledge gap and intelligence gap are instantly obvious to the naked eye, and they know it, too. By fighting against the minority, these anti-string activists know that they're fighting against the truth. It may be a truth that is inconvenient for their egos.

You don't really need to be a string theorist to understand that string theorists are the cream of the cream of the cream. Most people have met someone who belongs to the cream of the cream, e.g. an astronaut. Well, there's some extra selection related to the theoretical physics-related abilities needed to become a string theorist.

Most of you have attended elementary school. Maybe 20% of someone's classmates really learn how to solve sets of linear equations. Out of those, 20% master linear algebra. Out of those, 20% master the basic mathematical apparatus of Hilbert spaces needed in quantum mechanics. Out of those, 20% have also learned the required physical background to work within quantum mechanics, and so on. At the very end, 20% of the theoretical physics grad students may seriously start to work on string theory, 50% of such grad students write a decent paper to show they're not just students, and 20% of those join faculty at some point. 20% of those get some non-negligible grant and 20% of those get some major prize. When you think about similar selection mechanisms, you will determine that some 10,000 living people in the world have the capacity to learn everything needed to become a researcher in string theory – and about 2,000 living people have been actually trained to become string theory researchers.

I don't claim that my numbers are extremely precise or justifiable by surveys etc. It would be hard to define all these categories precisely. But the actual point is that the total selection is clearly massive and consists of many stages. Even much more ordinary people must know about the several stages of the selection – so the general concept just cannot be new for them.

At pretty much every level, the hurdles are almost entirely intellectual. Most people don't learn to systematically solve sets of linear equations simply because this methodology is too hard. Or it's too boring for some people, but people are ultimately bored mostly because such activities devour too much of their mental energy. Among those who learn this stuff in basic linear algebra, a majority cannot learn something more advanced because it's too hard for them. And the process continues like that. Most physics students say that they won't be theoretical physicists because they feel they're not smart enough for that. They are often (but not always) better than theoretical physicists in other, more practical skills. The final step from QFTs to string theory is completely analogous. String theory doesn't really "totally conceptually" differ from QFTs. The step from QFTs to string theory is very analogous to the step from non-relativistic quantum mechanics to QFTs. In both cases, you switch to an "even sharper toolkit", the switch requires additional skills, and there's another stage of selection. The critics sometimes pretend that string theory is a "totally different kind of activity" than QFTs (perhaps more like philosophy or religion), which is just plain rubbish. In practice, the work in string theory is a union of advanced methods in QFTs that are used in clever ways and linked to each other by links that aren't obvious from a pure QFT viewpoint.

Every sane person has some humility and realizes that he has some limitations – and that some groups of people, or at least some individuals, are just smarter than he is, at least in some respects. I surely do realize that a hundred or more theoretical physicists are better (and have always been better) than me at XY, where XY may be chosen to be "almost anything" that is actually needed in contemporary research.

Edward Witten may be considered the world's smartest theoretical physicist but make no mistake about it: He genuinely believes that when it comes to various skills, especially those related to the original creative thinking, there are people who beat him. Witten is sometimes saying things just to sound humble. But in this case, he means it. When Witten said e.g. that Sen's discovery of the basic theorems about the tachyonic potentials in the late 1990s looked ingenious and Witten didn't know how and where Sen had managed to get those ideas, he probably wasn't lying. Witten was better at extending these ideas to a full-blown machinery with detailed equations and connections to concepts known to professional mathematicians (such as K-theory) than Sen. But yes, I do think that some physicists have a more penetrating intuition than Witten.

Everyone who is sane realizes that there are people who are smarter when it comes to the creative thinking of a theoretical physicist, other or the same people who are better when it comes to the mathematical abilities, and other things – or at least in some very particular, isolated, special skills. Only arrogant morons – who usually don't belong to the cream, let alone the cream of the cream, let alone the cream of the cream of the cream – may assume otherwise.

So Siegel's accusation is tautologically nonsensical. The best living minds are exactly those who are the best at deciding about the very deepest questions, so by definition, it would be counterproductive for them to obey the instructions related to these very matters issued by someone else – e.g. by the average Joe or by Ethan Siegel.

Unless you are really stupid, a person with an IQ below 80, you must actually understand (even if you pretend otherwise) that the disagreement about the value of string theory (etc.) between the (let us assume that they are) best living minds and the average Joe (or the average college student) has a very simple explanation: the best living minds are right and the average minds are not. The value of string theory is a damn complicated question. The average minds obviously haven't learned string theory, so they can't meaningfully decide whether something is good or bad about it. They may at most estimate the length of the emperor's nose that they've never seen. Their relevance for this question is on a par with the monkeys' relevance.

Should the average Joe dictate the framework in which the world's best living minds should operate and think? Should the average Joe decide whether the best living minds will be kindly allowed to think about string theory? Should the best living minds be employed as bricklayers effectively supervised by the average Joe? I hope not. My country has gone through communism, which basically introduced this system. In effect, the system was equivalent to the elimination of the actual elites – on top of those who were elites purely according to their inherited social status – and whole nations suffered as a result. They had to.

If someone is better at thinking, he should have the freedom to think (well, everyone should have this freedom) – and he should have a bigger impact on decisions, especially esoteric ones and especially decisions about his own life (and his own research if he's a researcher), than the average Joe. The reasons are exactly the same as the reasons why we send good athletes, and not the average ones or non-athletes, to the Olympic games and why the music played on the radio is usually composed by better, and not subpar, composers.

If someone has dedicated a few years to these matters and he has failed to learn string theory and to understand that it's the only known promising way to go beyond quantum field theory as of 2018, then I can assure you that his IQ is below 150. If he has won a Nobel prize, good luck has played some role (well, there have been several physics Nobel prize winners with a near-average IQ of around 120, too – they could only get the prize with some luck but couldn't systematically push the cutting edge of theoretical physics). He or she can be smart relative to the average Joe (and the average Joe may honestly fail to see that the person isn't a member of the cream of the cream of the cream – because the degree of nesting may effectively look infinite and the difference between 140 and 170 may be hard to see if you look from the level of 90) but he or she is also significantly and demonstrably dumber than any of the world's 1,000 best minds if you compare these people with some appropriate tools.

If you investigate what smart enough people – who have cared about these matters – honestly think about string theory, you may really measure their intelligence in this way. The more they appreciate string theory, the smarter they are. I don't say that the relationship is perfectly precise but the correlation is very strong. It's because string theory demonstrably works as a solution to numerous problems of the classical GR and of the QFTs, it's internally consistent, and it provides us with consistency checks and "good news" that weren't a priori guaranteed and that increase the probability that it's a conglomerate of ideas that deserves to be studied seriously. But to understand these seemingly easy claims – about string theory's consistency and its ability to produce insights beyond GR and QFTs – you simply need high intelligence. The higher IQ you have, the clearer picture you will be able to get. Most people, including a majority of string theorists, significantly underestimate the value and importance of string theory.

Some people may say nice things about string theory but if you ask what they really think and how, they won't be sure. It's easy to find out that the wisdom is fabricated if it is fabricated.

The likes of Lee Sm*lin, Peter W*it, Sabine Hossenfelder, Ethan Siegel, and lots of others have tried to learn string theory and understand the reasons why string theorists think that string theory is on the right track. But they have failed. They have failed for a simple reason. String theory is just too hard for them. They don't have the courage to admit (to themselves and others) the self-evident fact that they just don't belong among the world's top 10,000 minds because of this failure. So they invented this whole propaganda about string theory's being evil, terrible, unfalsifiable, too ugly, or too beautiful, but certainly despicable. And there are lots of morons who happily join this movement because at many levels, they also want to pretend that they are smarter than they are. They want to play this game in front of the world – and in their own eyes.

But it's you, comrades, and not string theory, where the problem sits. The problem is with your limited intelligence, arrogance, and lack of integrity. You may find thousands of šitheads who are similar to you, who buy your crappy books, and who will help you to deny the obvious facts. But the facts won't change. You will still be trash intellectually and morally.

And that's the memo.



The Guardian hit piece against Einstein

The British left-wing daily has chosen Albert Einstein as the ultimate racist. Why? Because while in China, he was surprised by the spiritlessness and obtuseness of the Chinese, including the kids. To him they looked like a herd-like nation resembling automatons, and the arrangement of their bodies while eating reminded him of the Europeans' configuration while relieving themselves in the leafy woods. If, in combination with their fecundity (at that time), this meant that the Chinese would come to dominate the world population, it would be a "pity", Einstein thought.

Einstein also pointed out that Chinese society already looked like a feminist hell – there was very little difference between men and women – and he didn't understand how Chinese women could make the men horny, which was arguably needed for survival. (Well, the answer to this puzzle of Einstein's lies in the proper calibration, I think. Einstein just wasn't calibrated in the same way as the Chinese men, so even if he failed to be aroused during his visit to Asia, this fact, as well as his surprise, had to be unsurprising.)

You know, I've liked numerous Chinese folks, usually students, and I've noticed that people from Taiwan etc. usually behaved basically just like Westerners. But I would feel like a complete idiot if I tried to contradict Einstein about the typical people from mainland China. The differences in personality between the Europeans – and the Jews – on one side and the Chinese on the other are so obvious that only a pure moron or a hypocrite may deny them. They must strike every Westerner. In some cases, the differences may be imagined to be neutral – a matter of different cultural habits. In others, it's hard to "retrain" ourselves in this way. Instead of attacking Albert Einstein, comrades, what if you considered the possibility that the smarter person was right in this case as well, while you're either wrong or you pretend to believe something other than what you actually believe?

Einstein described the folks in Ceylon as living in filth while the Japanese were appealing and admirable, but were still mentioned with some negative labels elsewhere.

by Luboš Motl (noreply@blogger.com) at June 14, 2018 10:48 AM

Lubos Motl - string vacua and pheno

An Indian interview with Juan Maldacena
If you have 16 spare minutes, you should listen to this fresh interview with Juan Maldacena (transcript).



The audio sucks but he says a couple of interesting things. In the 1990s, he and numerous classmates in Argentina were into string theory. They were also dreaming about not starving to death after their PhD, and they were sorry about the canceled collider in Texas.

You know, this is a setup that makes AdS/CFT-like breakthroughs much more likely. You start with some substantial pool of young people who are focusing on things that really matter, young people in a third world country or elsewhere, and approximately one of them makes breakthroughs similar to Maldacena's.




He got to Princeton, AdS/CFT was embraced easily, and so on. That part of the story is boring, of course. I was a bit surprised that Juan doesn't even remember the years when various events of the First Superstring Revolution of the mid 1980s took place. I am not really into social affairs but I wouldn't forget even the fact that e.g. Green and Schwarz announced their anomaly cancellation results in Aspen, Colorado in Summer 1984 – in a theater play in which Schwarz starred as a madman who had just found a theory of everything.




Maldacena also talks about wormholes, the relationships of string theory to condensed matter physics and quantum information theory, the advantages and disadvantages of fashions, and related things. When asked about the doomed spacetime, he suggests that it doesn't have to be the right way of looking at things. Instead, the spacetime could very well be precisely fundamental in some description, too.

I completely sympathize with this viewpoint. After all, AdS/CFT and ER=EPR are supposed to be dualities – relationships between two equally legitimate descriptions – and one of them simply has an explicit and fundamental bulk spacetime (including the region in the neck of the wormhole) while in the other description, the spacetime is emergent. So there could be and maybe there should be an exact description in which the degrees of freedom are built from something localized in the spacetime.

A refreshing aspect of the Indian interview is that he isn't asked about the falsifiability, string wars, the mandatory ugliness of theories and the physicists, and all the would-be important things that monster minds such as Schmoitenfelders and similar hacks are filling the Western media with. India has been spared of this garbage – probably because its journalists aren't neo-Marxist scumbags like the bulk of the journalists in the West. From a domestic viewpoint, the Indians who are connected to the West and string theory are right-wing elites. That makes a difference.

by Luboš Motl (noreply@blogger.com) at June 14, 2018 05:03 AM

June 13, 2018

ZapperZ - Physics and Physicists

MinutePhysics Special Relativity Chapter 6
If you missed Chapter 5 of this series, check it out here.

Here's Chapter 6 of the Minute Physics series on Special Relativity. This time, they are tackling a topic that I see being asked numerous times: velocity addition. ("If I'm traveling close to the speed of light and I turn on my flashlight.....").

I know that this topic has been covered here many times, but it is worth repeating, especially since someone may have missed the earlier ones.
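
For readers who want to plug in numbers themselves, here is a minimal sketch (mine, not part of the video) of the special-relativistic velocity-addition formula w = (u + v)/(1 + uv/c^2); the function name and the sample speeds are purely illustrative.

```python
def add_velocities(u, v, c=299792458.0):
    """Relativistic addition of two collinear velocities u and v (same units as c)."""
    return (u + v) / (1.0 + u * v / c**2)

c = 299792458.0  # speed of light in m/s

# A ship moving at 0.9c turns on its flashlight: the beam still moves at exactly c.
print(add_velocities(0.9 * c, c) / c)        # -> 1.0

# Two boosts of 0.9c combine to about 0.994c, not 1.8c.
print(add_velocities(0.9 * c, 0.9 * c) / c)  # -> 0.9944...

# At everyday speeds the correction is negligible and the Galilean answer returns.
print(add_velocities(30.0, 30.0))            # -> essentially 60 m/s
```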



Zz.

by ZapperZ (noreply@blogger.com) at June 13, 2018 02:32 PM

The n-Category Cafe

Fun for Everyone

There’s been a lot of progress on the ‘field with one element’ since I discussed it back in “week259”. I’ve been starting to learn more about it, and especially its possible connections to the Riemann Hypothesis. This is a great place to start:

Abstract. This text serves as an introduction to \(\mathbb{F}_1\)-geometry for the general mathematician. We explain the initial motivations for \(\mathbb{F}_1\)-geometry in detail, provide an overview of the different approaches to \(\mathbb{F}_1\) and describe the main achievements of the field.

Lorscheid’s paper describes various approaches. Since I’m hoping the key to \(\mathbb{F}_1\)-mathematics is something very simple and natural that we haven’t fully explored yet, I’m especially attracted to these:

  • Deitmar’s approach, which generalizes algebraic geometry by replacing commutative rings with commutative monoids. A lot of stuff in algebraic geometry, like the ideals and spectra of commutative rings, or the theory of schemes, doesn’t really require the additive aspect of a ring! So, for many purposes we can get away with commutative monoids, where we think of the monoid operation as multiplication. Sometimes it’s good to use commutative monoids equipped with a ‘zero’ element. The main problem is that algebraic geometry without addition seems to be approximately the same as toric geometry — a wonderful subject, but not general enough to handle everything we want from schemes over \(\mathbb{F}_1\).

  • Toën and Vaquié’s approach, which goes further and replaces commutative rings by commutative monoid objects in symmetric monoidal categories (which work best when they’re complete, cocomplete and cartesian closed). If our symmetric monoidal category is \((\mathbf{AbGp}, \otimes)\) we’re back to commutative rings, if it’s \((\mathbf{Set}, \times)\) we’ve got commutative monoids, but there are plenty of other nice choices: for example if it’s \((\mathbf{CommMon}, \otimes)\) we get commutative rigs, which are awfully nice.

One can also imagine ‘homotopy-coherent’ or ‘\(\infty\)-categorical’ analogues of these two approaches, which might provide a good home for certain ways the sphere spectrum shows up in this business as a substitute for the integers. For example, one could imagine that the ultimate replacement for a commutative ring is an \(E_\infty\) algebra inside a symmetric monoidal \((\infty,1)\)-category.

However, it’s not clear to me that homotopical thinking is the main thing we need to penetrate the mysteries of \(\mathbb{F}_1\). There seem to be some other missing ideas….

Lorscheid’s own approach uses ‘blueprints’. A blueprint \((R,S)\) is a commutative rig \(R\) equipped with a subset \(S \subseteq R\) that’s closed under multiplication, contains \(0\) and \(1\), and generates \(R\) as a rig.
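
To make the definition concrete, here is a tiny computational sketch (mine, not Lorscheid’s) that checks the blueprint conditions in a finite example, taking the ring \(\mathbb{Z}/6\) as the rig \(R\); the function names and the choice of \(S\) are purely illustrative.

```python
from itertools import product

# A tiny rig: Z/6 with its usual addition and multiplication (every commutative
# ring is in particular a commutative rig).
R = set(range(6))
add = lambda a, b: (a + b) % 6
mul = lambda a, b: (a * b) % 6

def is_blueprint(R, S, add, mul):
    """Check the conditions quoted above: S is closed under multiplication,
    contains 0 and 1, and generates R as a rig (since S is multiplicatively
    closed, it is enough to check that its additive closure is all of R)."""
    if not ({0, 1} <= S <= R):
        return False
    if any(mul(a, b) not in S for a, b in product(S, S)):
        return False
    generated = set(S)
    while True:
        bigger = generated | {add(a, b) for a, b in product(generated, generated)}
        if bigger == generated:
            break
        generated = bigger
    return generated == R

print(is_blueprint(R, {0, 1, 5}, add, mul))  # True: 5*5 = 25 = 1 in Z/6
print(is_blueprint(R, {0, 1, 2}, add, mul))  # False: 2*2 = 4 is not in S
```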

I have trouble, just on general aesthetic grounds, believing that blueprints are the final ultimate solution to the quest for a generalization of commutative rings that can handle the “field with one element”. They just don’t seem ‘god-given’ the way commutative monoids or commutative monoid objects are. But they do various nice things.

Maybe someone has answered this already, since it’s a kind of obvious question:

Question. Is there a symmetric monoidal category \(\mathbf{C}\) in which blueprints are the commutative monoid objects?

Maybe something like the category of ‘abelian groups equipped with a set of generators’?

Of course you should want to know what morphisms of blueprints are, because really we should want the category of commutative monoid objects in \(\mathbf{C}\) to be equivalent to the category of blueprints. Luckily Lorscheid’s morphisms of blueprints are the obvious thing: a morphism \(f : (R,S) \to (R',S')\) is a morphism of commutative rigs \(f: R \to R'\) with \(f(S) \subseteq S'\).

Anyway, there’s a lot more to say about \(\mathbb{F}_1\), but Lorscheid’s paper is a great way to get into this subject.

by john (baez@math.ucr.edu) at June 13, 2018 07:37 AM

June 12, 2018

CERN Bulletin

Conference: Brain-ways of Working Together, Friday 15 June at 11 am!

FRIDAY 15 JUNE AT 11 AM

CERN Meyrin, Main Auditorium (500-1-001)

Have you heard of Tapping into Collective Intelligence or Reinventing Organizations?

They are new ways and philosophies of working together.

This conference, led by Jorge Cendales, will discuss the neuro-scientific underpinnings of recent key findings in brain science and their implications on how we think today about effective work cultures for businesses, science and governments. (Conference in English)

Find out more and sign up: https://indico.cern.ch/e/brainways

June 12, 2018 05:06 PM

ZapperZ - Physics and Physicists

Work Begins On FACET II at SLAC
The upgrade to FACET facility at SLAC promises to improve the beam electron beam quality at the accelerator facility. One of the direct benefits of this upgrade is further advancement in the plasma wakefield accelerator technique. This technique has previously shown to be capable of producing very high accelerating gradient and thus, has the potential to produce accelerating structures that can accelerate charged particles to higher energies over shorter distances.
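
To get a feel for why "higher energies over shorter distances" matters, here is a back-of-the-envelope sketch with purely illustrative numbers (roughly the orders of magnitude usually quoted: tens of MV/m for conventional RF cavities versus tens of GV/m demonstrated in plasma wakefields); none of these figures come from the press release itself.

```python
# Length of accelerator needed for a given energy gain, for two gradients.
# The gradients are illustrative orders of magnitude, not FACET-II specifications.

target_energy_gain_GeV = 10.0

conventional_gradient_GeV_per_m = 0.03   # ~30 MV/m, typical RF-cavity scale
plasma_gradient_GeV_per_m = 30.0         # ~30 GV/m, plasma-wakefield scale

for name, grad in [("conventional RF", conventional_gradient_GeV_per_m),
                   ("plasma wakefield", plasma_gradient_GeV_per_m)]:
    length_m = target_energy_gain_GeV / grad
    print(f"{name}: {length_m:.1f} m to gain {target_energy_gain_GeV} GeV")
# conventional RF: ~333 m;  plasma wakefield: ~0.3 m
```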

Now, when you read the press release that I linked above, make sure you are very clear on what it said. The FACET II facility is NOT a facility that operates using this "plasma wakefield" technique. It is a facility that produces an improved electron beam quality, both in energy and emittance, among other things. This electron beam (which is produced via conventional means) will THEN be used in the study of this wakefield accelerator technique.

The project is an upgrade to the Facility for Advanced Accelerator Experimental Tests (FACET), a DOE Office of Science user facility that operated from 2011 to 2016. FACET-II will produce beams of highly energetic electrons like its predecessor, but with even better quality. These beams will primarily be used to develop plasma acceleration techniques, which could lead to next-generation particle colliders that enhance our understanding of nature’s fundamental particles and forces and novel X-ray lasers that provide us with unparalleled views of ultrafast processes in the atomic world around us.

So read the "sequence of events" here carefully and don't get too distracted by thinking that FACET II is a "novel X-ray laser, etc." facility. It isn't. It is a facility, an important one, for developing the machines that will give us the knowledge needed to build all these other capabilities.

Consider this as my public service to you to clarify a press release! :)

Zz.

by ZapperZ (noreply@blogger.com) at June 12, 2018 01:58 PM

Axel Maas - Looking Inside the Standard Model

How to test an idea
As you may have guessed from reading through the blog, our work is centered around a change of paradigm: That there is a very intriguing structure of the Higgs and the W/Z bosons. And that what we observe in the experiments is actually more complicated than what we usually assume. That these particles are not just essentially point-like objects.

This is a very bold claim, as it touches upon very basic things in the standard model of particle physics. And the interpretation of experiments. However, it is at the same time a necessary consequence if one takes the underlying more formal theoretical foundation seriously. The reason that there is not a huge clash is that the standard model is very special. Because of this both pictures give almost the same prediction for experiments. This can also be understood quantitatively. That is what I have written a review about. It can be imagined in this way:

Thus, the actual particle which we observe and call the Higgs is actually a complicated object made from two Higgs particles. However, one of those is so much eclipsed by the other that it looks like just a single one, plus a very tiny correction to it.

So far, this does not seem to be something one needs to worry about.

However, there are many good reasons to believe that the standard model is not the end of particle physics. There are many, many blogs out there which explain the reasons for this much better than I do. However, our research provides hints that what works so nicely in the standard model may work much less well in some extensions of the standard model. That there the composite nature makes huge differences for experiments. This was what came out of our numerical simulations. Of course, these are not perfect. And, after all, we have unfortunately not yet discovered anything beyond the standard model in experiments. So we cannot test our ideas against actual experiments, which would be the best thing to do. And without experimental support such an enormous shift in paradigm seems to be a bit far-fetched, even if our numerical simulations, which are far from perfect, support the idea. Formal ideas supported by numerical simulations are just not as convincing as experimental confirmation.

So, is this hopeless? Do we have to wait for new physics to make its appearance?

Well, not yet. In the figure above, there was 'something'. So, the ideas also make a statement that even within the standard model there should be a difference. The only question is, what is really the value of a 'little bit'? So far, experiments have not shown any deviations from the usual picture. So the 'little bit' indeed needs to be really rather small. But we have a calculational prescription for this 'little bit' in the standard model. So, at the very least, what we can do is make a calculation of this 'little bit' in the standard model. We should then see whether the value of the 'little bit' may already be so large that the basic idea is ruled out because we are in conflict with experiment. If this is the case, this would raise a lot of questions about the basic theory, but well, experiment rules. And thus, we would need to go back to the drawing board and get a better understanding of the theory.

Or, we get something which is in agreement with current experiment because it is smaller than the current experimental precision. But then we can make a statement about how much better the experimental precision needs to become to see the difference. Hopefully the answer will not be so demanding that it will not be possible within the next couple of decades. But this we will see at the end of the calculation. And then we can decide whether we will get an experimental test.

Doing the calculations is actually not so simple. On the one hand, they are technically challenging, even though our method for them is rather well under control. On the other hand, the method will not yield perfect results, but hopefully good enough ones. Also, how simple the calculations are depends strongly on the type of experiment. We did a first few steps, though for a type of experiment that is not (yet) available but hopefully will be in about twenty years. There we saw that not only the type of experiment but also the type of measurement matters. For some measurements the effect will be much smaller than for others. But we are not yet able to predict this before doing the calculation. For that, we still need a much better understanding of the underlying mathematics, which we will hopefully gain by doing more of these calculations. This is a project I am currently pursuing with a number of master students for various measurements and at various levels. Hopefully, in the end we get a clear set of predictions. And then we can ask our colleagues at the experiments to please check these predictions. So, stay tuned.

By the way: this is the standard cycle for testing new ideas and theories. Have an idea. Check that it fits with all existing experiments. And yes, these may be very, very many. If your idea passes this test: great! There is actually a chance that it can be right. If not, you have to understand why it does not fit. If it can be fixed, fix it, and start again. Or have a new idea. And, at any rate, if it cannot be fixed, have a new idea. When you have an idea which works with everything we know, use it to make a prediction where you get a difference from our current theories. By this you provide an experimental test, which can decide whether your idea is the better one. If yes: great! You have just rewritten our understanding of nature. If not: well, go back to fix it, or have a new idea. Of course, it is best if we already have an experiment which does not fit with our current theories. But at this stage we are a little short of those. That may change again. And if your theory has no predictions which can be tested experimentally in any foreseeable future, well, how to deal with that is a good question, and there is not yet a consensus on how to proceed.

by Axel Maas (noreply@blogger.com) at June 12, 2018 10:49 AM

CERN Bulletin

Encounter with Catherine Laverrière

After a 35-year career at CERN, Catherine Laverrière will retire in June.

When you meet Catherine for the first time, you are greeted by a smiling, calm and caring person. Once you work with her, you will get to discover the energy that feeds her great working capacity and witness the accuracy of her words and assessments.

Catherine arrived at CERN as an administrative assistant in the SB division (Services and Buildings) in April 1983 and typed her first calls for tenders for the construction of the LEP with a typewriter. In November 1989, while on maternity leave to welcome her firstborn, she could not participate in the inauguration day of the LEP, the culmination of a long and exciting collaboration.

As after any major project, CERN went through a restructuring and Catherine continued her journey, remaining in the “technical” field and working, among others, for the MT-SM group of Cristoforo Benvenuti, then for Paul Faugeras and, from 1996 to 2004 for the LHC-CRI group (cryostats). The LHC project was launched shortly after and Catherine, guided by her sense of rigour, strove again to write clear and precise technical documents. Her organisational skills and meticulousness led her to get involved in the area of quality assurance, which started to take shape at CERN during the construction phases of the LHC, under the impetus of P. Faugeras and his team. From calls for tenders in “Engineering Change Request” (ECR), through the development of the “Engineering and equipment Database Management Service” (EDMS), and the “Manufacturing and Test Folder” (MTF) databases, Catherine was involved in the development of a data archiving and exchange structure that serves to this day as the basis of the gigantic technical composition of the LHC. At the end of 2007, she joined the Safety Commission and embarked upon a new adventure under the direction of Elena Manola, as a part of the team in charge of developing CERN Safety Files and publishing the CERN Safety Rules. It is within the HSE unit that she ends her career today, after a short secondment from 2013 to 2015 for the restructuring and reorganisation of the CERN Fire Brigade.

Staff Association: How do you feel after 35 years at CERN?

Catherine Laverrière: It has been a great opportunity, a chance to meet people equipped with both great knowledge and great humility.

SA: In your opinion, what is the main reason for CERN’s success? 

C.L.: The goodwill and excellence of everyone involved. Next to each great scientist, including Nobel Prize winners, there was an engineer and a technician who were also brilliant in their field and without whom success and innovation would not have been possible.

Since 2007, Catherine has also been a staff delegate. At the Staff Association, she quickly integrated the “CAPA”, the Individual Cases Commission, where her listening skills and benevolence were highly appreciated. Her ability to write articles also brought her to the In-Form-Action Commission, which she chaired for a while. In 2007, she also joined the Executive Committee. Finally, Catherine was elected Vice-President of the Staff Association for the term 2016-2017.

SA: Over the years, what changes have you noticed in career development at CERN?

C.L.: The introduction of the recognition of merit has cultivated individualism by putting people in competition. I fear that values that have made CERN successful, such as team spirit and teamwork, suffer as a result.

SA: Would you have any advice for the future?

C.L.: No, rather a wish: that we recognise and reward the technical expertise of our personnel. In my opinion, the specificity and exceptional expertise of many of our technicians and technical engineers are not, or are no longer, properly understood and recognised. To deprive ourselves of this expertise and the associated knowledge would be a serious mistake for the Organization! I am talking about, among other things, the shortening of the career path D, which in a way denies the existence of these exceptional levels of competence.

Catherine, life goes on, and now you can devote yourself to your family and enjoy your well-deserved retirement, which will surely be as fulfilling as your professional life was. We imagine you in Africa advocating for a charity dear to your heart or in pays de Gex, surrounded by your family and friends.

We will miss you, Catherine.

June 12, 2018 08:06 AM

June 11, 2018

CERN Bulletin

New offer for our members

Evolution 2, your specialist for Outdoor Adventures

Be it for a ski lesson, a parachute jump or for a mountain bike descent, come live an unforgettable experience with our outdoor specialists.

Benefit from a 10 % discount on all activities:

  • Offer is open to SA members and their family members living in the same household, upon presentation of the membership card.
  • Offer available for all bookings made between 1 June 2018 and 30 May 2019.
  • Offer available at all Evolution 2 sites.

A wide range of summer and winter activities.

More information on http://evolution2.com/

Contact and reservation :
+33 (0)4.50.02.63.35
management@evolution2.com

June 11, 2018 02:06 PM

CERN Bulletin

In FAIRness

At the beginning of May, all staff members received an email informing them about their performance qualification. A few days later, a colleague, Pi1, said, disheartened: “That’s it, my boss has landed me with a ‘fair’. I’ve had it now!” Yet he felt he had done a good job throughout the year. Certainly, he strives to find a good balance between work and private life, as one could expect from someone with three children, while others clearly prioritise work… but, after all, isn’t that a personal choice?

Certainly, during the evaluation interview, he had had the opportunity to speak with his supervisor, but on that occasion, no more than during the year, the latter had not stated that he wanted more of this or less of that. It was not even him who made the announcement about the qualification: it was an email, with no clear information on the progress that is perhaps expected! Then again, Pi does not see the point in being told the same as Martina1 from the floor above: “It’s not the end of the world; quite the opposite: you have had the chance to talk about your performance during your appraisal interview.” So, he decides to go meet his delegate at the Staff Association and then his HRA.

At the Association, Pi has a good discussion with his delegate. He leaves the meeting full of ideas. He could ask his supervisor for an interview, perhaps in the presence of his delegate, to finally understand what is happening and what is expected of him in the future. He could document as much of the information as possible during this interview and send the most important parts to his supervisor for confirmation. And, above all, he could look through the emails from his hierarchy in which the appraisal of his work is indicated and make sure he understands them well. In any case, in the future, he should never hesitate to summarise in an email any interview with his hierarchy and ask if his summary accurately reflects what has been said. He might want to get closer to his colleagues, see what they say, and what advice they may have, also for issues related to the relationship with the supervisor and his or her expectations.

Now, Pi feels reassured because if it were really necessary, if his hierarchy really had not played its part in the past or would not do so in the future, there is the possibility to contest such a decision. If the expectations of the hierarchy would really imply an imbalance between work and private life and/or too much stress, there is the possibility of discussing it with his supervisor’s supervisor, perhaps accompanied by his delegate.

Finally, Pi has a clearer understanding of what he can expect from his HRA, and he now knows that the HRA has a role to play, especially if the interview with his supervisor is not entirely satisfying. But, above all else, he knows that he can and will be able to count on his delegate and, if necessary, the Association’s Individual Cases Commission.

 

1 Names have been changed

June 11, 2018 02:06 PM

June 10, 2018

Tommaso Dorigo - Scientificblogging

Modeling Issues Or New Physics ? Surprises From Top Quark Kinematics Study
Simulation, noun:
1. Imitation or enactment
2. The act or process of pretending; feigning.
3. An assumption or imitation of a particular appearance or form; counterfeit; sham.

Well, high-energy physics is all about simulations. 

We have a theoretical model that predicts the outcome of the very energetic particle collisions we create in the core of our giant detectors, but we only have approximate descriptions of the inputs to the theoretical model, so we need simulations. 


by Tommaso Dorigo at June 10, 2018 11:18 AM

June 09, 2018

The n-Category Cafe

Sets of Sets of Sets of Sets of Sets of Sets

The covariant power set functor \(P : Set \to Set\) can be made into a monad whose multiplication \(m_X: P(P(X)) \to P(X)\) turns a subset of the set of subsets of \(X\) into a subset of \(X\) by taking their union. Algebras of this monad are complete semilattices.
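
To make the definitions concrete, here is a small computational sketch (mine, not from the post or the paper) of the unit and multiplication of the power set monad on finite sets, with elements of \(P(P(X))\) encoded as frozensets of frozensets; the helper names are purely illustrative.

```python
from itertools import chain, combinations

def powerset(xs):
    """All subsets of the finite set xs, each returned as a frozenset."""
    xs = list(xs)
    return frozenset(frozenset(c) for r in range(len(xs) + 1)
                     for c in combinations(xs, r))

def unit(x):
    """Unit of the monad: x |-> {x}."""
    return frozenset({x})

def mu(F):
    """Multiplication: a set of subsets F in P(P(X)) |-> its union in P(X)."""
    return frozenset(chain.from_iterable(F))

# An element of P(P(X)) for X = {1, 2, 3}:
F = frozenset({frozenset({1, 2}), frozenset({2, 3}), frozenset()})
print(mu(F))                                   # frozenset({1, 2, 3})

# Unit laws, checked on one A in P(X):
A = frozenset({1, 3})
print(mu(unit(A)) == A)                        # mu({A}) = A
print(mu(frozenset(unit(a) for a in A)) == A)  # mu({{a} : a in A}) = A

# P(P(1)) has exactly four elements -- the four candidate units discussed
# further down in this post.
print(len(powerset(powerset({0}))))            # 4
```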

But what about powers of the power set functor? Yesterday Jules Hedges pointed out this paper:

The authors prove that \(P^n\) cannot be made into a monad for \(n \ge 2\).

I’ve mainly looked at their proof for the case \(n = 2\). I haven’t completely worked through it, but it focuses on the unit of any purported monad structure for \(P^2\), rather than its multiplication. Using a cute Yoneda trick they show there are only four possible units, corresponding to the four elements of \(P(P(1))\). Then they show these can’t work. The argument involves sets like this:

As far as I’ve seen, they don’t address the following question:

Question. Does there exist an associative multiplication \(m: P^2 P^2 \Rightarrow P^2\)? In other words, is there a natural transformation \(m: P^2 P^2 \Rightarrow P^2\) such that

\[ P^2 P^2 P^2 \stackrel{m P^2}{\Rightarrow} P^2 P^2 \stackrel{m}{\Rightarrow} P^2 \]

equals

\[ P^2 P^2 P^2 \stackrel{P^2 m}{\Rightarrow} P^2 P^2 \stackrel{m}{\Rightarrow} P^2 . \]

I’m not very good at these things, so this question might be very easy to answer. But if the answer were “obviously no” then you’d think Klin and Salamanca might have mentioned that. They do prove there is no distributive law \(P P \Rightarrow P P\). But they also give examples of monads \(T\) for which there’s no distributive law \(T T \Rightarrow T T\), yet there’s still a way to make \(T^2\) into a monad.

As far as I can tell, my question is fairly useless: does anyone consider “semigroupads”, namely monads without unit? Nonetheless I’m curious.

If there were a positive answer, we’d have a natural way to take a set of sets of sets of sets and turn it into a set of sets in such a way that the two most obvious resulting ways to turn a set of sets of sets of sets of sets of sets into a set of sets agree!

by john (baez@math.ucr.edu) at June 09, 2018 09:44 PM

Jester - Resonaances

Dark Matter goes sub-GeV
It must have been great to be a particle physicist in the 1990s. Everything was simple and clear then. They knew that, at the most fundamental level, nature was described by one of the five superstring theories which, at low energies, reduced to the Minimal Supersymmetric Standard Model. Dark matter also had a firm place in this narrative, being identified with the lightest neutralino of the MSSM. This simple-minded picture strongly influenced the experimental program of dark matter detection, which was almost entirely focused on the so-called WIMPs in the 1 GeV - 1 TeV mass range. Most of the detectors, including the current leaders XENON and LUX, are blind to sub-GeV dark matter, as slow and light incoming particles are unable to transfer a detectable amount of energy to the target nuclei.

Sometimes progress consists in realizing that you know nothing Jon Snow. The lack of new physics at the LHC invalidates most of the historical motivations for WIMPs. Theoretically, the mass of the dark matter particle could be anywhere between 10^-30 GeV and 10^19 GeV. There are myriads of models positioned anywhere in that range, and it's hard to argue with a straight face that any particular one is favored. We now know that we don't know what dark matter is, and that we should better search in many places. If anything, the small-scale problem of the 𝞚CDM cosmological model can be interpreted as a hint against the boring WIMPS and in favor of light dark matter. For example, if it turns out that dark matter has significant (nuclear size) self-interactions, that can only be realized with sub-GeV particles. 
                       
It takes some time for experiment to catch up with theory, but the process is already well in motion. There is some fascinating progress on the front of ultra-light axion dark matter, which deserves a separate post. Here I want to highlight the ongoing  developments in direct detection of dark matter particles with masses between MeV and GeV. Until recently, the only available constraint in that regime was obtained by recasting data from the XENON10 experiment - the grandfather of the currently operating XENON1T.  In XENON detectors there are two ingredients of the signal generated when a target nucleus is struck:  ionization electrons and scintillation photons. WIMP searches require both to discriminate signal from background. But MeV dark matter interacting with electrons could eject electrons from xenon atoms without producing scintillation. In the standard analysis, such events would be discarded as background. However,  this paper showed that, recycling the available XENON10 data on ionization-only events, one can exclude dark matter in the 100 MeV ballpark with the cross section for scattering on electrons larger than ~0.01 picobarn (10^-38 cm^2). This already has non-trivial consequences for concrete models; for example, a part of the parameter space of milli-charged dark matter is currently best constrained by XENON10.   
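
As a quick unit sanity check of the numbers quoted above (my own arithmetic, not from the paper): one barn is 10^-24 cm^2 by definition, so 0.01 pb is indeed 10^-38 cm^2.

```python
barn_in_cm2 = 1e-24                  # definition of the barn
picobarn_in_cm2 = 1e-12 * barn_in_cm2

sigma_pb = 0.01                      # rough XENON10-recast threshold quoted above
print(sigma_pb * picobarn_in_cm2)    # -> 1e-38 cm^2
```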

It is remarkable that so much useful information can be extracted by basically misusing data collected for another purpose (earlier this year the DarkSide-50 recast their own data in the same manner, excluding another chunk of the parameter space). Nevertheless, dedicated experiments will soon be taking over. Recently, two collaborations published first results from their prototype detectors: one is SENSEI, which uses 0.1 gram of silicon CCDs, and the other is SuperCDMS, which uses 1 gram of silicon semiconductor. Both are sensitive to eV energy depositions, thanks to which they can extend the search to lower dark matter masses, and set novel limits in the virgin territory between 0.5 and 5 MeV. A compilation of the existing direct detection limits is shown in the plot. As you can see, above 5 MeV the tiny prototypes cannot yet beat the XENON10 recast. But that will certainly change as soon as full-blown detectors are constructed, after which the XENON10 sensitivity should be improved by several orders of magnitude.
     
Should we be restless waiting for these results? Well, for any single experiment the chance of finding nothing is immensely larger than that of finding something. Nevertheless, the technical progress and the widening scope of searches offer some hope that the dark matter puzzle may be solved soon.

by Mad Hatter (noreply@blogger.com) at June 09, 2018 05:39 PM

June 08, 2018

Jester - Resonaances

Massive Gravity, or You Only Live Twice
Proving Einstein wrong is the ultimate ambition of every crackpot and physicist alike. In particular, Einstein's theory of gravitation -  the general relativity -  has been a victim of constant harassment. That is to say, it is trivial to modify gravity at large energies (short distances), for example by embedding it in string theory, but it is notoriously difficult to change its long distance behavior. At the same time, motivations to keep trying go beyond intellectual gymnastics. For example, the accelerated expansion of the universe may be a manifestation of modified gravity (rather than of a small cosmological constant).   

In Einstein's general relativity, gravitational interactions are mediated by a massless spin-2 particle - the so-called graviton. This is what gives it its hallmark properties: the long range and the universality. One obvious way to screw with Einstein is to add mass to the graviton, as entertained already in 1939 by Fierz and Pauli. The Particle Data Group quotes the constraint m ≤ 6*10^−32 eV, so we are talking about the De Broglie wavelength comparable to the size of the observable universe. Yet even that teeny mass may cause massive troubles. In 1970 the Fierz-Pauli theory was killed by the van Dam-Veltman-Zakharov (vDVZ) discontinuity. The problem stems from the fact that a massive spin-2 particle has 5 polarization states (0,±1,±2) unlike a massless one which has only two (±2). It turns out that the polarization-0 state couples to matter with the similar strength as the usual polarization ±2 modes, even in the limit where the mass goes to zero, and thus mediates an additional force which differs from the usual gravity. One finds that, in massive gravity, light bending would be 25% smaller, in conflict with the very precise observations of stars' deflection around the Sun. vDV concluded that "the graviton has rigorously zero mass". Dead for the first time...           

The second coming was heralded soon after by Vainshtein, who noticed that the troublesome polarization-0 mode can be shut off in the proximity of stars and planets. This can happen in the presence of graviton self-interactions of a certain type. Technically, what happens is that the polarization-0 mode develops a background value around massive sources which, through the derivative self-interactions, renormalizes its kinetic term and effectively diminishes its interaction strength with matter. See here for a nice review and more technical details. Thanks to the Vainshtein mechanism, the usual predictions of general relativity are recovered around large massive sources, which is exactly where we can best measure gravitational effects. The possible self-interactions leading to a healthy theory without ghosts have been classified, and go under the name of the dRGT massive gravity.

There is however one inevitable consequence of the Vainshtein mechanism. The graviton self-interaction strength grows with energy, and at some point becomes inconsistent with the unitarity limits that every quantum theory should obey. This means that massive gravity is necessarily an effective theory with a limited validity range and has to be replaced by a more fundamental theory at some cutoff scale 𝞚. This is of course nothing new for gravity: the usual Einstein gravity is also an effective theory valid at most up to the Planck scale MPl~10^19 GeV. But for massive gravity the cutoff depends on the graviton mass and is much smaller for realistic theories. At best, 𝞚max ≈ (m^2 MPl)^1/3, which for the largest allowed graviton mass corresponds to an energy of roughly 10^-12 eV, i.e. to distances of a few hundred kilometers.
So the massive gravity theory in its usual form cannot be used at distance scales shorter than ~300 km. For particle physicists that would be a disaster, but for cosmologists this is fine, as one can still predict the behavior of galaxies, stars, and planets. While the theory certainly cannot be used to describe the results of table top experiments, it is relevant for the movement of celestial bodies in the Solar System. Indeed, lunar laser ranging experiments or precision studies of Jupiter's orbit are interesting probes of the graviton mass.
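
Here is a rough numerical check of that "~300 km" figure, assuming the standard dRGT cutoff 𝞚max ≈ (m^2 MPl)^1/3 with the reduced Planck mass and a graviton mass at the experimental limit; all numbers are order-of-magnitude only.

```python
# Rough check of the massive-gravity cutoff scale and the corresponding distance,
# assuming Lambda_max = (m^2 * M_Pl)^(1/3) with the reduced Planck mass.
hbar_c_eV_m = 1.973e-7       # hbar*c in eV*m
M_Pl_eV = 2.4e27             # reduced Planck mass, ~2.4*10^18 GeV
m_graviton_eV = 1e-32        # graviton mass near the experimental upper limit

Lambda_max_eV = (m_graviton_eV**2 * M_Pl_eV) ** (1.0 / 3.0)
print(Lambda_max_eV, hbar_c_eV_m / Lambda_max_eV / 1e3)
# ~6*10^-13 eV, i.e. a few hundred km

# With the dispersion-relation bound g* <~ 10^-10, the true cutoff is lower still:
g_star = 1e-10
Lambda_eV = g_star ** (1.0 / 3.0) * Lambda_max_eV
print(hbar_c_eV_m / Lambda_eV / 1e3)   # ~10^6 km, the "1 million km" quoted below
```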

Now comes the latest twist in the story. Some time ago this paper showed that not everything is allowed in effective theories. Assuming the full theory is unitary, causal and local implies non-trivial constraints on the possible interactions in the low-energy effective theory. These techniques are suitable to constrain, via dispersion relations, derivative interactions of the kind required by the Vainshtein mechanism. Applying them to the dRGT gravity one finds that it is inconsistent to assume the theory is valid all the way up to 𝞚max. Instead, it must be replaced by a more fundamental theory already at a much lower cutoff scale, parameterized as 𝞚 = g*^1/3 𝞚max (the parameter g* is interpreted as the coupling strength of the more fundamental theory). The allowed parameter space in the g*-m plane is shown in this plot:

Massive gravity must live in the lower left corner, outside the gray area  excluded theoretically  and where the graviton mass satisfies the experimental upper limit m~10^−32 eV. This implies g* ≼ 10^-10, and thus the validity range of the theory is some 3 order of magnitude lower than 𝞚max. In other words, massive gravity is not a consistent effective theory at distance scales below ~1 million km, and thus cannot be used to describe the motion of falling apples, GPS satellites or even the Moon. In this sense, it's not much of a competition to, say, Newton. Dead for the second time.   

Is this the end of the story? For the third coming we would need a more general theory with additional light particles beyond the massive graviton, which is consistent theoretically in a larger energy range, realizes the Vainshtein mechanism, and is in agreement with the current experimental observations. This is hard but not impossible to imagine. Whatever the outcome, what I like in this story is the role of theory in driving the progress, which is rarely seen these days. In the process, we have understood a lot of interesting physics whose relevance goes well beyond one specific theory. So the trip was certainly worth it, even if we find ourselves back at the departure point.

by Mad Hatter (noreply@blogger.com) at June 08, 2018 08:35 AM

June 07, 2018

Jester - Resonaances

Can MiniBooNE be right?
The experimental situation in neutrino physics is confusing. On one hand, a host of neutrino experiments has established a consistent picture where the neutrino mass eigenstates are mixtures of the 3 Standard Model neutrino flavors νe, νμ, ντ. The measured mass differences between the eigenstates are Δm12^2 ≈ 7.5*10^-5 eV^2 and Δm13^2 ≈ 2.5*10^-3 eV^2, suggesting that all Standard Model neutrinos have masses below 0.1 eV. That is well in line with cosmological observations which find that the radiation budget of the early universe is consistent with the existence of exactly 3 neutrinos with the sum of the masses less than 0.2 eV. On the other hand, several rogue experiments refuse to conform to the standard 3-flavor picture. The most severe anomaly is the appearance of electron neutrinos in a muon neutrino beam observed by the LSND and MiniBooNE experiments.


This story begins in the previous century with the LSND experiment in Los Alamos, which claimed to observe νμ → νe antineutrino oscillations with 3.8σ significance. This result was considered controversial from the very beginning due to limitations of the experimental set-up. Moreover, it was inconsistent with the standard 3-flavor picture which, given the masses and mixing angles measured by other experiments, predicted that νμ → νe oscillations should be unobservable in short-baseline (L ≼ km) experiments. The MiniBooNE experiment in Fermilab was conceived to conclusively prove or disprove the LSND anomaly. To this end, a beam of mostly muon neutrinos or antineutrinos with energies E~1 GeV is sent to a detector at a distance L~500 meters away. In general, neutrinos can change their flavor with the probability oscillating as P ~ sin^2(Δm^2 L/4E). If the LSND excess is really due to neutrino oscillations, one expects to observe electron neutrino appearance in the MiniBooNE detector given that L/E is similar in the two experiments. Originally, MiniBooNE was hoping to see a smoking gun in the form of an electron neutrino excess oscillating as a function of L/E, that is peaking at intermediate energies and then decreasing towards lower energies (possibly with several wiggles). That didn't happen. Instead, MiniBooNE finds an excess increasing towards low energies with a similar shape to the backgrounds. Thus the confusion lingers on: the LSND anomaly has neither been killed nor robustly confirmed.
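
To see the "similar L/E" point numerically, here is a rough sketch using the standard two-flavor oscillation formula P = sin^2(2θ) sin^2(1.27 Δm^2[eV^2] L[km]/E[GeV]); the LSND baseline and energy below are approximate textbook values, and Δm^2 ~ 0.5 eV^2 with sin^2(2θ) ~ 0.01 are just the ballpark numbers quoted later in this post.

```python
import math

def appearance_probability(sin2_2theta, dm2_eV2, L_km, E_GeV):
    """Two-flavor short-baseline oscillation probability
    P = sin^2(2*theta) * sin^2(1.27 * dm^2 * L / E)."""
    return sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

dm2 = 0.5            # eV^2, ballpark of the sterile-neutrino fits quoted below
sin2_2theta = 0.01   # ditto

# MiniBooNE: L ~ 0.5 km, E ~ 1 GeV;  LSND: L ~ 0.03 km, E ~ 0.04 GeV (approximate)
for name, L, E in [("MiniBooNE", 0.5, 1.0), ("LSND", 0.03, 0.04)]:
    print(name, "L/E =", L / E, "km/GeV,  P =",
          appearance_probability(sin2_2theta, dm2, L, E))
```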

In spite of these doubts, the LSND and MiniBooNE anomalies continue to arouse interest. This is understandable: as the results do not fit the 3-flavor framework, if confirmed they would prove the existence of new physics beyond the Standard Model. The simplest fix would be to introduce a sterile neutrino νs with the mass in the eV ballpark, in which case MiniBooNE would be observing the νμ → νs → νe oscillation chain. With the recent MiniBooNE update the evidence for the electron neutrino appearance increased to 4.8σ, which has stirred some commotion on Twitter and in the blogosphere. However, I find the excitement a bit misplaced. The anomaly is not really new: similar results showing a 3.8σ excess of νe-like events were already published in 2012. The increase of the significance is hardly relevant: at this point we know anyway that the excess is not a statistical fluke, while a systematic effect due to underestimated backgrounds would also lead to a growing anomaly. If anything, there are now fewer reasons than in 2012 to believe in the sterile neutrino origin of the MiniBooNE anomaly, as I will argue in the following.

What has changed since 2012? First, there are new constraints on νe appearance from the OPERA experiment (yes, this OPERA) who did not see any excess νe in the CERN-to-Gran-Sasso νμ beam. This excludes a large chunk of the relevant parameter space corresponding to large mixing angles between the active and sterile neutrinos. From this point of view, the MiniBooNE update actually adds more stress on the sterile neutrino interpretation by slightly shifting the preferred region towards larger mixing angles...  Nevertheless, a not-too-horrible fit to all appearance experiments can still be achieved in the region with Δm^2~0.5 eV^2 and the mixing angle sin^2(2θ) of order 0.01.     

Next, the cosmological constraints have become more stringent. The CMB observations by the Planck satellite do not leave room for an additional neutrino species in the early universe. But for the parameters preferred by LSND and MiniBooNE, the sterile neutrino would be abundantly produced in the hot primordial plasma, thus violating the Planck constraints. To avoid it, theorists need to deploy a battery of  tricks (for example, large sterile-neutrino self-interactions), which makes realistic models rather baroque.

But the killer punch is delivered by disappearance analyses. Benjamin Franklin famously said that only two things in this world were certain: death and probability conservation. Thus whenever an electron neutrino appears in a νμ beam, a muon neutrino must disappear. However, the latter process is severely constrained by long-baseline neutrino experiments, and recently the limits have been further strengthened thanks to the MINOS and IceCube collaborations. A recent combination of the existing disappearance results is available in this paper. In the 3+1 flavor scheme, the probability of a muon neutrino transforming into an electron one in a short-baseline experiment is approximately P(νμ → νe) ≈ 4|Ue4|^2|Uμ4|^2 sin^2(Δm41^2 L/4E),
where U is the 4x4 neutrino mixing matrix. The Uμ4 matrix element also controls the νμ survival probability, P(νμ → νμ) ≈ 1 - 4|Uμ4|^2(1 - |Uμ4|^2) sin^2(Δm41^2 L/4E).
The νμ disappearance data from MINOS and IceCube imply |Uμ4|≼0.1, while |Ue4|≼0.25 from solar neutrino observations. All in all, the disappearance results imply that the effective mixing angle sin^2(2θ) controlling the νμ → νs → νe oscillation must be much smaller than the 0.01 required to fit the MiniBooNE anomaly. The disagreement between the appearance and disappearance data had already existed before, but was actually made worse by the MiniBooNE update.
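
A one-line version of that disappearance argument (my own arithmetic, using the standard 3+1 relation sin^2(2θ_μe) = 4|Ue4|^2|Uμ4|^2 that appears in the formula above):

```python
U_mu4_max = 0.1    # from MINOS/IceCube nu_mu disappearance, as quoted above
U_e4_max = 0.25    # from solar neutrino data, as quoted above

sin2_2theta_max = 4 * U_e4_max**2 * U_mu4_max**2
print(sin2_2theta_max)   # 0.0025 -- well below the ~0.01 needed by MiniBooNE
```
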
So the hypothesis of a 4th sterile neutrino does not stand scrutiny as an explanation of the MiniBooNE anomaly. It does not mean that there is no other possible explanation (more sterile neutrinos? non-standard interactions? neutrino decays?). However, any realistic model will have to delve deep into the crazy side in order to satisfy the constraints from other neutrino experiments, flavor physics, and cosmology. Fortunately, the current confusing situation should not last forever. The MiniBooNE photon background from π0 decays may be clarified by the ongoing MicroBooNE experiment. On the timescale of a few years the controversy should be closed by the SBN program in Fermilab, which will add one near and one far detector to the MicroBooNE beamline. Until then... years of painful experience have taught us to assign a high prior to the Standard Model hypothesis. Currently, by far the most plausible explanation of the existing data is an experimental error on the part of the MiniBooNE collaboration.

by Mad Hatter (noreply@blogger.com) at June 07, 2018 01:20 PM

June 06, 2018

John Baez - Azimuth

Applied Category Theory Course: Databases

 

In my online course we’re now into the third chapter of Fong and Spivak’s book Seven Sketches. Now we’re talking about databases!

To some extent this is just an excuse to (finally) introduce categories, functors, natural transformations, adjoint functors and Kan extensions. Great stuff, and databases are a great source of easy examples.

But it’s also true that Spivak helps run a company called Categorical Informatics that actually helps design databases using category theory! And his partner, Ryan Wisnesky, would be happy to talk to people about it. If you’re interested, click the link: he’s attending my course.

To read and join discussions on Chapter 3 go here:

Chapter 3

You can also do exercises and puzzles, and see other people’s answers to these.

Here are the lectures I’ve given so far:

Lecture 34 – Chapter 3: Categories
Lecture 35 – Chapter 3: Categories versus Preorders
Lecture 36 – Chapter 3: Categories from Graphs
Lecture 37 – Chapter 3: Presentations of Categories
Lecture 38 – Chapter 3: Functors
Lecture 39 – Chapter 3: Databases
Lecture 40 – Chapter 3: Relations
Lecture 41 – Chapter 3: Composing Functors
Lecture 42 – Chapter 3: Transforming Databases
Lecture 43 – Chapter 3: Natural Transformations
Lecture 44 – Chapter 3: Categories, Functors and Natural Transformations
Lecture 45 – Chapter 3: Composing Natural Transformations
Lecture 46 – Chapter 3: Isomorphisms
Lecture 47 – Chapter 3: Adjoint Functors
Lecture 48 – Chapter 3: Adjoint Functors
Lecture 49 – Chapter 3: Kan Extensions
Lecture 50 – Chapter 3: Kan Extensions

by John Baez at June 06, 2018 09:15 PM

Lubos Motl - string vacua and pheno

Monte Carlo fails at leptonic top pair production
The LHC collaborations have produced hundreds of papers that mostly say "everything agrees with the Standard Model, nothing new to be seen here".

Well, a new CMS preprint
Measurements of differential cross sections for \(t \bar t\) production in proton-proton collisions at \(\sqrt{s} = 13\,{\rm TeV}\) using events containing two leptons
says something completely different.




Go to the page 58/64 and you will see something like:
Measurements were made of the cross section for the production of the top quark and its antiparticle in proton-proton collisions. Some overall cross sections could agree with the theory. However, the next-to-leading-order Monte Carlo simulations, in any combination, suck at predicting the jet multiplicity. They suck at predicting the transverse momenta of the leptons and quarks, the invariant masses, and all other kinematic observables you could think of. Higher-order models seem to suck, too.
Nice.




What should you think about it? I am not sure. This is a surprising disagreement, I think, because the top quarks should be relatively easy. They're very heavy so they should be created like new elementary particles that don't care much about the weak glue that attaches them to other quarks and gluons. The theoretical simulations shouldn't be this bad.

On the other hand, the very fact that the Monte Carlo simulators have to be used indicates that the QCD phenomena must matter. This is not an experiment whose results could be easily predicted just by considering the elementary particles such as top quarks – the gluons and their interactions matter. So I would say that this is a failure of the theory in a regime where the simple Feynman diagrams describing the heavy top quarks and the messy calculations for which Monte Carlo methods are useful have to be combined.

When the combination is needed, theoretical predictions seem to be off.



MG5 aMC@NLO, PYTHIA8, MADSPIN and other calculations have to be made in Monte Carlo because the casino produces the random numbers that are needed in the calculations. Donald Trump is planning the American answer to the Monte Carlo simulations in several casinos he owns. The first generator will be MLVGA1 (Make Las Vegas Great Again One).

Of course, this could be a hint of new physics. Perhaps the top quark is composite. Or something else. But this is a complicated enough experiment – lots of processes are combined to produce the final results – that it may be very hard to reverse engineer the discrepancy. In other words, the discrepancy is no straightforward guide towards the right way to fix our theory if our theory is incomplete at all.

by Luboš Motl (noreply@blogger.com) at June 06, 2018 03:11 PM

June 05, 2018

Lubos Motl - string vacua and pheno

Dijkgraaf, parameters, and omnipresent Šmoitian trolls
MathPix is a rather amazing app for your phone. Write e.g.\[

\int_{-\infty}^{\infty} e^{-x^2} dx

\] with your hand, take a photo of this expression while it's in the rectangle, and the app will convert it to perfect \(\rm\LaTeX\) and calculate that it's \(\sqrt{\pi}\), with graphs and analyses by WolframAlpha. It's like PhotoMath for adult mathematicians. Hat tip: mmanuF@Twitter
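
If you want to double-check the app's answer numerically, here is a two-line sanity check (mine, not part of MathPix), using SciPy's quad routine; any integrator would do.

```python
import math
from scipy.integrate import quad

value, error = quad(lambda x: math.exp(-x * x), -math.inf, math.inf)
print(value, math.sqrt(math.pi))   # both ~1.7724538509
```
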
Robbert Dijkgraaf is the director of the Institute for Advanced Study in Princeton, New Jersey – he is the boss of Edward Witten and he would have been a boss of Albert Einstein if Einstein had avoided death in the 1950s. He's also a late co-father of matrix string theory (and once a co-author of mine).

In the Netherlands, he's a rather well-known scientific talking head. So if you search for "Robbert Dijkgraaf" on YouTube, you will get videos from Dutch TV where Dijkgraaf talks about Einstein for an hour and stuff like that.

Well, Dijkgraaf just posted the newest text at the Quanta Magazine:
There Are No Laws of Physics. There’s Only the Landscape.
He says that there are dualities in string theory which physically identify two or several seemingly different descriptions, but there's still a huge landscape of possible effective laws that are left. The article doesn't say much more – except that it's possible for a Chinese cook to prepare a Chinese dish and notice it's identical to an Italian one. ;-)

Well, dualities in the world of cuisines seem rather unlikely.




Now, this could have been a helpful popular article some 20 years ago. But today? What kind of reactions would you expect from the readers?

Since 2006, the science sections of the mainstream media have been almost totally conquered by crackpots that are parroting the holy words by Mr W*it, Mr Sm*lin, and similar self-anointed populist opinion makers. The religious anti-physics wave has created thousands of aggressive trolls and tens of thousands of their sockpuppets who spend hours by trying to find places on the Internet where they could write something like: "String theory has many solutions so there must be something wrong with it!" They use a much worse language, however.

Well, sorry, comrades, the equations of physics have the number of solutions they prefer. Unless you can rule out some numbers of solutions, you must treat all the possible numbers – and the theories predicting those numbers – equally.




So it was guaranteed that the article wouldn't be received well. After a day, Dijkgraaf's short essay has attracted two comments, by users named Weylguy and Steve Blazo, and both of them parroting the W*itian garbage. In that way, they reveal that they're hopeless morons but W*it has trained these stupid puppies to feel smart while parroting these idiocies, so these morons do feel smart.

They frame their W*itian dictum as a response to Dijkgraaf's assertion that string theory has no parameters. Well, more precisely,
string theory has no adjustable non-dynamical continuous dimensionless parameters.
This statement says that if there is an apparent number whose value may be a priori adjusted and must be chosen to make predictions using string theory, it must be either
  1. dimensionful such as \(c,\hbar\): no theory may predict their values because these values depend on the choice of units, i.e. social conventions, and social conventions can't really be calculated from a fundamental physical theory based on crisp mathematics. Adult theoretical physicists normally use units with \(1=c=\hbar=\dots\), so these parameters are no longer adjustable.
  2. Or discrete – one has to decide that we live around a particular vacuum that is chosen from a finite set or a countable set. That covers the choice of the vacuum from the often repeated \(10^{500}\) possibilities.
  3. Or dynamical: the parameter is actually the vacuum expectation value of a scalar field with a vanishing potential (so all values are allowed and stable), a modulus. When such moduli (non-stabilized scalar fields) exist, they cause additional "fifth forces" violating the equivalence principle. Because we haven't observed those, it should mean that we live in a vacuum where no non-stabilized moduli exist.
OK, these three points summarize the three main subtleties that beginners may misunderstand when they discuss parameters in quantum field theory or string theory. Some people want \(c\) in SI units to be calculated from the first principles. Or they ask whether \(c\) is changing with time, and so on. Well, in the SI units, \(c\) is by definition constant – and the trend is to redefine the units so that all the other universal constants such as \(\epsilon_0,\hbar,k_B\) are universal constants too.

Obviously, we can't calculate that some people decided to define one meter as 1/40,000,000 of the circumference of the Earth, and how this unit of length got refined.

Some other people misinterpret the discrete parameters. When you only have to pick a discrete label, you're picking "much less information" than if you determine the value of a real, continuous, adjustable parameter. After all, a real number needs infinitely many digits to be specified. A number between 1 and 99 only needs two decimal digits to be specified, and even an element of a set with \(10^{500}\) elements may be fully determined by specifying 500 digits. That number (500) is actually comparable to the number of digits in the parameters of the Standard Model that are in principle measurable with the accuracy that may be obtained in this Universe.
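
To put numbers on "how much information" the discrete choice carries (my own illustrative arithmetic, not Motl's): picking one element out of \(10^{500}\) is 500 decimal digits, i.e. roughly 1660 bits, which is indeed comparable to writing down a couple of dozen real Standard Model parameters to a few dozen digits each.

```python
n_vacua = 10 ** 500
digits = len(str(n_vacua)) - 1       # 500 decimal digits to name one vacuum
bits = n_vacua.bit_length()          # about 1661 bits
print(digits, bits)

# Compare: ~25 continuous Standard Model parameters, each specified to 20
# significant digits (both counts purely illustrative), carry a similar
# amount of information.
print(25 * 20)                       # 500 digits
```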

Steve Blazo wrote:
The comment that string theory has less free parameters then the standard model is kind of ridiculous, each member of the "landscape" is it's own free parameter. So string theory instead of less free parameters has like \(10^{500}\) more.
You know, idiots such as this one have been made self-confident because W*it told them that this is the most important thing they have to know and repeat. It's the delusion written above. So they scream it with the same passion and for the same reason that Islamic terrorists scream "Allahu Akbar". But sorry, comrades, Allah isn't great and each member of the landscape doesn't bring a new parameter.

Even if you counted the choice of the element of the landscape as a parameter of the theory – and you shouldn't, because this choice is actually dynamical – it would be a single parameter. You may label the elements of the landscape by an integer \(N\) that goes from \(1\) to \(10^{500}\), and this single integer would be the one and only parameter of the theory. Saying that the number of values a parameter can take is the same as the number of parameters is isomorphic to the claim that 2+2=22; or that a computer contains 256 animals times the number of bytes in the RAM. Only a total idiot can make such mistakes. The number of dimensions (axes) of a parameter space is not the same thing as the number of points in that space.

The choice of \(N\) produces a much bigger diversity in the qualitative character of the effective laws of physics than e.g. the electric charge \(e\) does in QED. But that's no disadvantage. It's an obvious advantage, and a sort of required one for a candidate theory of everything. String theory derives rather messy effective theories such as the Standard Model out of nothing, so the effective laws it is capable of producing must be rather diverse, so that a generic one may be as messy as the Standard Model.

Incidentally, the omnipresence of the ludicrous number \(10^{500}\) is a proof that all these critics of string theory only parrot this garbage from each other just like the Islamic terrorists parrot "Allahu Akbar". None of them is capable of independent thought.

How many people who constantly repeat the number \(10^{500}\) could say where it originally came from? I think none of them could. Well, I can tell you: it was written in Section IV of a Douglas-Kachru 2006 paper. Douglas and Kachru – who didn't even put this vaguely estimated (later to be insanely popular) number in the abstract – admit that Lerche and pals had a similar estimate already in the 1980s, but Lerche ended up with \(10^{1,500}\). There was nothing truly new about the claim that the number of vacua may be this large – and the exponent is meant to be just "in hundreds", but no one really knows how many hundreds. Three years ago, Taylor and Wang claimed that the exponent is actually 272,000 if we allow a truly prolific class of F-theory flux vacua.

The other commenter, Weylguy, exhibits a similar fallacious reasoning:
Really? String theory has no free parameters? And there's only one string theory? Show me where the theory predicts that the electron has a mass of \(9.11 \times 10^{-31}\) kilograms and then I'll start believing that spacetime is composed of 10 or 11 dimensions, 6 or 7 of which are conveniently curled up so small they could never be observed.
A big point of Dijkgraaf's article above was that the electron mass shouldn't be expected to have a unique value because string theory has a landscape of possibilities. If such a real number can be calculated at all, it must be in more natural units than kilograms, of course; and one must first determine the value of the discrete parameter that labels the elements of the landscape. So why is Weylguy "demanding", as a proof of string theory, something that Dijkgraaf explicitly wrote string theory does not claim?
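
To make the units point concrete, the kilogram figure Weylguy quotes is just the familiar 0.511 MeV/\(c^2\) rewritten in SI units; a quick sanity check (my own snippet, not part of the exchange):

```python
# Converting the electron mass from natural particle-physics units to kilograms.
MEV_TO_JOULE = 1.602176634e-13   # 1 MeV in joules (exact in the redefined SI)
C = 299792458.0                  # speed of light in m/s (exact by definition)

m_electron_kg = 0.511 * MEV_TO_JOULE / C**2
print(m_electron_kg)             # ~9.11e-31 kg, the number quoted by Weylguy
```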

It's also demagogic to say that the extra dimensions are "conveniently" compactified. In reality, they are both conveniently and (what is more important) demonstrably compactified. Our not easily seeing the extra dimensions is an empirical fact, not just a matter of convenience, and scientists who are worth the name simply have to take empirical facts into account. Also, it's demagogic to implicitly suggest that the small size of the extra dimensions is unnatural. Instead, it is totally natural. The expected radius of the extra dimensions may be calculated from dimensional analysis. And if there are no special considerations, the only length scale that may come out of this dimensional analysis is the Planck scale. Some special models – e.g. braneworlds and/or those with large or warped extra dimensions – may produce radii longer than the Planck scale. But they still give some finite estimates from dimensional analysis enhanced by some parametric dependence.

The finite value of the radius of the dimensions is absolutely natural. In fact, the large radius of the visible, macroscopic dimensions is more unnatural – and that unnaturalness is known as the cosmological constant problem.

There's no reason to think that the total number of spacetime dimensions is 4, there is nothing wrong mathematically about the numbers 10 and 11, they're in fact preferred by more detailed calculations, and there's nothing unnatural about the compactification to microscopic radii.

But the main point I wanted to convey is that Dijkgraaf seems to deny this reality of the media altogether. He wrote the text as if no "string wars" had ever taken place. But the string wars did take place, more than a decade ago. Dijkgraaf was among those who preferred his convenience and didn't do anything at all to help me and others defeat the enemy – so the enemy has won the battle for space in the mainstream media. Until you realize your mistake, you can't ever write an article about string theory or state-of-the-art high-energy theoretical physics in similar media that will be well received. It's really painful that a director of the IAS hasn't been capable of noticing this fact, because directors of important scientific institutions are exactly the people who should know something about the atmosphere in various parts of society.

If you ever want articles written for the popular magazines – including the Quanta Magazine – about modern theoretical physics to be meaningful again, you will first have to restart the string wars and win them. I am afraid that the society has sufficiently deteriorated over those 10+ years of your inaction that a physical elimination of the enemy may be needed.

by Luboš Motl (noreply@blogger.com) at June 05, 2018 05:53 PM

June 04, 2018

The n-Category Cafe

Applied Category Theory: Resource Theories

My course on applied category theory is continuing! After a two-week break where the students did exercises, I went back to lecturing about Fong and Spivak’s book Seven Sketches. The second chapter is about ‘resource theories’.

Resource theories help us answer questions like this (a toy sketch of the first question appears right after the list):

  1. Given what I have, is it possible to get what I want?
  2. Given what I have, how much will it cost to get what I want?
  3. Given what I have, how long will it take to get what I want?
  4. Given what I have, what is the set of ways to get what I want?
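
As a toy illustration of the first question, here is a minimal sketch in which resources are multisets, "reactions" convert one multiset into another, and we search the preorder they generate; the resources and reactions are invented for this example and are not taken from Seven Sketches:

```python
# Resource theory toy model: can we reach "what I want" from "what I have"
# by applying the allowed conversions (a breadth-first search over states)?
from collections import Counter

reactions = [
    (Counter({"H2": 2, "O2": 1}), Counter({"H2O": 2})),  # 2 H2 + O2 -> 2 H2O
    (Counter({"H2O": 1}), Counter({"steam": 1})),        # boil one unit of water
]

def reachable(have: Counter, want: Counter, max_steps: int = 10) -> bool:
    """Return True if `want` is obtainable from `have` within max_steps reactions."""
    frontier, seen = [have], set()
    for _ in range(max_steps):
        next_frontier = []
        for state in frontier:
            key = tuple(sorted(state.items()))
            if key in seen:
                continue
            seen.add(key)
            if all(state[r] >= n for r, n in want.items()):
                return True
            for inputs, outputs in reactions:
                if all(state[r] >= n for r, n in inputs.items()):
                    next_frontier.append(state - inputs + outputs)
        frontier = next_frontier
    return False

print(reachable(Counter({"H2": 2, "O2": 1}), Counter({"steam": 2})))  # True
```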

Resource theories in their modern form were arguably born in these papers:

We are lucky to have Tobias in our course, helping the discussions along! He’s already posted some articles on resource theory on the Azimuth blog:

In the course, we had fun bouncing between the relatively abstract world of monoidal preorders and their very concrete real-world applications to chemistry, scheduling, manufacturing and other topics. Here are the lectures:

by john (baez@math.ucr.edu) at June 04, 2018 05:29 PM

June 01, 2018

Jester - Resonaances

WIMPs after XENON1T
After today's update from the XENON1T experiment, the situation on the front of direct detection of WIMP dark matter is as follows

WIMP can be loosely defined as a dark matter particle with mass in the 1 GeV - 10 TeV range and significant interactions with ordinary matter. Historically, WIMP searches have stimulated enormous interest because this type of dark matter can be easily realized in models with low scale supersymmetry. Now that we are older and wiser, many physicists would rather put their money on other realizations, such as axions, MeV dark matter, or primordial black holes. Nevertheless, WIMPs remain a viable possibility that should be further explored.
 
To detect WIMPs heavier than a few GeV, currently the most successful strategy is to use huge detectors filled with xenon atoms, hoping one of them is hit by a passing dark matter particle. XENON1T beats the competition from the LUX and Panda-X experiments because it has a bigger tank. Technologically speaking, we have come a long way in the last 30 years. XENON1T is now sensitive to 40 GeV WIMPs interacting with nucleons with a cross section of 40 yoctobarn (1 yb = 10^-12 pb = 10^-48 cm^2). This is 6 orders of magnitude better than what the first direct detection experiment in the Homestake mine could achieve back in the 80s. Compared to last year, the limit is better by a factor of two at the most sensitive mass point. At high mass the improvement is somewhat smaller than expected, due to a small excess of events observed by XENON1T, which is probably just a 1 sigma upward fluctuation of the background.
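
For readers who want the unit bookkeeping spelled out, here is the conversion from the parenthesis above applied to the 40 yb benchmark (a trivial check of my own, not part of the original post):

```python
# 1 yoctobarn = 1e-12 picobarn = 1e-48 cm^2
YB_TO_PB = 1e-12
YB_TO_CM2 = 1e-48

sigma_yb = 40.0                       # XENON1T sensitivity near 40 GeV, in yb
print(sigma_yb * YB_TO_PB, "pb")      # 4e-11 pb
print(sigma_yb * YB_TO_CM2, "cm^2")   # 4e-47 cm^2
```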

What we are learning about WIMPs is how they can (or cannot) interact with us. Of course, at this point in the game we don't see qualitative progress, but rather incremental quantitative improvements. One possible scenario is that WIMPs experience one of the Standard Model forces, such as the weak or the Higgs force. The former option is strongly constrained by now. If WIMPs interacted in the same way as our neutrinos do, that is by exchanging a Z boson, they would have been found in the Homestake experiment. XENON1T is probing models where the dark matter coupling to the Z boson is suppressed by a factor cχ ~ 10^-3 - 10^-4 compared to that of an active neutrino. On the other hand, dark matter could be participating in weak interactions only by exchanging W bosons, which can happen for example when it is part of an SU(2) triplet. In the plot you can see that XENON1T is approaching but not yet excluding this interesting possibility. As for models using the Higgs force, XENON1T is probing the (subjectively) most natural parameter space where WIMPs couple with order one strength to the Higgs field.

And the arms race continues. The search in XENON1T will go on until the end of this year, although at this point a discovery is extremely unlikely. Further progress is expected on a timescale of a few years thanks to the next generation of xenon detectors, XENONnT and LUX-ZEPLIN, which should achieve yoctobarn sensitivity. DARWIN may be the ultimate experiment along these lines, in the sense that it will reach the irreducible background from atmospheric neutrinos (and there is no prefix smaller than yocto anyway), after which new detection techniques will be needed. For dark matter masses closer to 1 GeV, several orders of magnitude of pristine parameter space will be covered by the SuperCDMS experiment. Until then we are kept in suspense. Is dark matter made of WIMPs? And if yes, does it stick above the neutrino sea?

by Mad Hatter (noreply@blogger.com) at June 01, 2018 05:30 PM

Tommaso Dorigo - Scientificblogging

MiniBoone Confirms Neutrino Anomaly
Neutrinos, the most mysterious and fascinating of all elementary particles, continue to puzzle physicists. 20 years after the experimental verification of a long-debated effect whereby the three neutrino species can "oscillate", changing their nature by turning one into the other as they propagate in vacuum and in matter, the jury is still out to decide what really is the matter with them. And a new result by the MiniBoone collaboration is stirring waters once more.

read more

by Tommaso Dorigo at June 01, 2018 12:49 PM

May 26, 2018

Clifford V. Johnson - Asymptotia

Resolution

Today is the release of the short story anthology Twelve Tomorrows from MIT Press, with a wonderful roster of authors. (It is an annual project of the MIT Technology Review.) I’m in there too, with a graphic novella called “Resolution”. It's the first graphic novella in this anthology's five-year history, and it is the first time MIT Press is publishing the anthology. Physicists and Mathematicians will appreciate the title choice upon reading. Order! Share!

-cvj Click to continue reading this post

The post Resolution appeared first on Asymptotia.

by Clifford at May 26, 2018 12:36 AM

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

A festschrift at UCC

One of my favourite academic traditions is the festschrift, a conference convened to honour the contribution of a senior academic. In a sense, it’s academia’s version of an Oscar for lifetime achievement, as scholars from all around the world gather to pay tribute to their former mentor, colleague or collaborator.

Festschrifts tend to be very stimulating meetings, as the diverging careers of former students and colleagues typically make for a diverse set of talks. At the same time, there is usually a unifying theme based around the specialism of the professor being honoured.

And so it was at NIALLFEST this week, as many of the great and the good from the world of Einstein’s relativity gathered at University College Cork to pay tribute to Professor Niall O’Murchadha, a theoretical physicist in UCC’s Department of Physics noted internationally for seminal contributions to general relativity. Some measure of Niall’s influence can be seen from the number of well-known theorists at the conference, including major figures such as Bob Wald, Bill Unruh, Edward Malec and Kip Thorne (the latter was recently awarded the Nobel Prize in Physics for his contribution to the detection of gravitational waves). The conference website can be found here and the programme is here.

University College Cork: probably the nicest college campus in Ireland

As expected, we were treated to a series of high-level talks on diverse topics, from black hole collapse to analysis of high-energy jets from active galactic nuclei, from the initial value problem in relativity to the search for dark matter (slides for my own talk can be found here). To pick one highlight, Kip Thorne’s reminiscences of the forty-year search for gravitational waves made for a fascinating presentation, from his description of early designs of the LIGO interferometer to the challenge of getting funding for early prototypes – not to mention his prescient prediction that the most likely chance of success was the detection of a signal from the merger of two black holes.

All in all, a very stimulating conference. Most entertaining of all were the speakers’ recollections of Niall’s working methods and his interactions with students and colleagues over the years. Like a great piano teacher of old, one great professor leaves a legacy of critical thinkers dispersed around the world, and their students in turn inspire the next generation!

 

by cormac at May 26, 2018 12:16 AM

May 23, 2018

Clifford V. Johnson - Asymptotia

Bull

A pair of panels from my short story “Resolution” in the Science Fiction anthology Twelve Tomorrows, out on Friday from MITPress! Preorder now, share, and tell everyone about it. See here for ordering, for example.

-cvj Click to continue reading this post

The post Bull appeared first on Asymptotia.

by Clifford at May 23, 2018 03:20 AM

May 21, 2018

Andrew Jaffe - Leaves on the Line

Leon Lucy, R.I.P.

I have the unfortunate duty of using this blog to announce the death a couple of weeks ago of Professor Leon B Lucy, who had been a Visiting Professor working here at Imperial College from 1998.

Leon got his PhD in the early 1960s at the University of Manchester, and after postdoctoral positions in Europe and the US, worked at Columbia University and the European Southern Observatory over the years, before coming to Imperial. He made significant contributions to the study of the evolution of stars, understanding in particular how they lose mass over the course of their evolution, and how very close binary stars interact and evolve inside their common envelope of hot gas.

Perhaps most importantly, early in his career Leon realised how useful computers could be in astrophysics. He made two major methodological contributions to astrophysical simulations. First, he realised that by simulating randomised trajectories of single particles, he could take into account more physical processes that occur inside stars. This is now called “Monte Carlo Radiative Transfer” (scientists often use the term “Monte Carlo” — after the European gambling capital — for techniques using random numbers). He also invented the technique now called smoothed-particle hydrodynamics which models gases and fluids as aggregates of pseudo-particles, now applied to models of stars, galaxies, and the large scale structure of the Universe, as well as many uses outside of astrophysics.

Leon’s other major numerical contributions comprise advanced techniques for interpreting the complicated astronomical data we get from our telescopes. In this realm, he was most famous for developing the methods, now known as Lucy-Richardson deconvolution, that were used for correcting the distorted images from the Hubble Space Telescope, before NASA was able to send a team of astronauts to install correcting lenses in the early 1990s.

For all of this work Leon was awarded the Gold Medal of the Royal Astronomical Society in 2000. Since then, Leon kept working on data analysis and stellar astrophysics — even during his illness, he asked me to help organise the submission and editing of what turned out to be his final papers, on extracting information on binary-star orbits and (a subject dear to my heart) the statistics of testing scientific models.

Until the end of last year, Leon was a regular presence here at Imperial, always ready to contribute an occasionally curmudgeonly but always insightful comment on the science (and sociology) of nearly any topic in astrophysics. We hope that we will be able to appropriately memorialise his life and work here at Imperial and elsewhere. He is survived by his wife and daughter. He will be missed.

by Andrew at May 21, 2018 09:27 AM

May 19, 2018

Tommaso Dorigo - Scientificblogging

Piero Martin At TedX: An Eulogy Of The Error
Living in Padova has its merits. I moved here on January 1st and am enjoying every bit of it. I used to live in Venice, my home town, and commute to Padova on weekdays, but a number of factors led me to decide on this move (not least the fact that I could afford to buy a spacious place close to my office in Padova, while in Venice I was confined to a rented apartment).

read more

by Tommaso Dorigo at May 19, 2018 03:28 PM

May 18, 2018

Clifford V. Johnson - Asymptotia

Make with Me!

Bay Area! You're up next! The Maker Faire is a wonderful event/movement that I've heard about for years and which always struck me as very much in line with my own way of being (making, tinkering, building, creating, as time permits...) On Sunday I'll have the honour of being on one of the centre stages (3:45pm) talking with Kishore Hari (of the podcast Inquiring Minds) about how I made The Dialogues, and why. I might go into some extra detail about my research into making graphic books, and the techniques I used, given the audience. Why yes, I'll sign books for you afterwards, of course. Thanks for asking.

I recommend getting a day pass and seeing a ton of interesting events that day! Here's a link to the Sunday schedule, and from there you can find links to the whole faire and tickets!

-cvj Click to continue reading this post

The post Make with Me! appeared first on Asymptotia.

by Clifford at May 18, 2018 02:48 PM

May 14, 2018

Sean Carroll - Preposterous Universe

Intro to Cosmology Videos

In completely separate video news, here are videos of lectures I gave at CERN several years ago: “Cosmology for Particle Physicists” (May 2005). These are slightly technical — at the very least they presume you know calculus and basic physics — but are still basically accurate despite their age.

  1. Introduction to Cosmology
  2. Dark Matter
  3. Dark Energy
  4. Thermodynamics and the Early Universe
  5. Inflation and Beyond

Update: I originally linked these from YouTube, but apparently they were swiped from this page at CERN, and have been taken down from YouTube. So now I’m linking directly to the CERN copies. Thanks to commenters Bill Schempp and Matt Wright.

by Sean Carroll at May 14, 2018 07:09 PM

May 13, 2018

Tommaso Dorigo - Scientificblogging

Is Dark Matter Lurking In Anomalous Neutron Decays ?
A paper by B. Fornal and B. Grinstein published last week in Physical Review Letters is drawing a lot of interest to one of the most well-known pieces of subnuclear physics since the days of Enrico Fermi: beta decay.

read more

by Tommaso Dorigo at May 13, 2018 02:52 PM

May 10, 2018

Sean Carroll - Preposterous Universe

User-Friendly Naturalism Videos

Some of you might be familiar with the Moving Naturalism Forward workshop I organized way back in 2012. For two and a half days, an interdisciplinary group of naturalists (in the sense of “not believing in the supernatural”) sat around to hash out the following basic question: “So we don’t believe in God, what next?” How do we describe reality, how can we be moral, what are free will and consciousness, those kinds of things. Participants included Jerry Coyne, Richard Dawkins, Terrence Deacon, Simon DeDeo, Daniel Dennett, Owen Flanagan, Rebecca Newberger Goldstein, Janna Levin, Massimo Pigliucci, David Poeppel, Nicholas Pritzker, Alex Rosenberg, Don Ross, and Steven Weinberg.

Happily we recorded all of the sessions to video, and put them on YouTube. Unhappily, those were just unedited proceedings of each session — so ten videos, at least an hour and a half each, full of gems but without any very clear way to find them if you weren’t patient enough to sift through the entire thing.

No more! Thanks to the heroic efforts of Gia Mora, the proceedings have been edited down to a number of much more accessible and content-centered highlights. There are over 80 videos (!), with a median length of maybe 5 minutes, though they range up to about 20 minutes and down to less than one. Each video centers on a particular idea, theme, or point of discussion, so you can dive right into whatever particular issues you may be interested in. Here, for example, is a conversation on “Mattering and Secular Communities,” featuring Rebecca Goldstein, Dan Dennett, and Owen Flanagan.

The videos can be seen on the workshop web page, or on my YouTube channel. They’re divided into categories:

A lot of good stuff in there. Enjoy!

by Sean Carroll at May 10, 2018 02:48 PM

March 29, 2018

Robert Helling - atdotde

Machine Learning for Physics?!?
Today was the last day of a nice workshop here at the Arnold Sommerfeld Center organised by Thomas Grimm and Sven Krippendorf on the use of Big Data and Machine Learning in string theory. While the former (at this workshop mainly in the form of developments following Kreuzer/Skarke and taking it further for F-theory constructions, orbifolds and the like) appears to be quite advanced as of today, the latter is still in its very early days. At best.

I got the impression that for many physicists who have not yet spent much time with this, deep learning and in particular deep neural networks are expected to be some kind of silver bullet that can answer all kinds of questions that humans have not been able to answer despite some effort. I think this hope is at best premature, and looking at the (admittedly impressive) examples where it works (playing Go, classifying images, speech recognition, event filtering at the LHC), these seem to be problems where humans have at least a rough idea of how to solve them (if it is not something that humans do every day, like understanding text), and also roughly how one would code it, but that are too messy or vague to be treated by a traditional program.

So, during some of the less entertaining talks I sat down and thought about problems where I would expect neural networks to perform badly. And then, if this approach fails even in simpler cases that are fully under control one should maybe curb the expectations for the more complex cases that one would love to have the answer for. In the case of the workshop that would be guessing some topological (discrete) data (that depends very discontinuously on the model parameters). Here a simple problem would be a 2-torus wrapped by two 1-branes. And the computer is supposed to compute the number of matter generations arising from open strings at the intersections, i.e. given two branes (in terms of their slope w.r.t. the cycles of the torus) how often do they intersect? Of course these numbers depend sensitively on the slope (as a real number) as for rational slopes [latex]p/q[/latex] and [latex]m/n[/latex] the intersection number is the absolute value of [latex]pn-qm[/latex]. My guess would be that this is almost impossible to get right for a neural network, let alone the much more complicated variants of this simple problem.
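
For concreteness, here is what the target function the network would have to learn looks like: it is elementary to evaluate, but it depends only on the slopes in lowest terms and therefore jumps wildly as a real slope is varied (the example slopes below are my own, not from the workshop):

```python
# Intersection number of two 1-cycles on a 2-torus with rational slopes p/q and m/n.
from math import gcd

def intersection_number(p: int, q: int, m: int, n: int) -> int:
    """Topological intersection number of the cycles (p, q) and (m, n) on T^2."""
    assert gcd(p, q) == 1 and gcd(m, n) == 1, "wrapping numbers should be coprime"
    return abs(p * n - q * m)

print(intersection_number(1, 2, 3, 1))   # |1*1 - 2*3| = 5
print(intersection_number(2, 5, 3, 7))   # |2*7 - 5*3| = 1
```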

Related, but with the possibility for nicer pictures, is the following: Can a neural network learn the shape of the Mandelbrot set? Let me remind those of you who cannot remember the 80s anymore: for a complex number c you recursively apply the function
[latex]f_c(z)= z^2 +c[/latex]
starting from 0 and ask if this stays bounded (a quick check shows that once you are outside [latex]|z| < 2[/latex] you cannot avoid running to infinity). You color the point c in the complex plane according to the number of times you have to apply f_c to 0 to leave this circle. I decided to do this for complex numbers x+iy in the rectangle -0.74
I have written a small mathematica program to compute this image. Built into mathematica is also a neural network: You can feed training data to the function Predict[], for me these were 1,000,000 points in this rectangle and the number of steps it takes to leave the 2-ball. Then mathematica thinks for about 24 hours and spits out a predictor function. Then you can plot this as well:
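
The author's code is in Mathematica; as a rough Python analogue of the same experiment (my own sketch: scikit-learn's MLPRegressor stands in for Predict[], and the rectangle, sample size and network size are arbitrary choices, not those of the post):

```python
# Escape-time Mandelbrot data, fit by a small feed-forward neural network.
import numpy as np
from sklearn.neural_network import MLPRegressor

def escape_time(c: complex, max_iter: int = 100) -> int:
    """Iterations of f_c(z) = z**2 + c (starting at z = 0) before |z| > 2."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return n
    return max_iter

# Training data: random points c = x + i*y in a small rectangle of the c-plane.
rng = np.random.default_rng(0)
xy = rng.uniform([-0.8, 0.0], [-0.6, 0.2], size=(20_000, 2))
y = np.array([escape_time(complex(a, b)) for a, b in xy])

# Fit the map (x, y) -> escape time; the fitted function is smooth, so it
# cannot reproduce the fractal boundary -- which is exactly the post's point.
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=200)
model.fit(xy, y)
print(model.predict([[-0.7, 0.1]]))
```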


There is some similarity but clearly it has no idea about the fractal nature of the Mandelbrot set. If you really believe in magic powers of neural networks, you might even hope that once it learned the function for this rectangle one could extrapolate to outside this rectangle. Well, at least in this case, this hope is not justified: The neural network thinks the correct continuation looks like this:
Ehm. No.

All this of course with the caveat that I am no expert on neural networks and I did not attempt anything to tune the result. I only took the neural network function built into mathematica. Maybe, with a bit of coding and TensorFlow one can do much better. But on the other hand, this is a simple two dimensional problem. At least for traditional approaches this should be much simpler than the other much higher dimensional problems the physicists are really interested in.

by Robert Helling (noreply@blogger.com) at March 29, 2018 07:35 PM

Axel Maas - Looking Inside the Standard Model

Asking questions leads to a change of mind
In this entry, I would like to digress a bit from my usual discussion of our physics research subject. Rather, I would like to talk a bit about how I do this kind of research. There is a twofold motivation for me to do this.

One is that I am currently teaching, together with somebody from the philosophy department, a course on the philosophy of science in physics. It came as a surprise to me that one thing the students of philosophy are interested in is how I think. What are the objects, or subjects, and how do I connect them when doing research? Or even when I just think about a physics theory? The other is the review I have recently written. Both topics may seem unrelated at first. But there is a deep connection. It is less about what I have written in the review, and more about what led me up to this point. This requires some historical digression into my own research.

In the very beginning, I started out doing research on the strong interactions. One of the features of the strong interactions is that the supposed elementary particles, quarks and gluons, are never seen separately, but only in combinations, as hadrons. This phenomenon is called confinement. It is always somehow presented as a mystery. And as such, it is interesting. Thus, one question in my early research was how to understand this phenomenon.

Doing that, I came across an interesting result from the 1970s. It appears that an effect which at first sight is completely unrelated is in fact intimately related to confinement. At least in some theories. This is the Brout-Englert-Higgs effect. However, we seem to observe the particles responsible for and affected by the Higgs effect. And indeed, at that time, I was still thinking that the particles affected by the Brout-Englert-Higgs effect, especially the Higgs and the W and Z bosons, are just ordinary, observable particles. When one reads my first paper of this time on the Higgs, this is quite obvious. But then there was the result from the 1970s. It stated that, on a very formal level, there should be no difference between confinement and the Brout-Englert-Higgs effect, in a very definite way.

Now the implications of that seriously sparked my interest. But I thought this would help me understand confinement, as it was still very ingrained in me that confinement is a particular feature of the strong interactions. The mathematical connection I just took as a curiosity. And so I started extensive numerical simulations of the situation.

But while trying to do so, things which did not add up started to accumulate. This is probably most evident in a conference proceeding where I tried to make sense of something which, with hindsight, could never be interpreted in the way I did there. I still tried to press the result into the scheme of thinking that the Higgs and the W/Z are physical particles which we observe in experiment, as this is the standard lore. But the data would not fit this picture, and the more and better data I gathered, the more conflicted the results became. At some point, it was clear that something was amiss.

At that point, I had two options. Either keep the concepts of confinement and the Brout-Englert-Higgs effect as they have been since the 1960s. Or take the data seriously, assuming that these conceptions were wrong. It probably signifies my difficulties that it took me more than a year to come to terms with the results. In the end, the decisive point was that, as a theoretician, I needed to take my theory seriously, no matter the results. There is no way around it. And if it gave a prediction which did not fit my view of the experiments, then necessarily either my view or the theory was incorrect. The latter seemed more improbable than the former, as the theory fits experiment very well. So, finally, I found an explanation which was consistent. And this explanation accepted the curious mathematical statement from the 1970s that confinement and the Brout-Englert-Higgs effect are qualitatively the same, but not quantitatively. And thus the conclusion was that what we observe are not really the Higgs and the W/Z bosons, but rather some interesting composite objects, just like hadrons, which due to a quirk of the theory behave almost as if they were the elementary particles.

This was still a very challenging thought to me. After all, it was quite contradictory to the usual notions. Thus, it came as a great relief to me that during a trip a couple of months later someone pointed me to a few papers from the early 1980s, almost forgotten by most, which gave, for a completely different reason, the same answer. Together with my own observation, this made things click, and everything started to fit together: the 1970s curiosity, the standard notions, my data. I published that in mid-2012, even though it still lacked some more systematic work. But it still required a shift in my thinking from agreement to real understanding. That came in the years that followed.

The important click was to recognize that confinement and the Brout-Englert-Higgs effect are, just as pointed out mathematically in the 1970s, really just two faces of the same underlying phenomenon. On a very abstract level, essentially all the particles which make up the standard model are really just a means to an end. What we observe are objects which are described by them, but which they are not themselves. They emerge, just like hadrons emerge in the strong interaction, but with very different technical details. This is actually very deeply connected with the concept of gauge symmetry, but this becomes technical quickly. Of course, since this is fundamentally different from the usual picture, it required confirmation. So we went ahead, made predictions which could distinguish between the standard way of thinking and this way of thinking, and tested them. And it came out as we predicted. So it seems we are on the right track. And all the details, all the ifs, hows, and whys, and all the technicalities and math, you can find in the review.

To come full circle to the starting point: what happened in my mind during this decade was that the way I think about the physical theory I try to describe, the standard model, changed. In the beginning I was thinking in terms of particles and their interactions. Now, very much motivated by gauge symmetry and, not incidentally, by its deeper conceptual challenges, I think differently. I no longer think of the elementary particles as entities in themselves, but rather as auxiliary building blocks for the actually experimentally accessible quantities. The standard 'small-ball' analogy went away entirely, and in its place there formed, well, it is hard to say, a new class of entities, which does not necessarily have any analogy. Perhaps the best analogy is that of, no, I really do not know how to phrase it. Perhaps at a later time I will come across something. Right now, it is more math than words.

This also transformed the way I think about the original problem, confinement. I am curious where this, and all the rest, will lead. For now, the next step will be to move on from simulations and see whether we can find some way to test this in actual experiments. We have some ideas, but in the end, it may be that present experiments will not be sensitive enough. Stay tuned.

by Axel Maas (noreply@blogger.com) at March 29, 2018 01:09 PM

March 28, 2018

Marco Frasca - The Gauge Connection

Paper with a proof of confinement has been accepted

Recently, I wrote a paper together with Masud Chaichian (see here) containing a mathematical proof of confinement of a non-Abelian gauge theory based on the Kugo-Ojima criterion. This paper underwent an extended review by several colleagues well before its submission. One of them was Taichiro Kugo, one of the discoverers of the confinement criterion, who helped a lot to improve the paper and clarify some points. Then, after a review round of about two months, the paper was accepted in Physics Letters B, one of the most important journals in particle physics.

This paper contains the exact beta function of a Yang-Mills theory. This confirms that confinement arises from the combination of the running coupling and the propagator. This idea has been circulating in some papers in recent years. It emerged as soon as people realized that the propagator by itself was not enough to guarantee confinement, after extended studies on the lattice.

It is interesting to point out that confinement is rooted in BRST invariance and asymptotic freedom. The Kugo-Ojima confinement criterion makes it possible to close the argument in a rigorous way, yielding the exact beta function of the theory.

by mfrasca at March 28, 2018 09:34 AM

March 20, 2018

Marco Frasca - The Gauge Connection

Good news from Moriond

Some days ago, the Rencontres de Moriond 2018 ended, with CERN presenting a wealth of results, including on the Higgs particle. The direction that the two great experiments, ATLAS and CMS, have taken is that of improving the measurements of the Standard Model, as no evidence of possible new particles has been seen so far. Also, the studies of the properties of the Higgs particle have been refined as promised, and the news is really striking.

In a communication to the public (see here), CERN finally acknowledged, for the first time, a significant discrepancy between CMS data and the Standard Model for the signal strengths in the Higgs decay channels. They claim a 17% difference. This is what I have advocated for some years and have published in reputable journals. I will discuss this below. For now, I would only like to show you the CMS results in the figure below.

ATLAS, for its part, sees a significant discrepancy in the ZZ channel (\(2\sigma\)) and \(1\sigma\) compatibility in the WW channel. Here are their results.

On the left the WW channel is shown, and on the right the combined \(\gamma\gamma\) and ZZ channels.

The reason for the discrepancy is, as I have shown in some papers (see here, here and here), the improper use of perturbation theory to evaluate the Higgs sector. The true propagator of the theory is a sum of Yukawa-like propagators with a harmonic-oscillator spectrum of masses. I solved this sector of the Standard Model exactly. So, when the full propagator is taken into account, the discrepancy goes in the direction of an increased signal strength. Is it worth a try?
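
Schematically – the precise coefficients and mass spectrum are in the linked papers, and this display is only meant to fix the idea – such a propagator has the form
\[
G(p)=\sum_{n=0}^{\infty}\frac{B_n}{p^2-m_n^2+i\epsilon}\,,\qquad m_n\propto 2n+1,
\]
i.e. a tower of Yukawa-like poles with masses spaced like the levels of a harmonic oscillator.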

This means that this is not physics beyond the Standard Model but, rather, the Standard Model in its full glory teaching us something new about quantum field theory. Now, we are eager to see the improvements in the data to come with the new run of the LHC starting now. At the summer conferences we will have reasons to be excited.

by mfrasca at March 20, 2018 09:17 AM

March 17, 2018

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

Remembering Stephen Hawking

Like many physicists, I woke to some sad news early last Wednesday morning, and to a phoneful of requests from journalists for a soundbite. In fact, although I bumped into Stephen at various conferences, I only had one significant meeting with him – he was intrigued by my research group’s discovery that Einstein once attempted a steady-state model of the universe. It was a slightly scary but very funny meeting during which his famous sense of humour was fully at play.

Yours truly talking steady-state cosmology with Stephen Hawking

I recalled the incident in a radio interview with RTE Radio 1 on Wednesday. As I say in the piece, the first words that appeared on Stephen’s screen were “I knew..” My heart sank as I assumed he was about to say “I knew about that manuscript“. But when I had recovered sufficiently to look again, what Stephen was actually saying was “I knew ..your father”. Phew! You can find the podcast here.

Hawking in conversation with my late father (LHS) and with Ernest Walton (RHS)

RTE TV had a very nice obituary on the Six One News; I have a cameo appearance a few minutes into the piece here.

In my view, few could question Hawking’s brilliant contributions to physics, or his outstanding contribution to the public awareness of science. His legacy also includes the presence of many brilliant young physicists at the University of Cambridge today. However, as I point out in a letter in today’s Irish Times, had Hawking lived in Ireland, he probably would have found it very difficult to acquire government funding for his work. Indeed, he would have found that research into the workings of the universe does not qualify as one of the “strategic research areas” identified by our national funding body, Science Foundation Ireland. I suspect the letter will provoke an angry response from certain quarters, but it is tragically true.

Update

The above notwithstanding, it’s important not to overstate the importance of one scientist. Indeed, today’s Sunday Times contains a good example of the dangers of science history being written by journalists. Discussing Stephen’s 1974 work on black holes, Bryan Appleyard states  “The paper in effect launched the next four decades of cutting edge physics. Odd flowers with odd names bloomed in the garden of cosmic speculation – branes, worldsheets , supersymmetry …. and, strangest of all, the colossal tree of string theory”.

What? String theory, supersymmetry and brane theory are all modern theories of particle physics (the study of the world of the very small). While these theories were used to some extent by Stephen in his research in cosmology (the study of the very large), it is ludicrous to suggest that they were launched by his work.

 

by cormac at March 17, 2018 08:27 PM

March 16, 2018

Sean Carroll - Preposterous Universe

Stephen Hawking’s Scientific Legacy

Stephen Hawking died Wednesday morning, age 76. Plenty of memories and tributes have been written, including these by me:

I can also point to my Story Collider story from a few years ago, about how I turned down a job offer from Hawking, and eventually took lessons from his way of dealing with the world.

Of course Hawking has been mentioned on this blog many times.

When I started writing the above pieces (mostly yesterday, in a bit of a rush), I stumbled across this article I had written several years ago about Hawking’s scientific legacy. It was solicited by a magazine at a time when Hawking was very ill and people thought he would die relatively quickly — it wasn’t the only time people thought that, only to be proven wrong. I’m pretty sure the article was never printed, and I never got paid for it; so here it is!

(If you’re interested in a much better description of Hawking’s scientific legacy by someone who should know, see this article in The Guardian by Roger Penrose.)

Stephen Hawking’s Scientific Legacy

Stephen Hawking is the rare scientist who is also a celebrity and cultural phenomenon. But he is also the rare cultural phenomenon whose celebrity is entirely deserved. His contributions can be characterized very simply: Hawking contributed more to our understanding of gravity than any physicist since Albert Einstein.

“Gravity” is an important word here. For much of Hawking’s career, theoretical physicists as a community were more interested in particle physics and the other forces of nature — electromagnetism and the strong and weak nuclear forces. “Classical” gravity (ignoring the complications of quantum mechanics) had been figured out by Einstein in his theory of general relativity, and “quantum” gravity (creating a quantum version of general relativity) seemed too hard. By applying his prodigious intellect to the most well-known force of nature, Hawking was able to come up with several results that took the wider community completely by surprise.

By acclamation, Hawking’s most important result is the realization that black holes are not completely black — they give off radiation, just like ordinary objects. Before that famous paper, he proved important theorems about black holes and singularities, and afterward studied the universe as a whole. In each phase of his career, his contributions were central.

The Classical Period

While working on his Ph.D. thesis in Cambridge in the mid-1960’s, Hawking became interested in the question of the origin and ultimate fate of the universe. The right tool for investigating this problem is general relativity, Einstein’s theory of space, time, and gravity. According to general relativity, what we perceive as “gravity” is a reflection of the curvature of spacetime. By understanding how that curvature is created by matter and energy, we can predict how the universe evolves. This may be thought of as Hawking’s “classical” period, to contrast classical general relativity with his later investigations in quantum field theory and quantum gravity.

Around the same time, Roger Penrose at Oxford had proven a remarkable result: that according to general relativity, under very broad circumstances, space and time would crash in on themselves to form a singularity. If gravity is the curvature of spacetime, a singularity is a moment in time when that curvature becomes infinitely big. This theorem showed that singularities weren’t just curiosities; they are an important feature of general relativity.

Penrose’s result applied to black holes — regions of spacetime where the gravitational field is so strong that even light cannot escape. Inside a black hole, the singularity lurks in the future. Hawking took Penrose’s idea and turned it around, aiming at the past of our universe. He showed that, under similarly general circumstances, space must have come into existence at a singularity: the Big Bang. Modern cosmologists talk (confusingly) about both the Big Bang “model,” which is the very successful theory that describes the evolution of an expanding universe over billions of years, and also the Big Bang “singularity,” which we still don’t claim to understand.

Hawking then turned his own attention to black holes. Another interesting result by Penrose had shown that it’s possible to extract energy from a rotating black hole, essentially by bleeding off its spin until it’s no longer rotating. Hawking was able to demonstrate that, although you can extract energy, the area of the event horizon surrounding the black hole will always increase in any physical process. This “area theorem” was both important in its own right, and also evocative of a completely separate area of physics: thermodynamics, the study of heat.

Thermodynamics obeys a set of famous laws. For example, the first law tells us that energy is conserved, while the second law tells us that entropy — a measure of the disorderliness of the universe — never decreases for an isolated system. Working with James Bardeen and Brandon Carter, Hawking proposed a set of laws for “black hole mechanics,” in close analogy with thermodynamics. Just as in thermodynamics, the first law of black hole mechanics ensures that energy is conserved. The second law is Hawking’s area theorem, that the area of the event horizon never decreases. In other words, the area of the event horizon of a black hole is very analogous to the entropy of a thermodynamic system — they both tend to increase over time.
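
For reference, the analogy can be written in symbols (the standard textbook form for an uncharged rotating black hole, in units with \(G=c=1\); these formulas are added here and are not quoted from the article):
\[
dM=\frac{\kappa}{8\pi}\,dA+\Omega_H\,dJ \qquad\longleftrightarrow\qquad dE=T\,dS+\Omega\,dJ,
\]
with the horizon area \(A\) playing the role of entropy and the surface gravity \(\kappa\) playing the role of temperature.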

Black Hole Evaporation

Hawking and his collaborators were justly proud of the laws of black hole mechanics, but they viewed them as simply a formal analogy, not a literal connection between gravity and thermodynamics. In 1972, a graduate student at Princeton University named Jacob Bekenstein suggested that there was more to it than that. Bekenstein, on the basis of some ingenious thought experiments, suggested that the behavior of black holes isn’t simply like thermodynamics, it actually is thermodynamics. In particular, black holes have entropy.

Like many bold ideas, this one was met with resistance from experts — and at this point, Stephen Hawking was the world’s expert on black holes. Hawking was certainly skeptical, and for good reason. If black hole mechanics is really just a form of thermodynamics, that means black holes have a temperature. And objects that have a temperature emit radiation — the famous “black body radiation” that played a central role in the development of quantum mechanics. So if Bekenstein were right, it would seemingly imply that black holes weren’t really black (although Bekenstein himself didn’t quite go that far).

To address this problem seriously, you need to look beyond general relativity itself, since Einstein’s theory is purely “classical” — it doesn’t incorporate the insights of quantum mechanics. Hawking knew that Russian physicists Alexander Starobinsky and Yakov Zel’dovich had investigated quantum effects in the vicinity of black holes, and had predicted a phenomenon called “superradiance.” Just as Penrose had showed that you could extract energy from a spinning black hole, Starobinsky and Zel’dovich showed that rotating black holes could emit radiation spontaneously via quantum mechanics. Hawking himself was not an expert in the techniques of quantum field theory, which at the time were the province of particle physicists rather than general relativists. But he was a quick study, and threw himself into the difficult task of understanding the quantum aspects of black holes, so that he could find Bekenstein’s mistake.

Instead, he surprised himself, and in the process turned theoretical physics on its head. What Hawking eventually discovered was that Bekenstein was right — black holes do have entropy — and that the extraordinary implications of this idea were actually true — black holes are not completely black. These days we refer to the “Bekenstein-Hawking entropy” of black holes, which emit “Hawking radiation” at their “Hawking temperature.”
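
For reference, the standard formulas that came out of this episode (added here, not quoted from the article) are
\[
T_H=\frac{\hbar c^3}{8\pi G M k_B},\qquad S_{BH}=\frac{k_B c^3 A}{4G\hbar},
\]
so smaller black holes are hotter, and the entropy grows with the horizon area rather than with the enclosed volume.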

There is a nice hand-waving way of understanding Hawking radiation. Quantum mechanics says (among other things) that you can’t pin a system down to a definite classical state; there is always some intrinsic uncertainty in what you will see when you look at it. This is even true for empty space itself — when you look closely enough, what you thought was empty space is really alive with “virtual particles,” constantly popping in and out of existence. Hawking showed that, in the vicinity of a black hole, a pair of virtual particles can be split apart, one falling into the hole and the other escaping as radiation. Amazingly, the infalling particle has a negative energy as measured by an observer outside. The result is that the radiation gradually takes mass away from the black hole — it evaporates.

Hawking’s result had obvious and profound implications for how we think about black holes. Instead of being a cosmic dead end, where matter and energy disappear forever, they are dynamical objects that will eventually evaporate completely. But more importantly for theoretical physics, this discovery raised a question to which we still don’t know the answer: when matter falls into a black hole, and then the black hole radiates away, where does the information go?

If you take an encyclopedia and toss it into a fire, you might think the information contained inside is lost forever. But according to the laws of quantum mechanics, it isn’t really lost at all; if you were able to capture every bit of light and ash that emerged from the fire, in principle you could exactly reconstruct everything that went into it, even the print on the book pages. But black holes, if Hawking’s result is taken at face value, seem to destroy information, at least from the perspective of the outside world. This conundrum is the “black hole information loss puzzle,” and has been nagging at physicists for decades.

In recent years, progress in understanding quantum gravity (at a purely thought-experiment level) has convinced more people that the information really is preserved. In 1997 Hawking made a bet with American physicists Kip Thorne and John Preskill; Hawking and Thorne said that information was destroyed, Preskill said that somehow it was preserved. In 2007 Hawking conceded his end of the bet, admitting that black holes don’t destroy information. However, Thorne has not conceded for his part, and Preskill himself thinks the concession was premature. Black hole radiation and entropy continue to be central guiding principles in our search for a better understanding of quantum gravity.

Quantum Cosmology

Hawking’s work on black hole radiation relied on a mixture of quantum and classical ideas. In his model, the black hole itself was treated classically, according to the rules of general relativity; meanwhile, the virtual particles near the black hole were treated using the rules of quantum mechanics. The ultimate goal of many theoretical physicists is to construct a true theory of quantum gravity, in which spacetime itself would be part of the quantum system.

If there is one place where quantum mechanics and gravity both play a central role, it’s at the origin of the universe itself. And it’s to this question, unsurprisingly, that Hawking devoted the latter part of his career. In doing so, he established the agenda for physicists’ ambitious project of understanding where our universe came from.

In quantum mechanics, a system doesn’t have a position or velocity; its state is described by a “wave function,” which tells us the probability that we would measure a particular position or velocity if we were to observe the system. In 1983, Hawking and James Hartle published a paper entitled simply “Wave Function of the Universe.” They proposed a simple procedure from which — in principle! — the state of the entire universe could be calculated. We don’t know whether the Hartle-Hawking wave function is actually the correct description of the universe. Indeed, because we don’t actually have a full theory of quantum gravity, we don’t even know whether their procedure is sensible. But their paper showed that we could talk about the very beginning of the universe in a scientific way.

Studying the origin of the universe offers the prospect of connecting quantum gravity to observable features of the universe. Cosmologists believe that tiny variations in the density of matter from very early times gradually grew into the distribution of stars and galaxies we observe today. A complete theory of the origin of the universe might be able to predict these variations, and carrying out this program is a major occupation of physicists today. Hawking made a number of contributions to this program, both from his wave function of the universe and in the context of the “inflationary universe” model proposed by Alan Guth.

Simply talking about the origin of the universe is a provocative step. It raises the prospect that science might be able to provide a complete and self-contained description of reality — a prospect that stretches beyond science, into the realms of philosophy and theology. Hawking, always provocative, never shied away from these implications. He was fond of recalling a cosmology conference hosted by the Vatican, at which Pope John Paul II allegedly told the assembled scientists not to inquire into the origin of the universe, “because that was the moment of creation and therefore the work of God.” Admonitions of this sort didn’t slow Hawking down; he lived his life in a tireless pursuit of the most fundamental questions science could tackle.

 

by Sean Carroll at March 16, 2018 11:23 PM

Ben Still - Neutrino Blog

Particle Physics Brick by Brick
It has been a very long time since I last posted and I apologise for that. I have been working the LEGO analogy, as described in the pentaquark series and elsewhere, into a book. The book is called Particle Physics Brick by Brick and the aim is to stretch the LEGO analogy to breaking point while covering as much of the standard model of particle physics as possible. I have had enormous fun writing it and I hope that you will enjoy it as much if you choose to buy it.

It has been available in the UK since September 2017 and you can buy it from Foyles / Waterstones / Blackwell's / AmazonUK where it is receiving ★★★★★ reviews

It is released in the US this Wednesday 21st March 2018 and you can buy it from all good book stores and Amazon.com 

I just wanted to share a few reviews of the book as well because it makes me happy!

Spend a few hours perusing these pages and you'll be in a much better frame of mind to understand your place in the cosmos... The astronomically large objects of the universe are no easier to grasp than the atomically small particles of matter. That's where Ben Still comes in, carrying a box of Legos. A British physicist with a knack for explaining abstract concepts... He starts by matching the weird properties and interactions described by the Standard Model of particle physics with the perfectly ordinary blocks of a collection of Legos. Quarks and leptons, gluons and charms are assigned to various colors and combinations of plastic bricks. Once you've got that system in mind, hang on: Still races off to illustrate the Big Bang, the birth of stars, electromagnetism and all matter of fantastical-sounding phenomenon, like mesons and beta decay. "Given enough plastic bricks, the rules in this book and enough time," Still concludes, "one might imagine that a plastic Universe could be built by us, brick by brick." Remember that the next time you accidentally step on one barefoot.--Ron Charles, The Washington Post

Complex topics explained simply An excellent book. I am Head of Physics at a school and have just ordered 60 copies of this for our L6th students for summer reading before studying the topic on particle physics early next year. Highly recommended. - Ben ★★★★★ AmazonUK

It's beautifully illustrated and very eloquently explains the fundamentals of particle ...
This is a gem of a pop science book. It's beautifully illustrated and very eloquently explains the fundamentals of particle physics without hitting you over the head with quantum field theory and Lagrangian dynamics. The author has done an exceptional job. This is a must have for all students and academics of both physics and applied maths! - Jamie ★★★★★ AmazonUK

by Ben (noreply@blogger.com) at March 16, 2018 09:32 PM

March 02, 2018

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

Snowbound academics are better academics

Like most people in Ireland, I am working at home today. We got quite a dump of snow in the last two days, and there is no question of going anywhere until the roads clear. Worse, our college closed quite abruptly and I was caught on the hop – there are a lot of things (flash drives, books and papers) sitting smugly in my office that I need for my usual research.


The college on Monday evening

That said, I must admit I’m finding it all quite refreshing. For the first time in years, I have time to read interesting things in my daily email; all those postings from academic listings that I never seem to get time to read normally. I’m enjoying it so much, I wonder how much stuff I miss the rest of the time.


The view from my window as I write this

This morning, I thoroughly enjoyed a paper by Nicholas Campion on the representation of astronomy and cosmology in the works of William Shakespeare. I’ve often wondered about this as Shakespeare lived long enough to know of Galileo’s ground-breaking astronomical observations. However, anyone expecting coded references to new ideas about the universe in Shakespeare’s sonnets and plays will be disappointed; apparently he mainly sticks to classical ideas, with a few vague references to the changing order.

I’m also reading about early attempts to measure the parallax of light from a comet, especially by the great Danish astronomer Tycho Brahe. This paper comes courtesy of the History of Astronomy Discussion Group listings, a really useful resource for anyone interested in the history of astronomy.

While I’m reading all this, I’m also trying to keep abreast of a thoroughly modern debate taking place worldwide, concerning the veracity of an exciting new result in cosmology on the formation of the first stars. It seems a group studying the cosmic microwave background think they have found evidence of a signal representing the absorption of radiation from the first stars. This is exciting enough if correct, but the dramatic part is that the signal is much larger than expected, and one explanation is that this effect may be due to the presence of Dark Matter.

If true, the result would be a major step in our understanding of the formation of stars,  plus a major step in the demonstration of the existence of Dark Matter. However, it’s early days – there are many possible sources of a spurious signal and signals that are larger than expected have a poor history in modern physics! There is a nice article on this in The Guardian, and you can see some of the debate on Peter Coles’s blog In the Dark.  Right or wrong, it’s a good example of how scientific discovery works – if the team can show they have taken all possible spurious results into account, and if other groups find the same result, skepticism will soon be converted into excited acceptance.

All in all, a great day so far. My only concern is that this is the way academia should be – with our day-to-day commitments in teaching and research, it’s easy to forget there is a larger academic world out there.

Update

Of course, the best part is the walk into the village when it finally stops chucking it down. Can’t believe my local pub is open!


Dunmore East in the snow today

 

by cormac at March 02, 2018 01:44 PM

March 01, 2018

Sean Carroll - Preposterous Universe

Dark Matter and the Earliest Stars

So here’s something intriguing: an observational signature from the very first stars in the universe, which formed about 180 million years after the Big Bang (a little over one percent of the current age of the universe). This is exciting all by itself, and well worthy of our attention; getting data about the earliest generation of stars is notoriously difficult, and any morsel of information we can scrounge up is very helpful in putting together a picture of how the universe evolved from a relatively smooth plasma to the lumpy riot of stars and galaxies we see today. (Pop-level writeups at The Guardian and Science News, plus a helpful Twitter thread from Emma Chapman.)

But the intrigue gets kicked up a notch by an additional feature of the new results: the data imply that the cosmic gas surrounding these early stars is quite a bit cooler than we expected. What’s more, there’s a provocative explanation for why this might be the case: the gas might be cooled by interacting with dark matter. That’s quite a bit more speculative, of course, but sensible enough (and grounded in data) that it’s worth taking the possibility seriously.

[Update: skepticism has already been raised about the result. See this comment by Tim Brandt below.]

Illustration: NR Fuller, National Science Foundation

Let’s think about the stars first. We’re not seeing them directly; what we’re actually looking at is the cosmic microwave background (CMB) radiation, from about 380,000 years after the Big Bang. That radiation passes through the cosmic gas spread throughout the universe, occasionally getting absorbed. But when stars first start shining, they can very gently excite the gas around them (the 21cm hyperfine transition, for you experts), which in turn can affect the wavelength of radiation that gets absorbed. This shows up as a tiny distortion in the spectrum of the CMB itself. It’s that distortion which has now been observed, and the exact wavelength at which the distortion appears lets us work out the time at which those earliest stars began to shine.

Two cool things about this. First, it’s a tour de force bit of observational cosmology by Judd Bowman and collaborators. Not that collecting the data is hard by modern standards (observing the CMB is something we’re good at), but that the researchers were able to account for all of the different ways such a distortion could be produced other than by the first stars. (Contamination by such “foregrounds” is a notoriously tricky problem in CMB observations…) Second, the experiment itself is totally charming. EDGES (Experiment to Detect Global EoR [Epoch of Reionization] Signature) is a small-table-sized gizmo surrounded by a metal mesh, plopped down in a desert in Western Australia. Three cheers for small science!

But we all knew that the first stars had to turn on at some point; the real question was when. The surprise is that the spectral distortion is larger than expected (at 3.8 sigma), a sign that the cosmic gas surrounding the stars is colder than expected (and can therefore absorb more radiation). Why would that be the case? It’s not easy to come up with explanations — there are plenty of ways to heat up gas, but it’s not easy to cool it down.
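For readers who want a formula, the standard way the 21cm literature parameterizes this (my paraphrase of the textbook result, not something taken from the post itself) is through the sky-averaged brightness-temperature contrast

T_21 ∝ x_HI (1 − T_CMB/T_S) √(1+z),

where x_HI is the neutral-hydrogen fraction, T_CMB is the radiation temperature, and T_S is the “spin temperature” that tracks the gas temperature once light from the first stars couples the two. When T_S < T_CMB the signal is negative, i.e. absorption, and the colder the gas the more negative T_21 becomes; a deeper-than-expected dip therefore points directly to colder-than-expected gas.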

One bold hypothesis is put forward by Rennan Barkana in a companion paper. One way to cool down gas is to have it interact with something even colder. So maybe — cold dark matter? Barkana runs the numbers, given what we know about the density of dark matter, and finds that we could get the requisite amount of cooling with a relatively light dark-matter particle — less than five times the mass of the proton, well less than expected in typical models of Weakly Interacting Massive Particles. But not completely crazy. And not really constrained by current detection limits from underground experiments, which are generally sensitive to higher masses.
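To get a feel for why the favored mass comes out at a few proton masses, here is a deliberately crude toy estimate (my own back-of-envelope sketch, not Barkana’s actual calculation, which tracks velocity-dependent scattering rates rather than assuming full thermal contact): if the baryons could share their heat completely with a perfectly cold dark-matter bath, the final temperature would depend only on how many dark-matter particles there are per baryon, and lighter particles mean more of them.

# Toy estimate in Python: equipartition of baryonic heat with a cold dark-matter bath.
# All numbers are rounded, illustrative values chosen by me, not those used in the actual papers.
m_p = 0.938          # proton mass in GeV
mu_b = 1.22 * m_p    # assumed mean molecular mass of the primordial H/He gas
rho_ratio = 5.4      # cosmic dark-matter-to-baryon mass density ratio (rounded)
T_gas = 7.0          # roughly the gas temperature expected at z ~ 17 without extra cooling, in K

for m_chi in [0.5, 1.0, 3.0, 10.0, 100.0]:    # candidate dark-matter masses in GeV
    n_ratio = rho_ratio * mu_b / m_chi        # dark-matter particles per baryon
    T_final = T_gas / (1.0 + n_ratio)         # temperature after complete equipartition
    print(f"m_chi = {m_chi:6.1f} GeV  ->  equilibrated gas temperature ~ {T_final:.1f} K")

The trend is the point: a GeV-scale particle outnumbers the baryons and can soak up a lot of their heat, while a 100 GeV WIMP is far too rare to cool the gas appreciably. That is the qualitative reason the required mass comes out light.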

The tricky part is figuring out how the dark matter could interact with the ordinary matter to cool it down. Barkana doesn’t propose any specific model, but looks at interactions that depend sharply on the relative velocity of the particles, as v^{-4}. You might get that, for example, if there was an extremely light (perhaps massless) boson mediating the interaction between dark and ordinary matter. There are already tight limits on such things, but not enough to completely squelch the idea.
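Written out (my paraphrase of the standard parameterization, not a quotation from the paper), the assumption is a cross section of the form

σ(v) = σ_1 (v/c)^{-4},

the same Rutherford-like velocity dependence you get from exchanging a very light mediator. Because the heat transfer is most efficient when the relative velocities are small, an interaction like this naturally switches on in the cold, slow-moving gas of the era probed by the 21cm signal while staying comparatively unobtrusive elsewhere.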

This is all extraordinarily speculative, but worth keeping an eye on. It will be full employment for particle-physics model-builders, who will be tasked with coming up with full theories that predict the right relic abundance of dark matter, have the right velocity-dependent force between dark and ordinary matter, and are compatible with all other known experimental constraints. It’s worth doing, as currently all of our information about dark matter comes from its gravitational interactions, not its interactions directly with ordinary matter. Any tiny hint of that is worth taking very seriously.

But of course it might all go away. More work will be necessary to verify the observations, and to work out the possible theoretical implications. Such is life at the cutting edge of science!

by Sean Carroll at March 01, 2018 12:00 AM

February 08, 2018

Sean Carroll - Preposterous Universe

Why Is There Something, Rather Than Nothing?

A good question!

Or is it?

I’ve talked before about the issue of why the universe exists at all (1, 2), but now I’ve had the opportunity to do a relatively careful job with it, courtesy of Eleanor Knox and Alastair Wilson. They are editing an upcoming volume, the Routledge Companion to the Philosophy of Physics, and asked me to contribute a chapter on this topic. Final edits aren’t done yet, but I’ve decided to put the draft on the arxiv:

Why Is There Something, Rather Than Nothing?
Sean M. Carroll

It seems natural to ask why the universe exists at all. Modern physics suggests that the universe can exist all by itself as a self-contained system, without anything external to create or sustain it. But there might not be an absolute answer to why it exists. I argue that any attempt to account for the existence of something rather than nothing must ultimately bottom out in a set of brute facts; the universe simply is, without ultimate cause or explanation.

As you can see, my basic tack hasn’t changed: this kind of question might be the kind of thing that doesn’t have a sensible answer. In our everyday lives, it makes sense to ask “why” this or that event occurs, but such questions have answers only because they are embedded in a larger explanatory context. In particular, the world of our everyday experience is an emergent approximation with an extremely strong arrow of time, such that we can safely associate “causes” with subsequent “effects.” The universe, considered as all of reality (i.e. let’s include the multiverse, if any), isn’t like that. The right question to ask isn’t “Why did this happen?”, but “Could this have happened in accordance with the laws of physics?” As far as the universe and our current knowledge of the laws of physics is concerned, the answer is a resounding “Yes.” The demand for something more — a reason why the universe exists at all — is a relic piece of metaphysical baggage we would be better off discarding.

This perspective gets pushback from two different sides. On the one hand we have theists, who believe that they can answer why the universe exists, and the answer is God. As we all know, this raises the question of why God exists; but aha, say the theists, that’s different, because God necessarily exists, unlike the universe which could plausibly have not. The problem with that is that nothing exists necessarily, so the move is pretty obviously a cheat. I didn’t have a lot of room in the paper to discuss this in detail (in what after all was meant as a contribution to a volume on the philosophy of physics, not the philosophy of religion), but the basic idea is there. Whether or not you want to invoke God, you will be left with certain features of reality that have to be explained by “and that’s just the way it is.” (Theism could possibly offer a better account of the nature of reality than naturalism — that’s a different question — but it doesn’t let you wiggle out of positing some brute facts about what exists.)

The other side are those scientists who think that modern physics explains why the universe exists. It doesn’t! One purported answer — “because Nothing is unstable” — was never even supposed to explain why the universe exists; it was suggested by Frank Wilczek as a way of explaining why there is more matter than antimatter. But any such line of reasoning has to start by assuming a certain set of laws of physics in the first place. Why is there even a universe that obeys those laws? This, I argue, is not a question to which science is ever going to provide a snappy and convincing answer. The right response is “that’s just the way things are.” It’s up to us as a species to cultivate the intellectual maturity to accept that some questions don’t have the kinds of answers that are designed to make us feel satisfied.

by Sean Carroll at February 08, 2018 05:19 PM

February 07, 2018

Axel Maas - Looking Inside the Standard Model

How large is an elementary particle?
Recently, in the context of a master's thesis, our group has begun to determine the size of the W boson. The natural questions about this project are: Why do you do that? Do we not know it already? And do elementary particles have a size at all?

It is best to answer these questions in reverse order.

So, do elementary particles have a size at all? Well, elementary particles are called elementary as they are the most basic constituents. In our theories today, they start out as pointlike. Only particles made from other particles, so-called bound states like a nucleus or a hadron, have a size. And now comes the but.

First of all, we do not yet know whether our elementary particles are really elementary. They may themselves be bound states of even more elementary particles. But in experiments we can only determine upper bounds on the size. Better experiments will reduce this upper bound. Eventually, we may see that a particle previously thought of as point-like has a size. This has happened quite frequently over time, and each time it opened up a new level of elementary particle theories. Therefore measuring the size is important. But for us, as theoreticians, this type of question is only interesting if we have an idea of what the more elementary particles could be. And while some of our research goes in this direction, this project does not.

The other issue is that quantum effects give all elementary particles an 'apparent' size. This comes about because of how we measure the size of a particle: we shoot some other particle at it and measure how strongly it is deflected. A truly pointlike particle has a very characteristic deflection profile. But quantum effects allow additional particles to be created and destroyed in the vicinity of any particle. In particular, they allow for the existence of another particle of the same type, at least briefly. We cannot distinguish whether we hit the original particle or one of these. Since they are not at the same place as the original particle, their average distance looks like a size. This gives even a pointlike particle an apparent size, which we can measure. In this sense even an elementary particle has a size.
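A common way to make this quantitative (a standard textbook parameterization; I am assuming, rather than knowing, that it matches the conventions used in this project) is to wrap the deviation from pointlike scattering into a form factor F(Q^2), whose behaviour at small momentum transfer defines a mean squared radius:

F(Q^2) ≈ 1 − ⟨r^2⟩ Q^2/6 + …

A strictly pointlike particle has F = 1 at every Q^2. Both quantum fluctuations and genuine substructure push F away from 1, which is exactly why the quantum contribution has to be computed before a measured ⟨r^2⟩ can be attributed to an actual size.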

So, how can we then distinguish this apparent size from the actual size of a bound state? We can do this by calculation: we determine the apparent size due to the quantum fluctuations and compare it to the measurement. Deviations indicate an actual size. This is because for a real bound state we can scatter off any part of its structure, and not only its core. This difference looks pictorially like this:


So, do we know the size already? Well, as said, we can only determine upper limits. Searching for them is difficult and often goes via detours. One such detour is via so-called anomalous couplings. Measuring how they depend on energy provides indirect information on the size. There is an active experimental program at CERN to do this. The results so far say that the size of the W is below 0.0000000000000001 meter (10^{-16} m). This seems tiny, but in the world of particle physics it is not that strong a limit.
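To see why such a tiny length is still not a strong limit, one can translate it into an energy scale with the usual rule of thumb that a resolved distance r corresponds to a scale of order ħc/r (my illustration only, not the procedure used in the anomalous-coupling analyses):

# Convert the quoted upper bound on the W size into a rough equivalent energy scale.
hbar_c_MeV_fm = 197.327            # hbar * c in MeV * femtometers
r_bound_m = 1e-16                  # quoted upper bound on the W radius, in meters
r_bound_fm = r_bound_m * 1e15      # the same bound in femtometers (1 fm = 1e-15 m)

scale_GeV = hbar_c_MeV_fm / r_bound_fm / 1000.0   # hbar*c / r, converted to GeV
print(f"A radius below 1e-16 m corresponds to a scale above roughly {scale_GeV:.1f} GeV")

The bound therefore only says that any substructure scale lies above roughly 2 GeV, which is modest next to the 80 GeV mass of the W itself, let alone the TeV energies explored at the LHC; that is one way to see why the limit is described as not that strong.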

And now the interesting question: Why do we do this? As written, we do not want to make the W a bound state of something new. But one of our main research topics is driven by an interesting theoretical structure. If the standard model is taken seriously, the particle which we observe in an experiment and call the W is actually not the W of the underlying theory. Rather, it is a bound state, which is very, very similar to the elementary particle, but actually built from the elementary particles. The difference has been so small that identifying one with the other was a very good approximation up to today. But with better and better experiments this may change. Thus, we need to test this.

Because the thing we measure is then a bound state, it should have a, probably tiny, size. This would be a hallmark of this theoretical structure, and a sign that we have understood it. If the size is such that it could actually be measured at CERN, then this would be an important test of our theoretical understanding of the standard model.

However, this is not a simple quantity to calculate. Bound states are intrinsically complicated. Thus, we use simulations for this purpose. In fact, we take the same detour as the experiments and will determine an anomalous coupling, from which we then infer the size indirectly. In addition, the need to perform efficient simulations forces us to simplify the problem substantially. Hence, we will not get the perfect number, but we may get the order of magnitude, or perhaps be within a factor of two or so. And this is all we currently need to say whether a measurement is possible, or whether this will have to wait for the next generation of experiments, and thus whether we will learn within a few years or within a few decades whether we have understood the theory.

by Axel Maas (noreply@blogger.com) at February 07, 2018 11:18 AM

February 05, 2018

Matt Strassler - Of Particular Significance

In Memory of Joe Polchinski, the Brane Master

This week, the community of high-energy physicists — of those of us fascinated by particles, fields, strings, black holes, and the universe at large — is mourning the loss of one of the great theoretical physicists of our time, Joe Polchinski. It pains me deeply to write these words.

Everyone who knew him personally will miss his special qualities — his boyish grin, his slightly wicked sense of humor, his charming way of stopping mid-sentence to think deeply, his athleticism and friendly competitiveness. Everyone who knew his research will feel the absence of his particular form of genius, his exceptional insight, his unique combination of abilities, which I’ll try to sketch for you below. Those of us who were lucky enough to know him both personally and scientifically — well, we lose twice.


Polchinski — Joe, to all his colleagues — had one of those brains that works magic, and works magically. Scientific minds are as individual as personalities. Each physicist has a unique combination of talents and skills (and weaknesses); in modern lingo, each of us has a superpower or two. Rarely do you find two scientists who have the same ones.

Joe had several superpowers, and they were really strong. He had a tremendous knack for looking at old problems and seeing them in a new light, often overturning conventional wisdom or restating that wisdom in a new, clearer way. And he had prodigious technical ability, which allowed him to follow difficult calculations all the way to the end, on paths that would have deterred most of us.

One of the greatest privileges of my life was to work with Joe, not once but four times. I think I can best tell you a little about him, and about some of his greatest achievements, through the lens of that unforgettable experience.

[To my colleagues: this post was obviously written in trying circumstances, and it is certainly possible that my memory of distant events is foggy and in error.  I welcome any corrections that you might wish to suggest.]

Our papers between 1999 and 2006 were a sequence of sorts, aimed at understanding more fully the profound connection between quantum field theory — the language of particle physics — and string theory — best-known today as a candidate for a quantum theory of gravity. In each of those papers, as in many thousands of others written after 1995, Joe’s most influential contribution to physics played a central role. This was the discovery of objects known as “D-branes”, which he found in the context of string theory. (The term is a generalization of the word `membrane’.)

I can already hear the polemical haters of string theory screaming at me. ‘A discovery in string theory,’ some will shout, pounding the table, ‘an untested and untestable theory that’s not even wrong, should not be called a discovery in physics.’ Pay them no mind; they’re not even close, as you’ll see by the end of my remarks.

The Great D-scovery

In 1989, Joe, working with two young scientists, Jin Dai and Rob Leigh, was exploring some details of string theory, and carrying out a little mathematical exercise. Normally, in string theory, strings are little lines or loops that are free to move around anywhere they like, much like particles moving around in this room. But in some cases, particles aren’t in fact free to move around; you could, for instance, study particles that are trapped on the surface of a liquid, or trapped in a very thin whisker of metal. With strings, there can be a new type of trapping that particles can’t have — you could perhaps trap one end, or both ends, of the string within a surface, while allowing the middle of the string to move freely. The place where a string’s end may be trapped — whether a point, a line, a surface, or something more exotic in higher dimensions — is what we now call a “D-brane”.  [The `D’ arises for uninteresting technical reasons.]

Joe and his co-workers hit the jackpot, but they didn’t realize it yet. What they discovered, in retrospect, was that D-branes are an automatic feature of string theory. They’re not optional; you can’t choose to study string theories that don’t have them. And they aren’t just surfaces or lines that sit still. They’re physical objects that can roam the world. They have mass and create gravitational effects. They can move around and scatter off each other. They’re just as real, and just as important, as the strings themselves!


Fig. 1: D branes (in green) are physical objects on which a fundamental string (in red) can terminate.

It was as though Joe and his collaborators started off trying to understand why the chicken crossed the road, and ended up discovering the existence of bicycles, cars, trucks, buses, and jet aircraft.  It was that unexpected, and that rich.

And yet, nobody, not even Joe and his colleagues, quite realized what they’d done. Rob Leigh, Joe’s co-author, had the office next to mine for a couple of years, and we wrote five papers together between 1993 and 1995. Yet I think Rob mentioned his work on D-branes to me just once or twice, in passing, and never explained it to me in detail. Their paper had less than twenty citations as 1995 began.

In 1995 the understanding of string theory took a huge leap forward. That was the moment when it was realized that all five known types of string theory are different sides of the same die — that there’s really only one string theory.  A flood of papers appeared in which certain black holes, and generalizations of black holes — black strings, black surfaces, and the like — played a central role. The relations among these were fascinating, but often confusing.

And then, on October 5, 1995, a paper appeared that changed the whole discussion, forever. It was Joe, explaining D-branes to those of us who’d barely heard of his earlier work, and showing that many of these black holes, black strings and black surfaces were actually D-branes in disguise. His paper made everything clearer, simpler, and easier to calculate; it was an immediate hit. By the beginning of 1996 it had 50 citations; twelve months later, the citation count was approaching 300.

So what? Great for string theorists, but without any connection to experiment and the real world.  What good is it to the rest of us? Patience. I’m just getting to that.

What’s it Got to Do With Nature?

Our current understanding of the make-up and workings of the universe is in terms of particles. Material objects are made from atoms, themselves made from electrons orbiting a nucleus; and the nucleus is made from neutrons and protons. We learned in the 1970s that protons and neutrons are themselves made from particles called quarks and antiquarks and gluons — specifically, from a “sea” of gluons and a few quark/anti-quark pairs, within which sit three additional quarks with no anti-quark partner… often called the `valence quarks’. We call protons and neutrons, and all other particles with three valence quarks, `baryons’. (Note that there are no particles with just one valence quark, or two, or four — all you get is baryons, with three.)

In the 1950s and 1960s, physicists discovered short-lived particles much like protons and neutrons, with a similar sea, but which  contain one valence quark and one valence anti-quark. Particles of this type are referred to as “mesons”.  I’ve sketched a typical meson and a typical baryon in Figure 2.  (The simplest meson is called a “pion”; it’s the most common particle produced in the proton-proton collisions at the Large Hadron Collider.)

 


Fig. 2: Baryons (such as protons and neutrons) and mesons each contain a sea of gluons and quark-antiquark pairs; baryons have three unpaired “valence” quarks, while mesons have a valence quark and a valence anti-quark.  (What determines whether a quark is valence or sea involves subtle quantum effects, not discussed here.)

But the quark/gluon picture of mesons and baryons, back in the late 1960s, was just an idea, and it was in competition with a proposal that mesons are little strings. These are not, I hasten to add, the “theory of everything” strings that you learn about in Brian Greene’s books, which are a billion billion times smaller than a proton. In a “theory of everything” string theory, often all the types of particles of nature, including electrons, photons and Higgs bosons, are tiny tiny strings. What I’m talking about is a “theory of mesons” string theory, a much less ambitious idea, in which only the mesons are strings.  They’re much larger: just about as long as a proton is wide. That’s small by human standards, but immense compared to theory-of-everything strings.

Why did people think mesons were strings? Because there was experimental evidence for it! (Here’s another example.)  And that evidence didn’t go away after quarks were discovered. Instead, theoretical physicists gradually understood why quarks and gluons might produce mesons that behave a bit like strings. If you spin a meson fast enough (and this can happen by accident in experiments), its valence quark and anti-quark may separate, and the sea of objects between them forms what is called a “flux tube.” See Figure 3. [In certain superconductors, somewhat similar flux tubes can trap magnetic fields.] It’s kind of a thick string rather than a thin one, but still, it shares enough properties with a string in string theory that it can produce experimental results that are similar to string theory’s predictions.


Fig. 3: One reason mesons behave like strings in experiment is that a spinning meson acts like a thick string, with the valence quark and anti-quark at the two ends.
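The quantitative hint, for those who want it (the standard textbook statement, not something spelled out in this post), is that mesons lie on nearly linear “Regge trajectories”: their spin grows linearly with their mass squared,

J ≈ α′ M^2 + const,

which is exactly what a rotating relativistic string of constant tension predicts. For a flux tube of tension T one finds α′ = 1/(2πT), and the measured slope corresponds to a tension of roughly 1 GeV per femtometer, the right ballpark for tubes about as long as a proton is wide.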

And so, from the mid-1970s onward, people were confident that quantum field theories like the one that describes quarks and gluons can create objects with stringy behavior. A number of physicists — including some of the most famous and respected ones — made a bolder, more ambitious claim: that quantum field theory and string theory are profoundly related, in some fundamental way. But they weren’t able to be precise about it; they had strong evidence, but it wasn’t ever entirely clear or convincing.

In particular, there was an important unresolved puzzle. If mesons are strings, then what are baryons? What are protons and neutrons, with their three valence quarks? What do they look like if you spin them quickly? The sketches people drew looked something like Figure 3. A baryon would perhaps become three joined flux tubes (with one possibly much longer than the other two), each with its own valence quark at the end.  In a stringy cartoon, that baryon would be three strings, each with a free end, with the strings attached to some sort of junction. This junction of three strings was called a “baryon vertex.”  If mesons are little strings, the fundamental objects in a string theory, what is the baryon vertex from the string theory point of view?!  Where is it hiding — what is it made of — in the mathematics of string theory?


Fig. 4: A fast-spinning baryon looks vaguely like the letter Y — three valence quarks connected by flux tubes to a “baryon vertex”.  A cartoon of how this would appear from a stringy viewpoint, analogous to Fig. 3, leads to a mystery: what, in string theory, is this vertex?!

[Experts: Notice that the vertex has nothing to do with the quarks. It’s a property of the sea — specifically, of the gluons. Thus, in a world with only gluons — a world whose strings naively form loops without ends — it must still be possible, with sufficient energy, to create a vertex-antivertex pair. Thus field theory predicts that these vertices must exist in closed string theories, though they are linearly confined.]


The baryon puzzle: what is a baryon from the string theory viewpoint?

No one knew. But isn’t it interesting that the most prominent feature of this vertex is that it is a location where a string’s end can be trapped?

Everything changed in the period 1997-2000. Following insights from many other physicists, and using D-branes as the essential tool, Juan Maldacena finally made the connection between quantum field theory and string theory precise. He was able to relate strings with gravity and extra dimensions, which you can read about in Brian Greene’s books, with the physics of particles in just three spatial dimensions, similar to those of the real world, with only non-gravitational forces.  It was soon clear that the most ambitious and radical thinking of the ’70s was correct — that almost every quantum field theory, with its particles and forces, can alternatively be viewed as a string theory. It’s a bit analogous to the way that a painting can be described in English or in Japanese — fields/particles and strings/gravity are, in this context, two very different languages for talking about exactly the same thing.

The saga of the baryon vertex took a turn in May 1998, when Ed Witten showed how a similar vertex appears in Maldacena’s examples. [Note added: I had forgotten that two days after Witten’s paper, David Gross and Hirosi Ooguri submitted a beautiful, wide-ranging paper, whose section on baryons contains many of the same ideas.] Not surprisingly, this vertex was a D-brane — specifically a D-particle, an object on which the strings extending from freely-moving quarks could end. It wasn’t yet quite satisfactory, because the gluons and quarks in Maldacena’s examples roam free and don’t form mesons or baryons. Correspondingly the baryon vertex isn’t really a physical object; if you make one, it quickly diffuses away into nothing. Nevertheless, Witten’s paper made it obvious what was going on. To the extent real-world mesons can be viewed as strings, real-world protons and neutrons can be viewed as strings attached to a D-brane.


The baryon puzzle, resolved.  A baryon is made from three strings and a point-like D-brane. [Note there is yet another viewpoint in which a baryon is something known as a skyrmion, a soliton made from meson fields — but that is an issue for another day.]

It didn’t take long for more realistic examples, with actual baryons, to be found by theorists. I don’t remember who found one first, but I do know that one of the earliest examples showed up in my first paper with Joe, in the year 2000.

 

Working with Joe

That project arose during my September 1999 visit to the KITP (Kavli Institute for Theoretical Physics) in Santa Barbara, where Joe was a faculty member. Some time before that I happened to have studied a field theory (called N=1*) that differed from Maldacena’s examples only slightly, but in which meson-like objects do form. One of the first talks I heard when I arrived at KITP was by Rob Myers, about a weird property of D-branes that he’d discovered. During that talk I made a connection between Myers’ observation and a feature of the N=1* field theory, and I had one of those “aha” moments that physicists live for. I suddenly knew what the string theory that describes the N=1*  field theory must look like.

But for me, the answer was bad news. To work out the details was clearly going to require a very difficult set of calculations, using aspects of string theory about which I knew almost nothing [non-holomorphic curved branes in high-dimensional curved geometry.] The best I could hope to do, if I worked alone, would be to write a conceptual paper with lots of pictures, and far more conjectures than demonstrable facts.

But I was at KITP.  Joe and I had had a good personal rapport for some years, and I knew that we found similar questions exciting. And Joe was the brane-master; he knew everything about D-branes. So I decided my best hope was to persuade Joe to join me. I engaged in a bit of persistent cajoling. Very fortunately for me, it paid off.

I went back to the east coast, and Joe and I went to work. Every week or two Joe would email some research notes with some preliminary calculations in string theory. They had such a high level of technical sophistication, and so few pedagogical details, that I felt like a child; I could barely understand anything he was doing. We made slow progress. Joe did an important warm-up calculation, but I found it really hard to follow. If the warm-up string theory calculation was so complex, had we any hope of solving the full problem?  Even Joe was a little concerned.

And then one day, I received a message that resounded with a triumphant cackle — a sort of “we got ’em!” that anyone who knew Joe will recognize. Through a spectacular trick, he’d figured out how to use his warm-up example to make the full problem easy! Instead of months of work ahead of us, we were essentially done.

From then on, it was great fun! Almost every week had the same pattern. I’d be thinking about a quantum field theory phenomenon that I knew about, one that should be visible from the string viewpoint — such as the baryon vertex. I knew enough about D-branes to develop a heuristic argument about how it should show up. I’d call Joe and tell him about it, and maybe send him a sketch. A few days later, a set of notes would arrive by email, containing a complete calculation verifying the phenomenon. Each calculation was unique, a little gem, involving a distinctive investigation of exotically-shaped D-branes sitting in a curved space. It was breathtaking to witness the speed with which Joe worked, the breadth and depth of his mathematical talent, and his unmatched understanding of these branes.

[Experts: It’s not instantly obvious that the N=1* theory has physical baryons, but it does; you have to choose the right vacuum, where the theory is partially Higgsed and partially confining. Then to infer, from Witten’s work, what the baryon vertex is, you have to understand brane crossings (which I knew about from Hanany-Witten days): Witten’s D5-brane baryon vertex operator creates a  physical baryon vertex in the form of a D3-brane 3-ball, whose boundary is an NS 5-brane 2-sphere located at a point in the usual three dimensions. And finally, a physical baryon is a vertex with n strings that are connected to nearby D5-brane 2-spheres. See chapter VI, sections B, C, and E, of our paper from 2000.]

Throughout our years of collaboration, it was always that way when we needed to go head-first into the equations; Joe inevitably left me in the dust, shaking my head in disbelief. That’s partly my weakness… I’m pretty average (for a physicist) when it comes to calculation. But a lot of it was Joe being so incredibly good at it.

Fortunately for me, the collaboration was still enjoyable, because I was almost always able to keep pace with Joe on the conceptual issues, sometimes running ahead of him. Among my favorite memories as a scientist are moments when I taught Joe something he didn’t know; he’d be silent for a few seconds, nodding rapidly, with an intent look — his eyes narrow and his mouth slightly open — as he absorbed the point.  “Uh-huh… uh-huh…”, he’d say.

But another side of Joe came out in our second paper. As we stood chatting in the KITP hallway, before we’d even decided exactly which question we were going to work on, Joe suddenly guessed the answer! And I couldn’t get him to explain which problem he’d solved, much less the solution, for several days!! It was quite disorienting.

This was another classic feature of Joe. Often he knew he’d found the answer to a puzzle (and he was almost always right), but he couldn’t say anything comprehensible about it until he’d had a few days to think and to turn his ideas into equations. During our collaboration, this happened several times. (I never said “Use your words, Joe…”, but perhaps I should have.) Somehow his mind was working in places that language doesn’t go, in ways that none of us outside his brain will ever understand. In him, there was something of an oracle.

Looking Toward The Horizon

Our interests gradually diverged after 2006; I focused on the Large Hadron Collider [also known as the Large D-brane Collider], while Joe, after some other explorations, ended up thinking about black hole horizons and the information paradox. But I enjoyed his work from afar, especially when, in 2012, Joe and three colleagues (Ahmed Almheiri, Don Marolf, and James Sully) blew apart the idea of black hole complementarity, widely hoped to be the solution to the paradox. [I explained this subject here, and also mentioned a talk Joe gave about it here.]  The wreckage is still smoldering, and the paradox remains.

Then Joe fell ill, and we began to lose him, at far too young an age.  One of his last gifts to us was his memoirs, which taught each of us something about him that we didn’t know.  Finally, on Friday last, he crossed the horizon of no return.  If there’s no firewall there, he knows it now.

What, we may already wonder, will Joe’s scientific legacy be, decades from now?  It’s difficult to foresee how a theorist’s work will be viewed a century hence; science changes in unexpected ways, and what seems unimportant now may become central in future… as was the path for D-branes themselves in the course of the 1990s.  For those of us working today, D-branes in string theory are clearly Joe’s most important discovery — though his contributions to our understanding of black holes, cosmic strings, and aspects of field theory aren’t soon, if ever, to be forgotten.  But who knows? By the year 2100, string theory may be the accepted theory of quantum gravity, or it may just be a little-known tool for the study of quantum fields.

Yet even if the latter were to be string theory’s fate, I still suspect it will be D-branes that Joe is remembered for. Because — as I’ve tried to make clear — they’re real.  Really real.  There’s one in every proton, one in every neutron. Our bodies contain them by the billion billion billions. For that insight, that elemental contribution to human knowledge, our descendants can blame Joseph Polchinski.

Thanks for everything, Joe.  We’ll miss you terribly.  You so often taught us new ways to look at the world — and even at ourselves.


 

by Matt Strassler at February 05, 2018 03:59 PM