Particle Physics Planet


June 20, 2019

Lubos Motl - string vacua and pheno

Baer et al.: stringy naturalness prefers less usual but accessible SUSY scenarios, risky electroweak symmetry breaking
In early February, I discussed a paper by Howard Baer and four co-authors which took some steps toward updating the estimates of superpartner masses and other parameters of new physics – by replacing naturalness with stringy naturalness, which takes "the number of string vacua with certain properties" as a factor that makes a vacuum more likely.



Ace of Base, Living in Danger – recorded decades before the intense Islamization of Sweden began. In the eyes of a stranger, such as a Czech, Swedes are surely living in danger. The relevance will become clear later.

They have claimed that this better notion of naturalness naturally drives the cubic couplings \(A\) to large values (because those are more represented among the string vacua, by a power law), which means a large mixing in the top squark sector and stop masses that may exceed \(1\TeV\). Also, the other scalars (the first- and second-generation squarks...) are "tens of \({\rm TeV}\) in mass". The lightest two neutralinos should be close to each other, with a mass difference around \(5\GeV\). Most encouraging is the derivation that the Higgs mass could be pushed up towards the observed \(125\GeV\), give or take a \({\rm GeV}\).
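
To make the power-law comment concrete: in the Douglas-style counting of vacua that Baer et al. employ, the statistical weight grows with the soft SUSY-breaking terms, schematically

\[ f_{\rm string}(m_{\rm soft}) \propto m_{\rm soft}^{\,2n_F + n_D - 1}, \]

where \(n_F\) and \(n_D\) count the \(F\)- and \(D\)-term SUSY-breaking fields in the hidden sector (I am quoting the exponent from memory, so treat it as an assumption to be checked against the paper). Larger soft terms such as \(A\) are therefore statistically favored – until they hit the "deadly" electroweak vetoes discussed below.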



They have continued to publish papers – roughly one paper per month – and the first hep-ph paper today comes from a similar author team, too.
Naturalness versus stringy naturalness (with implications for collider and dark matter searches)
I improved the title by closing the parenthesis. Baer, Barger, and Salam review some of the previous claims about stringy naturalness – and look at which general kinds of supersymmetry scenarios are likely according to it.



String theory and supersymmetry are friends – but they are independent and different, too. Any person who does some actual research about the deeper origin of the observed laws of particle physics (currently the Standard Model) must have some theoretical basis to produce estimated probabilities of various statements about new particles and similar things. If you refuse to consider any such measures or probabilities, it just means that you are a complete non-expert in this part of physics and you shouldn't contaminate the discussions about these topics by your noise.

For bottom-up model builders, the "practical naturalness" has been the canonical framework to think about such matters. "Practical naturalness" says that none of the independent terms that add up to a quantity should be much greater in magnitude than the sum – the total quantity. It's a simple, partially justified rule, but it's also too heuristic and may be wrong or highly inaccurate, especially in some special (and perhaps even not so special) conditions.
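
As a formula: if an observable is computed as a sum of independent contributions, \(O = \sum_i c_i\), practical naturalness demands that the fine-tuning measure

\[ \Delta \equiv \frac{\max_i |c_i|}{|O|} \]

not be much larger than one. (This schematic \(\Delta\) is my shorthand for the idea; Baer et al. use a specific incarnation, \(\Delta_{\rm EW}\), which applies this to the contributions to \(m_Z^2/2\).)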

String theory is bound to modify these rules. As I said, it seems that string theory wants the \(A\) cubic couplings to be high and so on. This has other implications. We're being pushed to some more extreme corners of the parameter space – more extreme according to the previous notion of naturalness – and the counting is a bit different in these corners. In particular, some masses may be rather high while this doesn't imply too big a fine-tuning, and so on.

Non-stringy supersymmetry model builders have often considered subsets of the MSSM parameter space such as the CMSSM (Constrained Minimal Supersymmetric Standard Model) and mSUGRA (minimal supergravity). These are obvious enough choices to reduce the number of soft parameters in the MSSM with broken supersymmetry. However, Baer et al. present evidence that such vacua are actually rather rare. High-scale SUSY breaking models are more frequent, but only some kinds of them. You need to read the paper to see the fate of PeV SUSY, mini-split SUSY, spread SUSY, and others.

The stringy counting arguments seem to prefer light enough higgsinos (and the related \(m_{\rm weak}\) parameter) in the vicinity of a hundred or hundreds of \({\rm GeV}\). On the other hand, gluinos and other strongly interacting superpartners are said to be out of the LHC reach.

Concerning the Higgs potential which is what breaks the electroweak symmetry, Baer et al. claim that the stringy naturalness pressures push the Universe to "living dangerously". It means that the parameters of these potentials are such that some of the "deadly" features of the potential are relatively nearby. By "deadly" features, I mean potentials that break the electromagnetic \(U(1)\); or they break the color \(SU(3)\); or that don't break the electroweak symmetry at all; or that produce a pocket universe weak scale of a magnitude that is clearly incompatible with the observed one.
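
To give one quantitative example of such a "deadly" boundary (this is the classic tree-level rule of thumb for the MSSM, not a condition extracted from the paper): charge- and color-breaking minima along the stop direction are avoided roughly when

\[ A_t^2 \lesssim 3\left( m_{Q_3}^2 + m_{U_3}^2 + m_{H_u}^2 + \mu^2 \right), \]

so the large trilinear couplings \(A_t\) favored by the stringy counting push the vacuum right up against the charge-and-color-breaking cliff – which is the quantitative sense of "living dangerously".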

If they can get this preference for the dangerous life, couldn't they also explain by the stringy statistical arguments why the whole electroweak vacuum seems – due to the other minimum of the Higgs potential etc. – to be metastable and almost unstable? Why the quadratic terms in the Higgs potential, when run back up to the Planck scale, seem to be zero – why the Standard Model seems to be "conformal" in the UV? And other coincidences that people have noticed...

At any rate, I find this research inconclusive but very interesting. The reasoning is imperfect but it's still much better than no reasoning or insisting on pure prejudices. And this reasoning indicates that the probability that some new particles such as higgsinos are just "hundreds of \({\rm GeV}\) in mass" and therefore (almost) accessible by the LHC is surely comparable to 50% or higher. The people who claim such a probability to be close to zero are just deluding themselves – they are defending themselves against a totally real possibility that they totally arbitrarily labeled blasphemous.

by Luboš Motl (noreply@blogger.com) at June 20, 2019 07:02 AM

June 19, 2019

Emily Lakdawalla - The Planetary Society Blog

Here's Our First Look at LightSail 2 Installed on SpaceX's Falcon Heavy Rocket
LightSail 2 is one of 24 spacecraft hitching a ride to orbit as part of the U.S. Air Force's STP-2 mission.

June 19, 2019 11:40 PM

Emily Lakdawalla - The Planetary Society Blog

OSIRIS-REx Sets Low-Orbit Record, Enters New Orbital B Mission Phase
Last week, NASA’s OSIRIS-REx asteroid sample-return mission announced that they had achieved an orbit above asteroid Bennu with an altitude of only 680 meters. Now they are surveying for landing sites and have invited the public's help.

June 19, 2019 08:49 PM

Peter Coles - In the Dark

While Tories distract us with Brexit, the NHS has just slipped out its first price list for treatments

Don’t say you weren’t warned.

Pride's Purge

Please don’t say you weren’t warned.

Because you’ve been warned time and time again that the Tories are stealthily privatising the NHS.

This doesn’t mean just handing over hospitals and NHS services to private firms.

It means stealthily introducing actual charges to NHS patients at point of need.

This is all totally ignored by the mainstream press of course.

NHS trusts are now so confident they’ll get away with it, they are openly publishing the very first price lists since the formation of the NHS – for NHS operations, NHS procedures and NHS consultations (see here):

[Image: the first NHS price list for operations, procedures and consultations]

PLEASE SHARE if you care about the NHS. Thanks.


by telescoper at June 19, 2019 08:01 PM

Axel Maas - Looking Inside the Standard Model

Creativity in physics
One of the most widespread misconceptions about physics, and the other natural sciences, is that they are quite the opposite of art: precise, fact-driven, logical, and systematic, while art is perceived as emotional, open, creative, and inspired.

Of course, physics has experiments, has data, has math. All of that has to fit together perfectly, and there is no room for sloppiness. Logical deduction is central to what we do. But this is not all. In fact, these parts are more like the handiwork. Just like a painter needs to be able to draw a line, and a writer needs to be able to write coherent sentences, we need to be able to calculate, build, check, and infer. But just like the act of drawing a line or writing a sentence is not yet what we recognize as art, solving an equation is not yet physics.

We are able to solve an equation because we learned how during our studies. We learned what was known before. Thus, this is our tool set, just like people read books before they start writing one. But when we actually do research, we face the fact that nobody knows what is going on. In fact, quite often we do not even know what an adequate question to pose would be. We just stand there, baffled, before a couple of observations. That is where the same act of creativity has to set in as when writing a book or painting a picture. We need an idea, an inspiration, for how to start. And then afterwards, just like the writer writes page after page, we add various pieces to this idea until we have a hypothesis of what is going on. This is like having the first draft of a book. Then the real grinding starts, where all our education comes to bear. Then we have to calculate and so on, just like the writer has to go and fix the draft to turn it into a book.

You may now wonder whether this kind of creativity is limited to the great minds, and to the inception of a whole new step in physics. No, far from it. On the one hand, physics is not the work of lone geniuses. Sure, occasionally somebody has the right idea. But this is usually just the one idea which turned out to be correct in the end, while all the other good ideas, which other people had, just turned out to be incorrect – and you never hear of them because of this. On the other hand, every new idea, as said above, eventually requires everything that was done before. And more than that. Creativity is rarely born of being a hermit. It often comes from inspiration by others. Talking to each other, throwing fragments of ideas at each other, and mulling over consequences together is what creates the soil where creativity sprouts. All those with whom you have interacted have contributed to the birth of the idea you have.

This is why the genuinely big breakthroughs have often resulted from so-called blue-sky or curiosity-driven research. It is not a coincidence that the freedom to do whatever kind of research you think is important is an almost sacred privilege of hired scientists. Or should be. Fortunately, I am privileged enough, especially in the European Union, to have this privilege. In other places, you are often shackled by all kinds of external influences, down to political pressure to do only politically acceptable research. And this can never spark the creativity you need to make something genuinely new. If you are afraid of what you say, you start to restrain yourself, and ultimately anything which is not already established as acceptable becomes unthinkable. This may not always be as obvious as outright political pressure. But if whether you are hired, or whether your job is safe, starts to depend on it, you start going for acceptable research, because failure with something new would cost you dearly. And with the competitive funding currently prevalent, particularly for people without permanent positions, this starts to become a serious obstruction.

As a consequence, real breakthrough research can neither be planned nor done on purpose. You can only plan the grinding part. And failure will be part of any creative process. Though you actually never really fail, because you always learn how something does not work. That is one of the reasons why I strongly want failures to be made publicly available as well. They are as important to progress as successes, by reducing the possibilities. Not to mention the amount of researchers' lifetime wasted because they fail with the same attempt, not knowing that others failed before them.

And then, perhaps, a new scientific insight arises. And, more often than not, some great technology arises along the way – not intentionally, but because it was necessary to follow one's creativity. That is actually where most technological leaps came from. So, real progress in physics, in the end, is made from about a third craftsmanship, a third communication, and a third creativity.

So, after all this general stuff, how do I stay creative?

Well, first of all, I was and am sufficiently privileged. I could afford to start out by just following my ideas: either it would keep me in business, or I would have to find a non-science job. But this only worked out because of my personal background, because I could have afforded a couple of months with no income while finding a job, and because I had an education which almost guarantees me a decent job eventually. And an education of this quality I could only afford because of my personal background. Not to mention that, as a white male, I had no systemic barriers against me. So, yes, privilege plays a major role.

The other part was that I learned, more and more, that it is not effort that counts, but effect. It took me years. But eventually I understood that a creative idea cannot be forced by burying myself in work. Time off is just as important for me. It took me until close to the end of my PhD to realize that. But not working overtime, and enjoying free days and holidays, is for me as important for the creative process as any other condition. Not to mention that I also do all the non-creative chores much more efficiently when well rested, which eventually leaves me with more time to ponder creatively and do research.

And the last ingredient is really exchange. I have now had the opportunity, during a sabbatical, to go to different places and exchange ideas with a lot of people. This gave me what I needed to acquire a new field and already have new ideas for it. It is the possibility to sit down with people for some hours, especially in a nicer and more relaxing environment than an office, and just discuss ideas. That is also what I like most about conferences. And it is one of the reasons I think conferences will always be necessary, even though we need to make the travel there and back ecologically much more viable, and restrict ourselves to sufficiently close ones until that is possible.

Sitting down over a good cup of coffee or a nice meal and just discussing really jump-starts my creativity. Even sitting with a cup of good coffee in a nice café somewhere and just thinking does wonders for me in solving problems. And in that respect, it seems, I am not so different from the artists after all.

by Axel Maas (noreply@blogger.com) at June 19, 2019 02:53 PM

Peter Coles - In the Dark

Atmospheric Muons as an Imaging Tool

The other day I came across an interesting paper with the above title on the arXiv. The abstract reads:

Imaging methods based on the absorption or scattering of atmospheric muons, collectively named under the neologism “muography”, exploit the abundant natural flux of muons produced from cosmic-ray interactions in the atmosphere. Recent years have seen a steep rise in the development of muography methods in a variety of innovative multidisciplinary approaches to study the interior of natural or man-made structures, establishing synergies between usually disconnected academic disciplines such as particle physics, geology, and archaeology. Muography also bears promise of immediate societal impact through geotechnical investigations, nuclear waste surveys, homeland security, and natural hazard monitoring. Our aim is to provide an introduction to this vibrant research area, starting from the physical principles at the basis of the methods and reviewing several recent developments in the application of muography methods to specific use cases, without any pretence of exhaustiveness. We then describe the main detector technologies and imaging methods, including their combination with conventional techniques from other disciplines, where appropriate. Finally, we discuss critically some outstanding issues that affect a broad variety of applications, and the current state of the art in addressing them.

This isn’t a new field, but it’s new to me and this paper provides a very nice introduction to it. I’ve taken the liberty of reproducing Figure 3 here to show one application of ‘muography’.

 

by telescoper at June 19, 2019 09:48 AM

June 18, 2019

Christian P. Robert - xi'an's og

likelihood-free approximate Gibbs sampling

“Low-dimensional regression-based models are constructed for each of these conditional distributions using synthetic (simulated) parameter value and summary statistic pairs, which then permit approximate Gibbs update steps (…) synthetic datasets are not generated during each sampler iteration, thereby providing efficiencies for expensive simulator models, and only require sufficient synthetic datasets to adequately construct the full conditional models (…) Construction of the approximate conditional distributions can exploit known structures of the high-dimensional posterior, where available, to considerably reduce computational overheads”

Guilherme Souza Rodrigues, David Nott, and Scott Sisson have just arXived a paper on approximate Gibbs sampling. Since this comes a few days after we posted our own version, here are some of the differences I could spot in the paper:

  1. Further references to earlier occurrences of Gibbs versions of ABC, esp. in cases when the likelihood function factorises into components and allows for summaries with lower dimensions. And even to ESP.
  2. More an ABC version of Gibbs sampling than a Gibbs version of ABC, in that approximations to the conditionals are first constructed and then used with no further corrections (see the sketch after this list).
  3. Inherently related to regression post-processing à la Beaumont et al. (2002), in that the regression model is the starting point for designing an approximate full conditional, conditional on the “other” parameters and on the overall summary statistic. The construction of the approximation is far from automated and may involve neural networks or other machine learning estimates.
  4. As a consequence of the above, a preliminary ABC step to design the collection of approximate full conditionals using a single and all-purpose multidimensional summary statistic.
  5. Once the approximations are constructed, no further pseudo-data is generated.
  6. Drawing from the approximate full conditionals is done exactly, possibly via a bootstrapped version.
  7. Handling a highly complex g-and-k dynamic model with 13,140 unknown parameters, requiring a ten-day simulation.
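
To make item 2 concrete, here is a minimal self-contained sketch of the construct-then-recycle logic in R. It is my own toy illustration, not the authors' code: the Gaussian simulator, the uniform priors, and the plain linear regressions (where the paper would allow neural networks or other machine-learning fits) are all assumptions made for brevity.

set.seed(1)
# toy simulator (hypothetical stand-in): y ~ N(mu,sigma), theta=(mu,log sigma)
simul=function(theta) rnorm(30,theta[1],exp(theta[2]))
summaries=function(y) c(s1=mean(y),s2=log(sd(y)))
s.obs=summaries(rnorm(30,1,1.5)) # summaries of the "observed" sample
# step 1: a single batch of synthetic (parameter,summary) pairs from the prior
N=5e3
th=cbind(mu=runif(N,-5,5),ls=runif(N,-2,2))
S=t(apply(th,1,function(x) summaries(simul(x))))
D=data.frame(mu=th[,1],ls=th[,2],s1=S[,1],s2=S[,2])
# step 2: regression-based approximations of the two full conditionals
fit.mu=lm(mu~ls+s1+s2,data=D)
fit.ls=lm(ls~mu+s1+s2,data=D)
# step 3: approximate Gibbs updates, with no further calls to simul()
niter=2e3
out=matrix(0,niter,2)
for(i in 2:niter){
  nd=data.frame(ls=out[i-1,2],s1=s.obs[1],s2=s.obs[2])
  out[i,1]=predict(fit.mu,nd)+rnorm(1,0,summary(fit.mu)$sigma)
  nd=data.frame(mu=out[i,1],s1=s.obs[1],s2=s.obs[2])
  out[i,2]=predict(fit.ls,nd)+rnorm(1,0,summary(fit.ls)$sigma)
}
colMeans(out[-(1:500),]) # crude approximate posterior means of (mu,log sigma)

The structural point of the sketch is that the simulator is called only in the preliminary batch, never inside the Gibbs loop.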

“In certain circumstances it can be seen that the likelihood-free approximate Gibbs sampler will exactly target the true partial posterior (…) In this case, then Algorithms 2 and 3 will be exact.”

Convergence and coherence are handled in the paper by setting the algorithm(s) up as noisy Monte Carlo versions, à la Alquier et al., although the issue of incompatibility between the full conditionals is acknowledged, with the main reference being the finite state space analysis of Chen and Ip (2015). It thus remains unclear whether or not the Gibbs samplers implemented there converge and, if they do, what the significance of the stationary distribution is.

by xi'an at June 18, 2019 10:19 PM

Marco Frasca - The Gauge Connection

Cracks in Witten’s index theorem?

In these days, a rather interesting paper (see here for the preprint) appeared in Physical Review Letters. The authors study a Wess-Zumino model with \({\cal N}=1\) supersymmetry, the prototype of any further SUSY model, and show that there exists an anomaly at one loop in perturbation theory that breaks supersymmetry. This is rather shocking, as the model is supersymmetric at the classical level and, in agreement with Witten's index theorem, no breaking of supersymmetry should ever be observed. Indeed, the authors, in the conclusions, correctly ask how Witten's theorem copes with this rather strange behavior. Of course, Witten's theorem is correct, so the question comes out naturally and is very interesting for further studies.
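
For readers needing context, the object at stake is the Witten index (a standard definition, not something taken from the paper under discussion):

\[ \Delta = {\rm Tr}\,(-1)^F\, e^{-\beta H}, \]

the number of bosonic minus fermionic zero-energy states. It is invariant under smooth deformations of the theory, and \(\Delta\neq 0\) implies the existence of a supersymmetric ground state, i.e. that supersymmetry cannot be spontaneously broken – which is why an anomaly-induced breaking in such a model is so surprising.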

This result is important because I have run into a similar situation with the Wess-Zumino model, in a couple of papers. The first one (see here and here) was published and shows how the classical Wess-Zumino model, in a strong coupling regime, breaks supersymmetry. Therefore, I asked a question similar to the one above: how do quantum corrections recover Witten's theorem? The second one remained a preprint (see here). I tried to send it to Physics Letters B, but the referee, without any check of the mathematics, just claimed that Witten's theorem forbade my conclusions. The Editor asked me to withdraw the paper for this very reason. It was a very strong one indeed. So, I never submitted this paper again and just checked the classical case, where I was luckier.

So, my question is still alive: does supersymmetry carry in itself the seeds of its own breaking?

This is really important in view of the fact that the Minimal Supersymmetric Standard Model (MSSM), now in disgrace after the LHC results, can have a dark side in its soft supersymmetry breaking sector. This, in turn, could entail a wrong understanding of where the superpartners should be found after the breaking. Anyway, it is really something exciting already at the theoretical level. We are just stressing Witten's index theorem in search of answers.

by mfrasca at June 18, 2019 03:06 PM

Christian P. Robert - xi'an's og

talk at CISEA 2019

Here are my slides for the overview talk I am giving at CISEA 2019, in Abidjan, closely resembling earlier talks, except for the second slide!

by xi'an at June 18, 2019 12:18 PM

Lubos Motl - string vacua and pheno

Acharya: string/M-theory probably implies low-energy SUSY
Bobby Acharya is a versatile fellow. Whenever you search for the author Acharya, B on Inspire, you will find out that "he" has written 1,527 papers which have earned over 161,000 citations – which would trump the 144,000 citations of Witten, E. Much of this weirdly huge number actually has some merit because Acharya is both a highly mathematical theorist – an expert in physics involving complicated extra-dimensional manifolds – and a member of the ATLAS experimental team at the LHC.

Today, he published
Supersymmetry, Ricci Flat Manifolds and the String Landscape.
String theory and supersymmetry are "allies" most of the time. Supersymmetry is a symmetry that first emerged – at least in the Western world – when Pierre Ramond was incorporating fermions into the stringy world sheet. (In Russia, SUSY was discovered independently through purely mathematical efforts to classify Lie-algebra-like physical symmetries.) Also, most of the anti-string hecklers tend to be anti-supersymmetry hecklers as well, and vice versa.

On the other hand, string theory and supersymmetry are somewhat independent. Bosonic string theory in \(D=26\) has no SUSY – and SUSY is also broken in type 0 theories, some non-supersymmetric heterotic string theories, non-critical string theory, and more. Also, supersymmetry may be incorporated into non-gravitational field theories, starting with the Wess-Zumino model and the MSSM, which obviously aren't string vacua – because string vacua make gravity unavoidable.



Some weeks ago, Alessandro Strumia was excited and told us that he wanted to become a non-supersymmetric stringy model builder because it was very important to satisfy one-half of the anti-string, anti-supersymmetric hecklers. It's a moral duty to abandon supersymmetry, he basically argued, so string theorists must do it as well and he wants to lead them. He didn't use these exact words but it was the spirit.



Well, string vacua with low-energy supersymmetry are rather well understood and many of them have matched the observed phenomena with an impressive (albeit, so far, not perfect) precision – while those without supersymmetry seem badly understood and their agreement with the observations hasn't been demonstrated very precisely. It's not surprising, for many reasons. One of them is that supersymmetry makes physics more stable, promising, and free of some hierarchy problems, which is good phenomenologically; as well as full of cancellations and easier to calculate, which is good from a mathematical viewpoint. Oh, SUSY, with a pictorial walking.

It is totally plausible that supersymmetry at low enough energies is an unavoidable consequence of string/M-theory – assuming some reasonably mild assumptions about the realism of the models. This belief was surely shared e.g. by my adviser Tom Banks – one of his prophecies used to be that this assertion (SUSY is unavoidable in string theory or quantum gravity) would eventually be proven. Acharya was looking into this question.

He focused on "geometric" vacua that may be described by 10D, 11D, or 12D (F-theory...) supergravity – which may then be dimensionally reduced to a four-dimensional theory. Assuming that these high-dimensional supergravity theories are good approximations at some level, the statement that "supersymmetry is unavoidable in string theory" becomes basically equivalent to the statement that "manifolds used for stringy extra dimensions require covariantly constant spinors".

Calabi-Yau three-folds – which, when used in heterotic string theory, gave us the first (and still excellent) class of realistic string compactifications in 1985 – are manifolds of \(SU(3)\) holonomy. This holonomy guarantees the preservation of 1/4 of the supercharges that have existed in the higher-dimensional supergravity theory in the flat space because the generic holonomy \(SU(4)\sim SO(6)\) of the orientable six-dimensional manifolds is reduced to \(SU(3)\) where only 3 spinorial components out of 4 are randomly rotated into each other (after any closed parallel transport) while the fourth one remains fixed.
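
In group-theoretical language (a standard decomposition, spelled out for clarity): the spinor of \(SO(6)\) is the fundamental \(\mathbf{4}\) of \(SU(4)\), and under the \(SU(3)\) subgroup it splits as

\[ \mathbf{4} \to \mathbf{3} \oplus \mathbf{1}, \]

with the singlet being precisely the covariantly constant spinor whose existence guarantees the surviving supercharges.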

In table 1, Acharya lists all the relevant holonomy groups. If you forgot, the holonomy group is the group of all possible rotations of the tangent space that is induced by a parallel transport around any closed curve.

\(SO(N)\) is the generic holonomy of an \(N\)-dimensional real manifold. It would be \(O(N)\) if the manifold were unorientable. This transformation mixes the spinors in the most general way so there are no covariantly constant spinors. But there could nevertheless be Ricci-flat manifolds of this generic holonomy. The three question marks are written on that first line of his table because they exactly correspond to the big question he wants to probe in this paper.

Now, in real dimensions \(n=2k\), \(n=4k\), \(n=7\), and \(n=8\), one has the holonomies \(SU(k)\), \(USp(2k)\), \(G_2\), and \(Spin(7)\), respectively. All these special holonomies guarantee covariantly constant spinors i.e. some low-energy supersymmetry; and the Ricci-flatness of the metric, too. On the other hand, one may also "deform" the \(SU(k)\) and \(USp(2k)\) holonomies to \(U(k)\) and \(USp(2k)\times Sp(1)\), respectively, and this deformation kills both the covariantly constant spinors (i.e. SUSY) as well as the Ricci-flatness.

Note that string/M-theory allows you to derive Einstein's equations of general relativity from a more fundamental starting point. In the absence of matter sources (i.e. in the vacuum), Einstein's equations reduce to Ricci-flatness i.e. \(R_{\mu\nu}=0\). This is relevant for the curved 4D spacetime that everyone knows. But it's also nice for the extra dimensions that produce the diversity of low-energy fields and particles.

So whether you find it beautiful or not – and all physicists with good taste find it beautiful (and beauty is very important; I must assure you of this basic fact because you may have been misled by an ugly pundit) – string/M-theory makes it important to study Ricci-flat manifolds: both manifolds including the 4 large dimensions that we know, as well as the compactified extra dimensions. The former are relevant for the 4D gravity we know; the latter are more relevant for the rest of physics.

Acharya divides the question "whether the Ricci-flat manifolds without covariantly constant spinors exist" into two groups:

* simply connected manifolds
* simply disconnected (i.e. not simply connected) manifolds

In the first group, he doesn't quite find a proof, but he seems to believe that the conjecture that "no such compact, simply connected, Ricci flat manifolds without SUSY exist" is promising.

In the second group, there exist counterexamples. After all, you may take quotients (orbifolds) of some supersymmetric manifolds – but the orbifolding maps the spinors to others in a generic enough way which breaks all of supersymmetry. So SUSY-breaking, Ricci-flat compactifications exist.

However, at the same moment, Acharya points out that all such simply disconnected Ricci-flat manifolds seem to suffer from an instability – a generalization of Witten's "bubble of nothing". It's given by a Coleman-style instanton that has a hole inside. The simplest vacuum with this Witten instability is the Scherk-Schwarz compactification on a circle with antiperiodic boundary conditions for the fermions (the easiest quotient-like way to break all of SUSY, because a covariantly constant spinor that is antiperiodic must vanish). The antiperiodic boundary conditions are perfect for closing a cylinder into a cigar (a good shape for Coleman-like instantons in the Euclideanized spacetime, especially because of Coleman's obsessive smoking) on which the spinors are well-behaved.

So the corresponding history in the Minkowski space looks like a vacuum decay – except that the new vacuum in the "ball inside" – which is growing almost by the speed of light – isn't really a vacuum at all. It's "emptiness" that doesn't even have a vacuum in it. The radius of the circular dimension – which is \(a\to a_0\) for \(r\to\infty\) – continuously approaches \(r=0\) on the boundary of Witten's bubble of nothing – basically on \(|\vec r|=ct\) where \(c\) is the speed of light – and it stays zero for \(|\vec r|\lt ct\) which means that there's no space for \(|\vec r| \lt ct\) at all.
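
For orientation, in the simplest Scherk-Schwarz case the instanton is just the five-dimensional Euclidean Schwarzschild geometry (I quote it from memory from Witten's 1982 paper, so treat the details as a sketch):

\[ ds^2 = \frac{dr^2}{1 - R^2/r^2} + \left( 1 - \frac{R^2}{r^2} \right) d\phi^2 + r^2\, d\Omega_3^2, \]

where \(\phi\) is the Kaluza-Klein circle with asymptotic circumference \(2\pi R\). The circle smoothly shrinks to zero size at \(r=R\), which is the geometric reason why there is literally no space at smaller \(r\); continuing the \(S^3\) to Lorentzian signature turns \(r=R\) into the expanding bubble wall described above.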

Such instabilities are brutal and Acharya basically proves that these instabilities make all Ricci-flat, simply disconnected, non-supersymmetric stringy compactifications unstable. We see that our Universe doesn't decay instantly so we can't live in such a vacuum. Instead, the extra dimensions should either be supersymmetric and simply disconnected; or they should be simply connected. When they're simply connected, the conjecture – which has passed lots of tests and may be proven – says that these compactifications imply low-energy supersymmetry, anyway.

If this conjecture happened to be wrong, it would seem likely to Acharya – and me – that the number of non-supersymmetric, simply connected, Ricci-flat compact manifolds would be much higher than the number of the supersymmetric Ricci-flat solutions. If it were so, SUSY breaking could be "generic" in string/M-theory, and SUSY breaking could actually become a rather solid prediction of string/M-theory. (Well, the population advantage should also beat the factor of \(10^{34}\) to persuade us that we don't need to care about the non-supersymmetric vacua's hierarchy problem.) Note that with some intense enough mathematical work, it should be possible to settle which of these two predictions is actually being made by string theory.

Acharya has only considered "geometric/supergravity" vacua. It's possible that some non-geometric vacua not admitting a higher-dimensional supergravity description are important or numerous or prevailing – and if it is so, the answer about low-energy SUSY could be anything and Acharya's work could become useless for finding this answer.

But some geometric approximation may exist for almost all vacua – dualities indicate that there are often several geometric starting points to understand a vacuum, so why should the number be zero too often? – and the incomplete evidence indicates that low-energy SUSY is mathematically needed in stable enough string vacua. When I say low-energy SUSY, it may be broken at \(100\TeV\) or anywhere else. But it should be a scale lower than the Kaluza-Klein scale of the extra dimensions – and maybe than some other, even lower, scales.

by Luboš Motl (noreply@blogger.com) at June 18, 2019 11:55 AM

Peter Coles - In the Dark

Physics Lectureship in Maynooth!

Every now and then I have the opportunity to use the medium of this blog to draw the attention of my vast readership (both of them) to employment opportunities. Today is another such occasion, so I am happy to point out that my colleagues in the Department of Experimental Physics are advertising a lectureship. For full details, see here, but I draw your attention in particular to this paragraph:

The Department of Experimental Physics is seeking candidates with the potential to build on the research strengths of the Department in the areas of either terahertz optics or atmospheric physics. The Department is especially interested in candidates with research experience that could broaden the scope of current research activity. This could include for example terahertz applications in space, imaging, remote sensing and communications or applications of atmospheric physics related to monitoring and modelling climate change. It would be an advantage if the candidate’s research involved international collaboration with the potential for interdisciplinary initiatives with other University institutes and departments.

The deadline for applications is Sunday 28 July 2019 at 11.30pm.

by telescoper at June 18, 2019 10:06 AM

June 17, 2019

Christian P. Robert - xi'an's og

Le Monde puzzle [#1104]

A palindromic Le Monde mathematical puzzle:

In a monetary system where all palindromic amounts between 1 and 10⁸ have a coin, find the numbers less than 10³ that cannot be paid with less than three coins. Find if 20,191,104 can be paid with two coins. Similarly, find if 11,042,019 can be paid with two or three coins.

Which can be solved in a few lines of R code:

coin=sort(c(1:9,(1:9)*11,outer((1:9)*101,(0:9)*10,"+"))) # all palindromic coins below 10³
amounz=sort(unique(c(coin,as.vector(outer(coin,coin,"+"))))) # amounts payable with one or two coins
amounz=amounz[amounz<1e3]
setdiff(1:999,amounz) # amounts below 10³ requiring at least three coins

and produces 9 amounts that cannot be paid with one or two coins.

21 32 43 54 65 76 87 98 201

It is also easy to check that three coins are enough to cover all amounts below 10³. For the second question, starting with n¹=20,188,102, a simple downward search of palindromic pairs (n¹,n²) such that n¹+n²=20,191,104 led to n¹=16,755,761 and n²=3,435,343. And starting with 11,033,011, the same search does not produce any two-coin solution for 11,042,019, while there are three coins such that n¹+n²+n³=11,042,019, for instance n¹=11,022,011, n²=20,002, and n³=6.
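
A naive R implementation of that downward search could look as follows (my own brute-force sketch, slow but adequate for a one-off check):

is.pal=function(n){s=strsplit(as.character(n),"")[[1]];all(s==rev(s))}
target=20191104
for(n1 in (target-1):1) # scan n1 downward, stop at the first palindromic pair
  if(is.pal(n1)&&is.pal(target-n1)){print(c(n1,target-n1));break}

which reproduces the pair 16,755,761 and 3,435,343 after a few million iterations.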

by xi'an at June 17, 2019 10:19 PM

Lubos Motl - string vacua and pheno

μνSSM produces nice neutrino masses, new 96 GeV Higgs
The most interesting new hep-ph preprint is
Precise prediction for the Higgs-Boson Masses in the μνSSM with three right-handed neutrino superfields (58 pages)
by Sven Heinemeyer (CERN) and Biekötter+Muñoz (Spain) – BHM. They discuss some remarkable combined virtues of a non-minimal supersymmetric model of particle physics.



Note that none of the elementary particles observed so far – bosons or fermions – seems to be a superpartner of another observed fermion or boson, respectively. But for theoretical reasons, it is more likely that these superpartners exist and a supersymmetric Standard Model is a more accurate description of Nature than the Standard Model – the minimal model encompassing the currently observed particles.



From a string theorist's, top down perspective, there may exist many different supersymmetric models that are relevant at low energies (energies accessible by colliders), with or without grand unification, with or without various hidden sectors. String theory or more generally quantum gravity surely guarantees an infinite number of very massive particle species – that gradually become generic black hole microstates once their mass is above or well above the Planck mass.



But from a bottom-up perspective, what are the first new particles that are likely to be observed? The gold standard extension of the Standard Model is the MSSM, the Minimal Supersymmetric Standard Model. Take all particles of the Standard Model, extend each of them to a superfield (the superpartner has a spin lower by 1/2, except for the superpartners of scalars that need to go to +1/2, of course), and add all the new couplings compatible with supersymmetry.

Because you find out that the Higgs superfield is chiral, you will need to double the number of Higgs fields – to have two doublets, each of which is also a superfield – to produce the masses for up-type as well as down-type quarks. This doubling of the Higgses is also necessary to cancel some anomalies that would otherwise arise from the new chiral higgsinos. As far as physical particles go, you will get 5 Higgses (8−3: three of the eight real components are eaten by the gauge bosons, which become massive and gain a longitudinal polarization): the normal CP-even Higgs, its heavier CP-even sibling, the CP-odd neutral boson, and a particle-antiparticle pair of charged Higgses. The neutral higgsinos mix with the photinos and zinos to give you four neutralinos, while the charged higgsinos mix with the winos to produce two charginos.

Aside from other virtues, the MSSM is better (because less unnaturally fine-tuned) than the Standard Model because it eliminates all of the hierarchy problem or most of it – if the superpartners are light enough, they cancel the potentially huge loop corrections to the Higgs mass, with some precision. Also, MSSM is usually (but not always) considered with an unbroken R-parity (the number of new superpartners modulo two) which makes the lightest superpartner, the LSP, stable and an excellent candidate for dark matter.

In the MSSM, the two Higgs doublets are coupled through a bilinear superpotential term with a coefficient \(\mu\). This \(\mu\) could also be expected to be large, and there's a milder, new hierarchy-like problem, the \(\mu\)-problem. The most trusted supersymmetric model beyond the MSSM is the NMSSM, the Next-to-Minimal Supersymmetric Standard Model, which upgrades this parameter \(\mu\) into a new superfield \(S\), a new singlet Higgs superfield (the \(\mu\)-term thus becomes a cubic coupling \(\lambda S H_u H_d\)). The \(\mu\)-problem is avoided and the NMSSM has some other advantages.

BHM argue that another supersymmetric extension of the Standard Model, the mu-from-nu SSM or μνSSM, should be considered the "third most canonical" supersymmetric extension of the Standard Model, if not better. The μνSSM also has new singlet superfields playing the same role as \(S\) in the NMSSM. However, in the μνSSM, the fermionic component of each new field is simultaneously a right-handed neutrino. So the new singlet Higgses and the right-handed neutrinos are unified into superfields – a nice and economic choice that exploits the available chairs in the superfields. It's nice if it can be compatible with observations. And it can, they argue.

The neutrino masses have the right magnitude because the electroweak seesaw mechanism naturally follows from the equations. The R-parity is broken but a long-lived gravitino seems like a good dark matter candidate (which is invisible to the direct searches). Also, BHM seem convinced by the mathematics that there is no reason for a "flavor blindness" of the parameters in this model. You might be afraid of flavor-changing predictions but they say that with the constraints on the neutrino mass matrices, these FCNC-like predictions are within the experimental bounds.
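
Schematically (my own summary of how such an electroweak seesaw works, not a formula quoted from the paper), the light neutrino masses come out as

\[ m_\nu \sim \frac{(Y_\nu v)^2}{M}, \]

where the effective Majorana mass \(M\) is generated by the singlet vevs and sits near the electroweak/TeV scale rather than near a GUT scale, so sub-eV neutrino masses only require neutrino Yukawas \(Y_\nu\sim 10^{-7}\)–\(10^{-6}\), comparable to the electron Yukawa.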

Because we have three generations of neutrinos, it's natural to have three new \(\mu\)-like superfields with three right-handed neutrinos and three new singlet Higgs fields. In this new paper, for the first time, BHM consider the full model with three such new \(\mu\)-fields. And with the help of some software, they also analyze the full one-loop diagrams and the equally accurate renormalization group flows to say something about the masses. Note that the loop diagrams matter in supersymmetric models – for example, even in the MSSM, they are vital to increase the tree-level prediction of the Higgs mass from \(83\GeV\) to \(125\GeV\). The loop diagrams fulfill some additional tasks in the μνSSM.
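
For orientation, the dominant one-loop effect responsible for that lift is the top-stop contribution. In the standard MSSM leading-log approximation (a textbook expression quoted here for illustration, not a formula taken from BHM's machinery),

\[ \Delta m_h^2 \simeq \frac{3\, m_t^4}{4\pi^2 v^2} \left[ \ln\frac{M_S^2}{m_t^2} + \frac{X_t^2}{M_S^2}\left( 1 - \frac{X_t^2}{12\, M_S^2} \right) \right], \]

where \(M_S\) is the average stop mass and \(X_t = A_t - \mu\cot\beta\) is the stop mixing parameter – which is also why the large \(A\)-terms favored by stringy naturalness arguments help push \(m_h\) towards \(125\GeV\).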

As a great by-product, the preliminary \(96\GeV\) Higgs boson hinted at by some diphoton and bottom-pair excesses at LEP and CMS may be one of the bosonic mass eigenstates of these new fields (sneutrinos). In another section, they discuss quite a precise setup with sneutrino masses near \(1235\GeV\); this precision is sort of intriguing. I didn't understand whether these very different values of the sneutrino masses follow from one scenario or two.

The folks seem genuinely excited about the \(96\GeV\) excess and waiting for new clues about these experimental hints. I am somewhat excited, too – but they're excited enough to find the energy to write 58 pages on calculations in a potentially relevant model.



Because we discussed fake Spaniards with Erwin, here is a Czech remake, "A Lamb and a Wolf", of a random medieval Spanish Christmas carol, Riu Riu Chiu – which, as you will probably agree, is better than the Spanish original. It is from the 1990 album "You Have to Insist on Your Truth" by the Spiritual Quintet band, from the era when the Nedvěd brothers were members (the membership seems to change frequently). This band isn't quite mainstream on the radio, but most Czechs are familiar with this kind of music, which dominates the campfires (although the Spiritual Quintet clearly sings more religious music than the most famous songs by the Nedvěd brothers and similar musicians).

by Luboš Motl (noreply@blogger.com) at June 17, 2019 03:15 PM

CERN Bulletin

Interview with the Director General on the situation of fellows

We have received many reactions through staff delegates concerning the cuts that have been made to the budget for fellows. Our colleagues are concerned, whether they are fellows themselves, supervisors of fellows, or just their colleagues. The Director General has agreed to answer our questions on this subject.

Staff Association: Thanks for taking the time to discuss with us this topic which raises concerns among our colleagues.

Director general: Before answering directly on the question of the fellows, I would first like to provide information on the financial situation of CERN, which is currently challenging.

We have two important projects in a construction phase: the LHC Injectors Upgrade (LIU) and the LHC High Luminosity Upgrade (HL-LHC). These two projects represent together a material cost of more than CHF 1 billion, but they must be carried out with a constant budget for the Organization, that is with fixed contributions from Member States, which leads to a deficit situation. Moreover, the CERN Council wants the cumulative budget deficit over the coming years (CBD) to remain contained.

In 2019 and 2020, there will actually be a peak in spending due to the completion and installation of LIU during Long Shutdown 2 (LS2), the ramp-up in spending according to the HL-LHC project plan, and the consolidation of the accelerators, to be done also during LS2. Consequently, 2019 and 2020 are difficult years from a financial point of view. I shall underline that there is no increase in the cost of the LIU and HL-LHC projects, which remain within their agreed budgets and timelines.

In addition, there are new expenditures each year, depending on the scientific requirements of the Laboratory, but also on the condition and needs of the general infrastructure. For example, this year we will add to the Medium-Term Plan (MTP) the construction of a computing centre on the Prévessin site to fulfil CERN's commitment as Tier 0 of the Worldwide LHC Computing Grid (WLCG) for the second part of Run 3 and the HL-LHC phase. We have also scheduled a new building in Prévessin with 600 additional office spaces, as well as new resources for the AWAKE project, which completed its Phase 1 in 2018 and begins its Phase 2, and for research and development for future detectors.

In this context, compensation must be found with corresponding savings elsewhere. Each year the Directorate and the Extended Directorate (directors and department heads) review and optimise the resources. In 2018 and 2019, an in-depth exercise was carried out, following an internal review, with proposals from department heads and project leaders on the potential savings that could be made in the material budget.

Now I can answer your questions regarding the fellows.

Staff Association: Can you tell us what decisions have been made about the fellows?

Director general: Since 2016, this Management has always supported the Fellowship Programme; as soon as we took office, we reversed the 10% cut in the Fellowship budget that had previously been decided. We have grown from 640 fellows at the end of 2015 to 840 at the end of 2018. This fellowship programme is both a flagship and a mission for the Organization.

This year we were forced to slightly reduce the budget at the May Fellows Committee, which was not an easy decision; this measure affects only the AWAKE, CLIC, FCC and R2E (Radiation to Electronics) projects, and only those fellows paid through a transfer from the material budget to the personnel budget ("M2P fellows"). Of course, we cannot expect to do the same amount of work with fewer personnel, but for future long-term projects, a temporary slowdown remains acceptable.

Staff Association: In practice, a Fellows Committee was held on May 21. What are the results of this Committee?

Director general: We do not have final numbers yet, but the consequences are that 8 extensions beyond two years and 20 new requests were not granted, solely in the AWAKE, CLIC, FCC and R2E projects.

Staff Association: We have also learned that projects and departments themselves have self-restrained by submitting only requests for extensions and new contracts that were sure to be awarded. How then can we judge the impact of the decisions taken?

Director general: If we compare the recent results of the Fellows Committees, we had 129 extensions and 121 new fellowship contracts awarded in November 2018, compared to 124[1] extensions and 110[1] new contracts in May 2019.

Staff Association: Isn't this also due to conjunctural variations as much as to voluntary decisions?

Director general: Indeed, this is also explained by the fact that projects such as LIU and ELENA are coming to an end; it is therefore normal that they experience a decrease in the number of fellows.

In addition, the Organization must ask itself whether the number of its fellows is optimized to meet the Organization's needs on the one hand, but also for the fellows themselves on the other hand, in terms of workload (overload or underload), work organization, including their supervision, and the quality of their experience during their stay at CERN.

To assess this, the HR department will launch a survey to get the Fellows' opinion on their working conditions, including their workload. This survey will give us important indications to form the basis for future decisions. A survey will also be conducted with the staff (“titulaires”), many of whom are also supervisors of fellows.

Staff Association: The figures announced do not show any dramatic change, 5 fewer extensions and about 10 fewer new fellowship contracts between the two committees, which seems rather inconsistent with the fact that substantial savings are being sought.

Director general: We are still in 2019 with a budget already approved last year, and we must already contain the increase in spending. Every element of savings counts. The savings made at the May Fellows Committee were less than CHF 2 million.

Staff Association: So how can one explain the significant feedback we have had from many people who have reported more significant cuts than that?

Director general: I understand that the news about the cuts in extensions and new fellowship contracts may have generated concern among many in the Organization. Cuts are never made with a light heart, but the financial situation is very difficult and there will be sacrifices to be made in the next two years, as we go through the peak of cumulative budget deficit (CBD).

Staff Association: Will the cuts made at the May 2019 Committee be renewed or even increased for future Committees?

Director general: Everything remains to be seen. Cuts have been made to the material budget of departments and projects. This budget includes goods, industrial services and M2P transfers, and therefore affects fellows and project associates. But we leave it to department heads and project leaders to find the best way to absorb these cuts and minimize their impact. I repeat that the Fellowship programme is always defended by the Management, because training the younger generation is also part of our mission. But we also need to optimize it both in terms of quantity – the number of fellows – and quality: the return for CERN and for the fellows. This is a reflection that we will have in the coming months.

Staff Association: Beyond the social consequences for the fellows themselves, supervisors, team leaders and colleagues are concerned about the increased workload for other personnel. Will they have to do the work themselves that will not be done by the fellows who have left?

Director general: Consideration is being given to optimizing the workforce in general, including also the categories of associated members of personnel. Of course, if we reduce the number of project associates and fellows, we cannot do the same work in the same time frame. But there is no magic solution. We shall first keep within a strict timeline the priority projects that are in the implementation phase, such as LIU and HL-LHC, and it will be necessary to accept minor delays on other projects in the longer term.

In addition, when the European strategy for particle physics is developed and approved by Council in May 2020, we will reconsider the priorities for future projects on the basis of the new roadmap.

For the moment the situation is quite fluid with quite strong financial constraints.

Staff Association: We understand that the Directorate is looking for savings and budget cuts in all directions. Do you intend to further increase these cuts in the budgets of the fellows? Will they reach other categories of personnel? What is the impact on the personnel in general?

Director general: There will be no impact on the personnel budget. We will only touch the material budget, with a limited impact on the M2P part. I want to remind you that at the end of 2016 we presented to Council and obtained its support for 80 new staff (“titulaires”) posts because we had considered that the deadlines and workload required this reinforcement of staff posts in our workforce.

Staff Association: Finally, we were surprised to learn of this news indirectly, without any information to the personnel and without prior concertation with the Staff Association. Cuts in the number of employed members of personnel are still something we should concert on together. What is your position on this point?

Director general: We must hold discussion and concertation with the Staff Association. We usually do it because the concertation process is very important to us.

There was no information to the personnel in general because the MTP is under development. Council must have the opportunity to see it first in view of its approval. But I have set the date of 4 July for a presentation to all members of personnel of the news from the June Council, including the MTP and the European strategy.

Staff Association: Admittedly, the development of the MTP is still in progress, but the decision to cut the budget of the fellows has already been taken, and this decision to reduce the number of employed members of personnel has not been concerted.

Director general: Indeed, this decision was both urgent and essential to contain the 2019 budget. This cut was light, but I stand in solidarity first with the fellows whose contract could not be extended beyond two years, but also with the groups and projects that experienced this cut. However, it should be recalled that a large increase in the number of fellows was granted in past years.

For future Fellows Committees, no decision has been taken yet. But the responsibility stays with the department heads to implement savings in their material budget.

 

We thank the Director general for taking the time to answer our questions. Of course, we remain very vigilant on the impact that the budgetary cuts will have on the personnel, both in numbers and on the employment and association conditions offered, even in case of transfer from the material budget to the personnel budget.

 

[1] Numbers are not final but will be very close.

June 17, 2019 09:06 AM

Peter Coles - In the Dark

Euclid Updates

Following the Euclid Consortium Meeting in Helsinki a couple of weeks ago, here are a couple of updates.

First, here is the conference photograph so you can play Spot The Telescoper:

(The picture was taken from the roof of the Finlandia Hall, by the way, which accounts for the strange viewpoint.)

The other update is that the European Space Agency has issued a press release with information about the location on the sky of the planned Euclid Deep Fields. Here they are (marked in yellow):

These deep fields amount to only about 40 square degrees, a small fraction of the total sky coverage of Euclid (~15,000 square degrees), but the Euclid telescope will point at them multiple times in order to detect very faint distant galaxies at enormous look-back times to study galaxy evolution. It is expected that these fields will produce several hundred thousand galaxy images per square degree…

Selecting these fields was a difficult task because one has to avoid bright sources in both optical and infrared (such as stars and zodiacal emission) so as not to mess with Euclid’s very sensitive camera. Roberto Scaramella gave a talk at the Helsinki Meeting showing how hard it is to find fields that satisfy all the constraints. The problem is that there are just too many stars and other bits of rubbish in the sky getting in the way of the interesting stuff!

 

For much more detail see here.

 

by telescoper at June 17, 2019 08:58 AM

June 16, 2019

John Baez - Azimuth

Applied Category Theory Meeting at UCR

 

The American Mathematical Society is having their Fall Western meeting here at U. C. Riverside during the weekend of November 9th and 10th, 2019. Joe Moeller and I are organizing a session on Applied Category Theory! We already have some great speakers lined up:

• Tai-Danae Bradley
• Vin de Silva
• Brendan Fong
• Nina Otter
• Evan Patterson
• Blake Pollard
• Prakash Panangaden
• David Spivak
• Brad Theilman
• Dmitry Vagner
• Zhenghan Wang

Alas, we have no funds for travel and lodging. If you’re interested in giving a talk, please submit an abstract here:

General information about abstracts, American Mathematical Society.

More precisely, please read the information there and then click on the link on that page to submit an abstract. It should then magically fly through the aether to me! Abstracts are due September 3rd, but the sooner you submit one, the greater the chance that we’ll have space.

For the program of the whole conference, go here:

Fall Western Sectional Meeting, U. C. Riverside, Riverside, California, 9–10 November 2019.

I will also be running a special meeting on diversity and excellence in mathematics on Friday November 8th. There will be a banquet that evening, and at some point I’ll figure out how tickets for that will work.

We had a special session like this in 2017, and it’s fun to think about how things have evolved since then.

David Spivak had already written Category Theory for the Sciences, but more recently he’s written another book on applied category theory, Seven Sketches, with Brendan Fong. He already had a company, but now he’s helping run Conexus, which plans to award grants of up to $1.5 million to startups that use category theory (in exchange for equity). Proposals are due June 30th, by the way!

I guess Brendan Fong was already working with David Spivak at MIT in the fall of 2017, but since then they’ve written Seven Sketches and developed a graphical calculus for logic in regular categories. He’s also worked on a functorial approach to machine learning—and now he’s using category theory to unify learners and lenses.

Blake Pollard had just finished his Ph.D. work at U.C. Riverside back in 2018. He will now talk about his work with Spencer Breiner and Eswaran Subrahmanian at the National Institute of Standards and Technology, using category theory to help develop the “smart grid”—the decentralized power grid we need now. Above he’s talking to Brendan Fong at the Centre for Quantum Technologies, in Singapore. I think that’s where they first met.

Nina Otter was a grad student at Oxford in 2017, but now she’s at UCLA and the University of Leipzig. She worked with Ulrike Tillmann and Heather Harrington on stratifying multiparameter persistent homology, and is now working on a categorical formulation of positional and role analysis in social networks. Like Brendan, she’s on the executive board of the applied category theory journal Compositionality.

I first met Tai-Danae Bradley at ACT2018. Now she will talk about her work at Tunnel Technologies, a startup run by her advisor John Terilla. They model sequences—of letters from an alphabet, for instance—using quantum states and tensor networks.

Vin de Silva works on topological data analysis using persistent cohomology so he’ll probably talk about that. He’s studied the “interleaving distance” between persistence modules, using category theory to treat it and the Gromov-Hausdorff metric in the same setting. He came to the last meeting and it will be good to have him back.

Evan Patterson is a statistics grad student at Stanford. He’s worked on knowledge representation in bicategories of relations, and on teaching machines to understand data science code by the semantic enrichment of dataflow graphs. He too came to the last meeting.

Dmitry Vagner was also at the last meeting, where he spoke about his work with Spivak on open dynamical systems and the operad of wiring diagrams. Now he is implementing wiring diagrams and a type-safe linear algebra library in Idris. The idea is to avoid problems that people currently run into a lot in TensorFlow (“ugh I have a 3 x 1 x 2 tensor but I need a 3 x 2 tensor”).

Prakash Panangaden has long been a leader in applied category theory, focused on semantics and logic for probabilistic systems and languages, machine learning, and quantum information theory.

Brad Theilman is a grad student in computational neuroscience at U.C. San Diego. I first met him at ACT2018. He’s using algebraic topology to design new techniques for quantifying the spatiotemporal structure of neural activity in the auditory regions of the brain of the European starling. (I bet you didn’t see those last two words coming!)

Last but not least, Zhenghan Wang works on condensed matter physics and modular tensor categories at U.C. Santa Barbara. At Microsoft’s Station Q, he is using this research to help design topological quantum computers.

In short: a lot has been happening in applied category theory, so it will be good to get together and talk about it!

by John Baez at June 16, 2019 08:41 PM

Emily Lakdawalla - The Planetary Society Blog

Reconstructing the Cost of the One Giant Leap
How much did Project Apollo cost? Planetary Society experts answered that question by revisiting primary sources and reconstructing Apollo's entire cost history from 1960 to 1973.

June 16, 2019 05:09 PM

Peter Coles - In the Dark

Coloured Ball Illusion

This image, created by David Novick, is the most impressive colour illusion I have ever seen: all the balls are actually the same colour, brown.

If you don’t believe me, zoom in on any one of them…

I don’t really know why this fascinating image causes the effect that it does, but I think it is a combination of hardware and software issues! The hardware issues include the fact that colour receptors are not distributed uniformly at the back of the human eye, so colour perception is different when peripheral cues are present, and also that their spectral response is rather broad, with considerable overlap between the three types of cell. The software issue is something to do with how the brain resolves a colour when there are other colours nearby: notice how the balls take on the colour of the lines passing across them.
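If you want to convince yourself that the crossing lines drive the effect, you can generate a crude version of this kind of figure yourself. Here is a rough Python sketch (my own construction, not David Novick's method): every disc is filled with exactly the same brown, and the only thing that differs from ball to ball is the colour of the stripes drawn across it.

```python
# Crude colour-assimilation figure: identical brown discs, each crossed
# by stripes of a single colour. (My own sketch, not Novick's method.)
import numpy as np
import matplotlib.pyplot as plt

H, W = 400, 600
stripe_colours = np.array([[0.0, 0.6, 0.2],   # green
                           [0.9, 0.1, 0.1],   # red
                           [0.2, 0.3, 0.9]])  # blue
brown = np.array([0.55, 0.35, 0.2])

rows = np.arange(H)
stripe_idx = (rows // 8) % 3                  # 8-pixel stripes, 3 colours
img = np.empty((H, W, 3))
img[:] = stripe_colours[stripe_idx][:, None, :]

yy, xx = np.mgrid[0:H, 0:W]
centres = [(100, 120, 0), (200, 300, 1), (300, 480, 2)]  # y, x, stripe kept
for cy, cx, keep in centres:
    ball = (yy - cy) ** 2 + (xx - cx) ** 2 < 60 ** 2
    img[ball] = brown                          # same brown for every ball
    crossing = ball & (stripe_idx[:, None] == keep)
    img[crossing] = stripe_colours[keep]       # stripes in front of the ball

plt.imshow(img)
plt.axis('off')
plt.show()
```

Zooming in on any one disc in the output shows the identical brown, just as with the original image.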

by telescoper at June 16, 2019 07:34 AM

June 15, 2019

Emily Lakdawalla - The Planetary Society Blog

LightSail 2 Launch Viewing: Tips & Tricks
LightSail 2 is launching on the next SpaceX Falcon Heavy rocket from Launch Complex 39A at Kennedy Space Center in Florida. It is one payload of many on the mission known collectively as STP-2. Space Test Program (STP) is a crucial part of the US Air Force’s development of advanced technologies in space.

June 15, 2019 04:12 PM

June 14, 2019

Matt Strassler - Of Particular Significance

A Ring of Controversy Around a Black Hole Photo

[Note Added: Thanks to some great comments I’ve received, I’m continuing to add clarifying remarks to this post.  You’ll find them in green.]

It’s been a couple of months since the `photo’ (a false-color image created to show the intensity of radio waves, not visible light) of the black hole at the center of the galaxy M87, taken by the Event Horizon Telescope (EHT) collaboration, was made public. Before it was shown, I wrote an introductory post explaining what the ‘photo’ is and isn’t. There I cautioned readers that I thought it might be difficult to interpret the image, and controversies about it might erupt.

So far, the claim that the image shows the vicinity of M87’s black hole (which I’ll call `M87bh’ for short) has not been challenged, and I’m not expecting it to be. But what and where exactly is the material that is emitting the radio waves and thus creating the glow in the image? And what exactly determines the size of the dark region at the center of the image? These have been problematic issues from the beginning, but discussion is starting to heat up. And it’s important: it has implications for the measurement of the black hole’s mass (which EHT claims is that of 6.5 billion Suns, with an uncertainty of about 15%), and for any attempt to estimate its rotation rate.

Over the last few weeks I’ve spent some time studying the mathematics of spinning black holes, talking to my Harvard colleagues who are among the world’s experts on the relevant math and physics, and learning from colleagues who produced the `photo’ and interpreted it. So I think I can now clearly explain what most journalists and scientist-writers (including me) got wrong at the time of the photo’s publication, and clarify what the photo does and doesn’t tell us.

One note before I begin: this post is long. But it starts with a summary of the situation that you can read quickly, and then comes the long part: a step-by-step non-technical explanation of an important aspect of the black hole ‘photo’ that, to my knowledge, has not yet been given anywhere else.

[I am heavily indebted to Harvard postdocs Alex Lupsasca and Shahar Hadar for assisting me as I studied the formulas and concepts relevant for fast-spinning black holes. Much of what I learned comes from early 1970s papers, especially those by my former colleague Professor Jim Bardeen (see this one written with Press and Teukolsky), and from papers written in the last couple of years, especially this one by my present and former Harvard colleagues.]

What Does the EHT Image Show?

Scientists understand the black hole itself — the geometric dimple in space and time — pretty well. If one knows the mass and the rotation rate of the black hole, and assumes Einstein’s equations for gravity are mostly correct (for which we have considerable evidence, for example from LIGO measurements and elsewhere), then the equations tell us what the black hole does to space and time and how its gravity works.

But for the `photo’, ​that’s not enough information. We don’t get to observe the black hole itself (it’s black, after all!) What the `photo’ shows is a blurry ring of radio waves, emitted from hot material (a plasma of mostly electrons and protons) somewhere around the black hole — material whose location, velocity, and temperature we do not know. That material and its emission of radio waves are influenced by powerful gravitational forces (whose details depend on the rotation rate of the M87bh, which we don’t know yet) and powerful magnetic fields (whose details we hardly know at all.) The black hole’s gravity then causes the paths on which the radio waves travel to bend, even more than a glass lens will bend the path of visible light, so that where things appear in the ‘photo’ is not where they are actually located.

The only insights we have into this extreme environment come from computer simulations and a few other `photos’ at lower magnification. The simulations are based on well-understood equations, but the equations have to be solved approximately, using methods that may or may not be justified. And the simulations don’t tell you where the matter is; they tell you where the material will go, but only after you make a guess as to where it is located at some initial point in time. (In the same sense: computers can predict the national weather tomorrow only when you tell them what the national weather was yesterday.) No one knows for sure how accurate or misleading these simulations might be; they’ve been tested against some indirect measurements, but no one can say for sure what flaws they might have.

However, there is one thing we can certainly say, and it has just been said publicly in a paper by Samuel Gralla, Daniel Holz and Robert Wald.

Two months ago, when the EHT `photo’ appeared, it was widely reported in the popular press and on blogs that the photo shows the image of a photon sphere at the edge of the shadow of the M87bh. (Instead of `shadow’, I suggested the term ‘quasi-silhouette‘, which I viewed as somewhat less misleading to a non-expert.)

Unfortunately, it seems these statements are not true; and this was well-known to (but poorly communicated by, in my opinion) the EHT folks.  This lack of clarity might perhaps annoy some scientists and science-loving non-experts; but does this issue also matter scientifically? Gralla et al., in their new preprint, suggest that it does (though they were careful to not yet make a precise claim.)

The Photon Sphere Doesn’t Exist

Indeed, if you happened to be reading my posts carefully when the `photo’ first appeared, you probably noticed that I was quite vague about the photon-sphere — I never defined precisely what it was. You would have been right to read this as a warning sign, for indeed I wasn’t getting clear explanations of it from anyone. Studying the equations and conversing with expert colleagues, I soon learned why: for a rotating black hole, the photon sphere doesn’t really exist.

But let’s first define what the photon sphere is for a non-rotating black hole! Like the Earth’s equator, the photon sphere is a location, not an object. This location is the surface of an imaginary ball, lying well outside the black hole’s horizon. On the photon sphere, photons (the particles that make up light, radio waves, and all other electromagnetic waves) travel on special circular or spherical orbits around the black hole.
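For a non-rotating black hole the photon sphere sits at r = 3GM/c², one and a half times the horizon radius. As a quick illustration (my own arithmetic, using the EHT-quoted mass from above), here is what that works out to for the M87bh:

```python
# Photon-sphere radius for a non-rotating black hole: r_ph = 3GM/c^2,
# i.e. 1.5 times the Schwarzschild (horizon) radius. Mass from EHT's quote.
G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s
M_sun = 1.989e30   # kg
AU = 1.496e11      # m

M = 6.5e9 * M_sun
r_horizon = 2 * G * M / c**2
r_photon = 1.5 * r_horizon

print(f"horizon radius:       {r_horizon / AU:.0f} AU")   # ~130 AU
print(f"photon-sphere radius: {r_photon / AU:.0f} AU")    # ~190 AU
```

So a non-rotating black hole of the M87bh's mass would have a photon sphere several times the size of Neptune's orbit.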

By contrast, a rotating black hole has a larger, broader `photon-zone’ where photons can have special orbits. But you won’t ever see the whole photon zone in any image of a rotating black hole. Instead, a piece of the photon zone will appear as a `photon ring‘, a bright and very thin loop of radio waves. However, the photon ring is not the edge of anything spherical, is generally not perfectly circular, and generally is not even perfectly centered on the black hole.

… and the Photon Ring Isn’t What We See…

It seems likely that the M87bh is rotating quite rapidly, so it has a photon-zone rather than a photon-sphere, and images of it will have a photon ring. Ok, fine; but then, can we interpret EHT’s `photo’ simply as showing the photon ring, blurred by the imperfections in the `telescope’? Although some of the EHT folks have seemed to suggest the answer is “yes”, Gralla et al. suggest the answer is likely “no” (and many of their colleagues have been pointing out the same thing in private.) The circlet of radio waves that appears in the EHT `photo’ is probably not simply a blurred image of M87bh’s photon ring; it probably shows a combination of the photon ring with something brighter (as explained below). That’s where the controversy starts.

…so the Dark Patch May Not Be the Full Shadow…

The term `shadow’ is confusing (which is why I prefer `quasi-silhouette’ in describing it in public contexts, though that’s my own personal term) but no matter what you call it, in its ideal form it is supposed to be an absolutely dark area whose edge is the photon ring. But in reality the perfectly dark area need not appear so dark after all; it may be partly filled in by various effects. Furthermore, since the `photo’ may not show us the photon ring, it’s far from clear that the dark patch in the center is the full shadow anyway. The EHT folks are well aware of this, but at the time the photo came out, many science writers and scientist-writers (including me) were not.

…so EHT’s Measurement of the M87bh’s Mass is Being Questioned

It was wonderful that EHT could make a picture that could travel round the internet at the speed of light, and generate justifiable excitement and awe that human beings could indirectly observe such an amazing thing as a black hole with a mass of several billion Sun-like stars. Qualitatively, they achieved something fantastic in showing that yes, the object at the center of M87 really is as compact and dark as such a black hole would be expected to be! But the EHT telescope’s main quantitative achievement was a measurement of the mass of the M87bh, with a claimed precision of about 15%.

Naively, one could imagine that the mass is measured by looking at the diameter of the dark spot in the black hole ‘photo’, under the assumption that it is the black hole’s shadow. So here’s the issue: Could interpreting the dark region incorrectly perhaps lead to a significant mistake in the mass measurement, and/or an underestimate of how uncertain the mass measurement actually is?

I don’t know.  The EHT folks are certainly aware of these issues; their simulations show them explicitly.  The mass of the M87bh isn’t literally measured by putting a ruler on the ‘photo’ and measuring the size of the dark spot! The actual methods are much more sophisticated than that, and I don’t understand them well enough yet to explain, evaluate or criticize them. All I can say with confidence right now is that these are important questions that experts currently are debating, and consensus on the answer may not be achieved for quite a while.

———————————————————————-

The Appearance of a Black Hole With Nearby Matter

Ok, now I’m going to explain the most relevant points, step-by-step. Grab a cup of coffee or tea, find a comfy chair, and bear with me.

Because fast-rotating black holes are more complicated, I’m going to start illuminating the controversy by looking at a non-rotating black hole’s properties, which is also what Gralla et al. mainly do in their paper. It turns out the qualitative conclusion drawn from the non-rotating case largely applies in the rotating case too, at least in the case of the M87bh as seen from our perspective; that’s important because the M87bh may well be rotating at a very good clip.

A little terminology first: for a rotating black hole there’s a natural definition of the poles and the equator, just as there is for the Earth: there’s an axis of rotation, and the poles are where that axis intersects with the black hole horizon. The equator is the circle that lies halfway between the poles. For a non-rotating black hole, there’s no such axis and no such automatic definition, but it will be useful to define the north pole of the black hole to be the point on the horizon closest to us.

A Single Source of Electromagnetic Waves

Let’s imagine placing a bright light bulb on the same plane as the equator, outside the black hole horizon but rather close to it. (The bulb could emit radio waves or visible light or any other form of electromagnetic waves, at any frequency; for what I’m about to say, it doesn’t matter at all, so I’ll just call it `light’.) See Figure 1. Where will the light from the bulb go?

Some of it, heading inward, ends up in the black hole, while some of it heads outward toward distant observers. The gravity of the black hole will bend the path of the light. And here’s something remarkable: a small fraction of the light, aimed just so, can actually spiral around the black hole any number of times before heading out. As a result, you will see the bulb not once but multiple times!

There will be a direct image — light that comes directly to us — from near the bulb’s true location (displaced because gravity bends the light a bit, just as a glass lens will distort the appearance of what’s behind it.) The path of that light is the orange arrow in Figure 1. But then there will be an indirect image (the green arrow in Figure 1) from light that goes halfway around the black hole before heading in our direction; we will see that image of the bulb on the opposite side of the black hole. Let’s call that the `first indirect image.’ Then there will be a second indirect image from light that orbits the black hole once and comes out near the direct image, but further out; that’s the blue arrow in Figure 1. Then there will be a third indirect image from light that goes around one and a half times (not shown), and so on. In short, Figure 1 shows the paths of the direct, first indirect, and second indirect images of the bulb as they head toward our location at the top of the image.

BHTruthBulb.png

Figure 1: A light bulb (yellow) outside but near the non-rotating black hole’s horizon (in black) can be seen by someone at the top of the image not only through the light that goes directly upward (orange line) — a “direct image” — but also through light that makes partial or complete orbits of the black hole — “indirect images.” The first indirect and second indirect images are from light taking the green and blue paths. For light to make orbits of the black hole, it must travel near the grey-dashed circle that indicates the location of a “photon-sphere.” (A rotating black hole has no such sphere, but when seen from the north or south pole, the light observed takes similar paths to what is shown in this figure.) [The paths of the light rays were calculated carefully using Mathematica 11.3.]

What you can see in Figure 1 is that both the first and second indirect images are formed by light that spends part of its time close to a special radius around the black hole, shown as a dotted line. This imaginary surface, the edge of a ball, is an honest “photon-sphere” in the case of a non-rotating black hole.
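Paths like these are easy to reproduce without Mathematica. For a non-rotating black hole, a light ray's trajectory obeys the standard orbit equation d²u/dφ² = −u + (3/2) r_s u², where u = 1/r and r_s is the horizon radius. Here is a rough Python sketch (my own, with launch directions chosen arbitrarily to show a captured ray, a ray that winds near the photon sphere before escaping, and a more nearly direct ray):

```python
# Photon paths around a non-rotating black hole, integrating the standard
# orbit equation d^2u/dphi^2 = -u + (3/2) r_s u^2, u = 1/r (units: r_s = 1).
# The three launch directions below are my own arbitrary choices.
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

r_s = 1.0

def rhs(phi, y):
    u, du = y
    return [du, -u + 1.5 * r_s * u**2]

def hit_horizon(phi, y):      # stop if the ray falls into the hole
    return y[0] - 1.0 / r_s
hit_horizon.terminal = True

def escaped(phi, y):          # stop once the ray is far away (r = 1000 r_s)
    return y[0] - 1e-3
escaped.terminal = True

r0 = 1.3 * r_s                # the "bulb", just outside the horizon
for du0 in (0.10, 0.11, 0.20):
    sol = solve_ivp(rhs, (0.0, 8 * np.pi), [1.0 / r0, -du0],
                    events=(hit_horizon, escaped), max_step=0.05)
    r, phi = 1.0 / sol.y[0], sol.t
    plt.plot(r * np.cos(phi), r * np.sin(phi))

plt.gca().add_patch(plt.Circle((0, 0), r_s, color='k'))   # the horizon
plt.gca().set_aspect('equal')
plt.xlim(-8, 8); plt.ylim(-8, 8)
plt.show()
```

The first ray turns around and falls into the hole; the second barely clears the critical direction, so it lingers near r = 1.5 r_s before escaping, and rays like it are the ones that build the indirect images; the third escapes with only modest bending, like the light forming the direct image.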

In the case of a rotating black hole, something very similar happens when you’re looking at the black hole from its north (or south) pole; there’s a special circle then too. But that circle is not the edge of a photon-sphere! In general, photons can have special orbits in a wide region, which I called the “photon-zone” earlier, and only a small set of them are on this circle. You’ll see photons from other parts of the photon zone if you look at the black hole not from the poles but from some other angle.

[If you’d like to learn a bit more about the photon zone, and you have a little bit of knowledge of black holes already, you can profit from exploring this demo by Professor Leo Stein: https://duetosymmetry.com/tool/kerr-circular-photon-orbits/ ]

Back to the non-rotating case: What our camera will see, looking at what is emitted from the light bulb, is shown in Figure 2: an infinite number of increasingly squished `indirect’ images, half on one side of the black hole near the direct image, and the other half on the other side. What is not obvious, but true, is that only the first of the indirect images is large and bright; this is one of Gralla et al.‘s main points. We can, therefore, separate the images into the direct image, the first indirect image, and the remaining indirect images. The total amount of light coming from the direct image and the first indirect image can be large, but the total amount of light from the remaining indirect images is typically (according to Gralla et al.) less than 5% of the light from the first indirect image. And so, unless we have an extremely high-powered camera, we’ll never pick those other images up. Let’s therefore focus our attention on the direct image and the first indirect image.

BHObsvBulb3.png

Figure 2: What the drawing in Figure 1 actually looks like to the observer peering toward the black hole; all the indirect images lie at almost exactly the same distance from the black hole’s center.

WARNING (since this seems to be a common confusion):

IN ALL MY FIGURES IN THIS POST, AS IN THE BLACK HOLE `PHOTO’ ITSELF, THE COLORS OF THE IMAGES ARE CHOSEN ARBITRARILY (as explained in my first blog post on this subject.) THE `PHOTO’ WAS TAKEN AT A SINGLE, NON-VISIBLE FREQUENCY OF ELECTROMAGNETIC WAVES: EVEN IF WE COULD SEE THAT TYPE OF RADIO WAVE WITH OUR EYES, IT WOULD BE A SINGLE COLOR, AND THE ONLY THING THAT WOULD VARY ACROSS THE IMAGE IS BRIGHTNESS. IN THIS SENSE, A BLACK AND WHITE IMAGE MIGHT BE CLEARER CONCEPTUALLY, BUT IT IS HARDER FOR THE EYE TO PROCESS.

A Circular Source of Electromagnetic Waves

Proceeding step by step toward a more realistic situation, let’s replace our ordinary bulb by a circular bulb (Figure 3), again set somewhat close to the horizon, sitting in the plane that contains the equator. What would we see now?

BHTruthCirc2.png

Figure 3: If we replace the light bulb with a circle of light, the paths of the light are the same as in Figure 1, except now for each point along the circle. That means each direct and indirect image itself forms a circle, as shown in the next figure.

That’s shown in Figure 4: the direct image is a circle (looking somewhat larger than it really is); outside it sits the first indirect image of the ring; and then come all the other indirect images, looking quite dim and all piling up at one radius. We’re going to call all those piled-up images the “photon ring”.

BHObsvCirc3.png

Figure 4: The circular bulb’s direct image is the bright circle, but a somewhat dimmer first indirect image appears further out, and just beyond one finds all the other indirect images, forming a thin `photon ring’.

Importantly, if we consider circular bulbs of different diameter [yellow, red and blue in Figure 5], then although the direct images reflect the differences in the bulbs’ diameters (somewhat enlarged by lensing), the first indirect images all are about the same diameter, just a tad larger or smaller than the photon ring.  The remaining indirect images all sit together at the radius of the photon ring.

BH3Circ4.png

Figure 5: Three bulbs of different diameter (yellow, blue, red) create three distinct direct images, but their first indirect images are located much closer together, and very close to the photon ring where all their remaining indirect images pile up.

These statements are also essentially true for a rotating black hole seen from the north or south pole; a circular bulb generates a series of circular images, and the indirect images all pile more or less on top of each other, forming a photon ring. When viewed off the poles, the rotating black hole becomes a more complicated story, but as long as the viewing angle is small enough, the changes are relatively minor and the picture is qualitatively somewhat similar.

A Disk as a Source of Electromagnetic Waves

And what if you replaced the circular bulb with a disk-shaped bulb, a sort of glowing pancake with a circular hole at its center, as in Figure 6? That’s relevant because black holes are thought to have `accretion disks’ made of material orbiting the black hole, and eventually spiraling in. The accretion disk may well be the dominant source emitting radio waves at the M87bh. (I’m showing a very thin uniform disk for illustration, but a real accretion disk is not uniform, changes rapidly as clumps of material move within it and then spiral into the black hole, and may be quite thick — as thick as the black hole is wide, or even thicker.)

Well, we can think of the disk as many concentric circles of light placed together. The direct images of the disk (shown in Figure 6 left, on one side of the disk, as an orange wash) would form a disk in your camera, the dim red region in Figure 6 right; the hole at its center would appear larger than it really is due to the bending caused by the black hole’s gravity, but the shape would be similar. However, the indirect images would all pile up in almost the same place from your perspective, forming a bright and quite thin ring, the bright yellow circle in Figure 6 right. (The path of the disk’s first indirect image is shown in Figure 6 left, going halfway about the black hole as a green wash; notice how it narrows as it travels, which is why it appears as a narrow ring in the image at right.) This circle — the full set of indirect images of the whole disk — is the edge of the photon-sphere for a non-rotating black hole, and the circular photon ring for a rotating black hole viewed from its north or south pole.

BHDisk2.png

Figure 6: A glowing disk of material (note it does not touch the black hole) looks like a version of Figure 5 with many more circular bulbs. The direct image of the disk forms a disk (illustrated at left, for a piece of the disk, as an orange wash) while the first indirect image becomes highly compressed (illustrated, for a piece of the disk, as a green wash) and is seen as a narrow circle of bright light.  (It is expected that the disk is mostly transparent in radio waves, so the indirect image can pass through it.) That circle, along with the other indirect images, forms the photon ring. In this case, because the disk’s inner edge lies close to the black hole horizon, the photon ring sits within the disk’s direct image, but we’ll see a different example in Figure 9.

[Gralla et al. call the first indirect image the `lensed ring’ and the remaining indirect images, currently unobservable at EHT, the `photon ring’, while EHT refers to all the indirect images as the `photon ring’. Just letting you know in case you hear `lensed ring’ referred to in future.]

So the conclusion is that if we had a perfect camera, the direct image of a disk makes a disk, but the indirect images (mainly just the first one, as Gralla et al. emphasize) make a bright, thin ring that may be superposed upon the direct image of the disk, depending on the disk’s shape.

And this conclusion, with some important adjustments, applies also for a spinning black hole viewed from above its north or south pole — i.e., along its axis of rotation — or from near that axis; I’ll mention the adjustments in a moment.

But EHT is not a perfect camera. To make the black hole image, technology had to be pushed to its absolute limits. Someday we’ll see both the disk and the ring, but right now, they’re all blurred together. So which one is more important?

From a Blurry Image to Blurry Knowledge

What does a blurry camera do to this simple image? You might think that the disk is so dim and the ring so bright that the camera will mainly show you a blurry image of the bright photon ring. But that’s wrong. The ring isn’t bright enough. A simple calculation reveals that the photo will show mainly the disk, not the photon ring! This is shown in Figure 7, which you can compare with the Black Hole `photo’ (Figure 8). (Figure 7 is symmetric around the ring, but the photo is not, for multiple reasons — Doppler-like effect from rotation, viewpoint off the rotation axis, etc. — which I’ll have to defer until another post.)

More precisely, the ring and disk blur together, but the brightness of the image is dominated by the disk, not the ring.

BHBlurDisk_a1_2.png

Figure 7: At left is repeated the image in Figure 6, as seen in a perfect camera, while at right the same image is shown when observed using a camera with imperfect vision. The disk and ring blur together into a single thick ring, whose brightness is dominated by the disk. Note that the shadow — the region surrounded by the yellow photon ring — is not the same as the dark patch in the right-hand image; the dark patch is considerably smaller than the shadow.
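You can check the logic of that “simple calculation” with a toy model. The sketch below (my own, with made-up brightness numbers, not EHT's actual analysis) builds an image of a broad dim disk plus a thin ring that is locally ten times brighter, then blurs it; the disk still dominates the total light simply because it covers far more area.

```python
# Toy model (made-up numbers): a dim wide disk plus a locally bright thin
# ring, blurred to mimic a telescope with limited resolution.
import numpy as np
from scipy.ndimage import gaussian_filter
import matplotlib.pyplot as plt

N = 512
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
r = np.hypot(x, y)

image = np.zeros((N, N))
image[(r > 60) & (r < 180)] = 1.0        # broad accretion-disk emission
ring = np.abs(r - 100) < 1.5             # thin photon ring, 10x brighter
image[ring] = 10.0

blurred = gaussian_filter(image, sigma=25)   # crude stand-in for EHT blurring

disk_flux = image.sum() - 10.0 * ring.sum()
print(f"disk flux / ring flux ~ {disk_flux / (10.0 * ring.sum()):.1f}")  # ~5

fig, axes = plt.subplots(1, 2)
for ax, im, title in zip(axes, (image, blurred), ("perfect camera", "blurred")):
    ax.imshow(im, cmap="afmhot")
    ax.set_title(title)
    ax.axis("off")
plt.show()
```

Despite the ring's tenfold surface brightness, in this toy example the disk carries roughly five times more total light, and the blurred image looks like a thick disk-dominated ring.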

Let’s say that again: the black hole `photo’ may mainly show the M87bh’s accretion disk, with the photon ring contributing only some of the light, and therefore the photon ring does not completely and unambiguously determine the radius of the observed dark patch in the `photo’. In general, the patch could be considerably smaller than what is usually termed the `shadow’ of the black hole.

M87BH_Vicinity_Photo_2a.png

Figure 8: (Left) We probably observe the M87bh at a small angle off its south pole. Its accretion disk has an unknown size and shape — it may be quite thick and non-uniform — and it may not even lie at the black hole’s equator. The disk and the black hole interact to create outward-going jets of material (observed already many years ago but not clearly visible in the EHT ‘photo’.) (Right) The EHT `photo’ of the M87bh (taken in radio waves and shown in false color!) Compare with Figure 7; the most important difference is that one side of the image is brighter than the other. This likely arises from (a) our view being slightly off from the south pole, combined with (b) rotation of the black hole and its disk, and (c) possibly other more subtle issues.

This is important. The photon ring’s diameter, and thus the width of the `shadow’ too, barely depend on the rotation rate of the black hole; they depend almost exclusively on the black hole’s mass. So if the ring in the photo were simply the photon ring of the M87bh, you’d have a very simple way to measure the black hole’s mass without knowing its rotation rate: you’d look at how large the dark patch is, or equivalently, the diameter of the blurry ring, and that would give you the answer to within 10%. But it’s nowhere near so simple if the blurry ring shows the accretion disk, because the accretion disk’s properties and appearance can vary much more than the photon ring; they can depend strongly on the black hole’s rotation rate, and also on magnetic fields and other details of the black hole’s vicinity.
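For a sense of the numbers: for a non-rotating black hole the photon ring's angular diameter is θ = 2√27 GM/(c²D), which depends only on the mass M and the distance D. A back-of-envelope sketch (my own arithmetic; 16.8 Mpc is the commonly quoted distance to M87):

```python
# Back-of-envelope: photon-ring angular diameter for a non-rotating black
# hole, theta = 2*sqrt(27)*GM/(c^2 D), with EHT's quoted mass for M87.
import numpy as np

G, c = 6.674e-11, 2.998e8
M_sun, Mpc = 1.989e30, 3.086e22

M = 6.5e9 * M_sun
D = 16.8 * Mpc               # commonly quoted distance to M87

theta = 2 * np.sqrt(27) * G * M / (c**2 * D)                 # radians
print(f"{np.degrees(theta) * 3.6e9:.0f} micro-arcseconds")   # ~40
```

That ~40 micro-arcseconds matches the scale of the blurry ring in the EHT image, which is exactly why the simple interpretation is so tempting; the catch, as just explained, is that if the blurry ring is mostly disk, this clean inversion from ring size to mass no longer applies.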

The Important Role of Rotation

If we conclude that EHT is seeing a mix of the accretion disk with the photon ring, with the former dominating the brightness, then this makes EHT’s measurement of the M87bh’s mass more confusing and even potentially suspect. Hence: controversy. Is it possible that EHT underestimated their uncertainties, and that their measurement of the black hole mass has more ambiguities, and is not as precise, as they currently claim?

Here’s where the rotation rate is important. Despite what I showed (for pedagogical simplicity) in Figure 7, for a non-rotating black hole the accretion disk’s central gap is actually expected to lie outside the photon ring; this is shown at the top of Figure 9.  But  the faster the black hole rotates, the smaller this central gap is expected to be, to the point that for a fast-rotating black hole the gap will lie inside the photon ring, as shown at the bottom of Figure 9. (This tendency is not obvious; it requires understanding details of the black hole geometry.) And if that is true, the dark patch in the EHT image may not be the black hole’s full shadow (i.e. quasi-silhouette), which is the region inside the photon ring. It may be just the inner portion of it, with the outer portion obscured by emission from the accretion disk.

The effect of blurring in the two cases of slow (or zero) and fast rotation are illustrated in Figure 9, where the photon ring’s size is taken to be the same in each case but the disk’s inner edge is close in or far out. (The black holes, not illustrated since they aren’t visible anyway, differ in mass by about 10% in order to have the photon ring the same size.) This shows why the size of the dark patch can be quite different, depending on the disk’s shape, even when the photon ring’s size is the same.

BHBlurDisk_a0_a1_3.png

Figure 9: Comparing the appearance of slightly more realistically-shaped disks around slowly rotating or non-rotating black holes (top) to those around fast-rotating black holes (bottom) of the same mass, as seen from the north or south pole. (Left) the view in a perfect camera; (right) rough illustration of the effect of blurring in the current version of the EHT. The faster the black hole is spinning, the smaller the central gap in the accretion disk is likely to be. No matter what the extent of the accretion disk (dark red), the photon ring (yellow) remains at roughly the same location, changing only by 10% between a non-rotating black hole and a maximally rotating black hole of the same mass. But blurring in the camera combines the disk and photon ring into a thick ring whose brightness is dominated by the disk rather than the ring, and which can therefore be of different size even though the mass is the same. This implies that the radius of the blurry ring in the EHT `photo’, and the size of the dark region inside it, cannot by themselves tell us the black hole’s mass; at a minimum we must also know the rotation rate (which we do not.)
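The claim that the disk's central gap shrinks as the spin grows can be made quantitative. In the simplest picture the gap's inner edge is the innermost stable circular orbit (ISCO), whose radius for prograde equatorial orbits follows the classic Bardeen-Press-Teukolsky formula; here is a small sketch of it (my own code, standard formula):

```python
# ISCO radius vs. spin for prograde equatorial orbits (Bardeen, Press &
# Teukolsky 1972), in units of GM/c^2; a = 0 is non-rotating, a -> 1 maximal.
import numpy as np

def r_isco(a):
    z1 = 1 + (1 - a**2)**(1 / 3) * ((1 + a)**(1 / 3) + (1 - a)**(1 / 3))
    z2 = np.sqrt(3 * a**2 + z1**2)
    return 3 + z2 - np.sqrt((3 - z1) * (3 + z1 + 2 * z2))

for a in (0.0, 0.5, 0.8, 0.94, 0.998):
    print(f"a = {a:5.3f}:  ISCO at {r_isco(a):.2f} GM/c^2")
# The inner edge moves from 6 GM/c^2 (non-rotating) toward 1 GM/c^2,
# i.e. from well outside to well inside the non-rotating photon sphere
# at 3 GM/c^2, while the apparent photon-ring size changes by only ~10%.
```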

Gralla et al. subtly raise these questions but are careful not to overstate their case, perhaps because they have not yet completed their study of rotating black holes. But the question is now in the air.

I’m interested to hear what the EHT folks have to say about it, as I’m sure they have detailed arguments in favor of their procedures. In particular, EHT’s simulations show all of the effects mentioned above; there’s none of this of which they are unaware. (In fact, the reason I know my illustrations above are reasonable is partly because you can see similar pictures in the EHT papers.) As long as the EHT folks correctly accounted for all the issues, then they should have been able to properly measure the mass and estimate their uncertainties correctly. In fact, they don’t really use the photo itself; they use more subtle techniques applied to their telescope data directly. Thus it’s not enough to argue the photo itself is ambiguous; one has to argue that EHT’s more subtle analysis methods are flawed. No one has argued that yet, as far as I am aware.

But the one thing that’s clear right now is that science writers almost uniformly got it wrong [because the experts didn’t explain these points well] when they tried to describe the image two months ago. The `photo’ probably does not show “a photon ring surrounding a shadow.” That would be nice and simple and impressive-sounding, since it refers to fundamental properties of the black hole’s warping effects on space. But it’s far too glib, as Figures 7 and 9 show. We’re probably seeing an accretion disk supplemented by a photon ring, all blurred together, and the dark region may well be smaller than the black hole’s shadow.

(Rather than, or in addition to, the accretion disk, it is also possible that the dominant emission in the photo comes from the inner portion of one of the jets that emerges from the vicinity of the black hole; see Figure 8 above. This is another detail that makes the situation more difficult to interpret, but doesn’t change the main point I’m making.)

Someday in the not distant future, improved imaging should allow EHT to separately image the photon ring and the disk, so both can be observed easily, as in the left side of Figure 9. Then all these questions will be answered definitively.

Why the Gargantua Black Hole from Interstellar is Completely Different

Just as a quick aside, what would you see if an accretion disk were edge-on rather than face-on? Then, in a perfect camera, you’d see something like the famous picture of Gargantua, the black hole from the movie Interstellar — a direct image of the front edge of the disk, and a strongly lensed indirect image of the back side of the disk, appearing both above and below the black hole, as illustrated in Figure 10. And that leads to the Gargantua image from the movie, also shown in Figure 10. Notice the photon ring (which is, as I cautioned you earlier, off-center!)   [Note added: this figure has been modified; in the original version I referred to the top and bottom views of the disk’s far side as the “1st indirect image”, but as pointed out by Professor Jean-Pierre Luminet, that’s not correct terminology here.]

BHGarg4.png

Figure 10: The movie Interstellar features a visit to an imaginary black hole called Gargantua, and the simulated images in the movie (from 2014) are taken from near the equator, not the pole. As a result, the direct image of the disk cuts across the black hole, and indirect images of the back side of the disk are seen above and below the black hole. There is also a bright photon ring, slightly off center; this is well outside the surface of the black hole, which is not visible. A real image would not be symmetric left-to-right; it would be brighter on the side that is rotating toward the viewer.  At the bottom is shown a much more realistic visual image (albeit not so good quality) from 1994 by Jean-Alain Marck, in which this asymmetry can be seen clearly.

However, the movie image leaves out an important Doppler-like effect (which I’ll explain someday when I understand it 100%). This makes the part of the disk that is rotating toward us bright, and the part rotating away from us dim… and so a real image from this vantage point would be very asymmetric — bright on the left, dim on the right — unlike the movie image.  At the suggestion of Professor Jean-Pierre Luminet I have added, at the bottom of Figure 10, a very early simulation by Jean-Alain Marck that shows this effect.

I mention this because a number of expert science journalists incorrectly explained the M87 image by referring to Gargantua — but that image has essentially nothing to do with the recent black hole `photo’. M87’s accretion disk is certainly not edge-on. The movie’s Gargantua image is taken from the equator, not from near the pole.

Final Remarks: Where a Rotating Black Hole Differs from a Non-Rotating One

Before I quit for the week, I’ll just summarize a few big differences for fast-rotating black holes compared to non-rotating ones.

1) As I’ve just emphasized, what a rotating black hole looks like to a distant observer depends not only on where the matter around the black hole is located but also on how the black hole’s rotation axis is oriented relative to the observer. A pole observer, an equatorial observer, and a near-pole observer see quite different things. (As noted in Figure 8, we are apparently near-south-pole observers for M87’s black hole.)

Let’s assume that the accretion disk lies in the same plane as the black hole’s equator — there are some reasons to expect this. Even then, the story is complex.

2) As I mentioned above, instead of a photon-sphere, there is a ‘photon-zone’ — a region where specially aimed photons can travel round the black hole multiple times. For high-enough spin (greater than about 80% of maximum as I recall), an accretion disk’s inner edge can lie within the photon zone, or even closer to the black hole than the photon zone; and this can cause a filling-in of the ‘shadow’.

3) Depending on the viewing angle, the indirect images of the disk that form the photon ring may not be a circle, and may not be concentric with the direct image of the disk. Only when viewed from along the rotation axis (i.e., above the north or south pole) will the direct and indirect images of the disk all be circular and concentric. We’re not viewing the M87bh on its axis, and that further complicates interpretation of the blurry image.

4) When the viewing angle is not along the rotation axis the image will be asymmetric, brighter on one side than the other. (This is true of EHT’s `photo’.) However, I know of at least four potential causes of this asymmetry, any or all of which might play a role, and the degree of asymmetry depends on properties of the accretion disk and the rotation rate of the black hole, both of which are currently unknown. Claims about the asymmetry made by the EHT folks seem, at least to me, to be based on certain assumptions that I, at least, cannot currently check.

Each of these complexities is a challenge to explain, so I’ll give both you and me a substantial break while I figure out how best to convey what is known (at least to me) about these issues.

by Matt Strassler at June 14, 2019 12:15 PM

June 11, 2019

CERN Bulletin

Learning Classical Music Club

The Learning Classical Music Club offers instrument lessons for children, teenagers and adults: piano, violin, flute, recorder, harpsichord, harp, cello, guitar and clarinet.

We are pleased to offer a new course:

Percussion lessons, in all musical styles, with an introduction to percussion for children.

 

June 11, 2019 04:06 PM

CERN Bulletin

CROQUET & LAWN BOWLS CLUB

The club is looking for new members: Why not try a new sport this summer? The season lasts from April to early November.

Croquet is a game of skill, where accuracy and tactics are equally important. It is good fun and Social Nights on Mondays, ending with a barbecue, are very popular. Play is possible at any time, including weekends.

We have two lawns and a clubhouse on the Prévessin site, with bar and barbecue. All playing equipment is provided. You would just need tennis-type flat-soled shoes.

We organize a range of internal tournaments, we play an annual match against Zürich Croquet Club, and our top players compete in European and World championships.

Coaching is offered on request in both the simpler Golf Croquet and the more tactical Association Croquet version of the game. Lawn bowls is also played, on Wednesday mornings and Thursday afternoons.

For further information please contact:

 

June 11, 2019 04:06 PM

CERN Bulletin

Conference

The Staff Association is pleased to invite you to a conference:


June 11, 2019 04:06 PM

CERN Bulletin

Le Jardin des Particules

A Library in “Le Jardin des Particules” for "the little and the big ones"

One of this year's educational projects was the opening of our library for "Petits et Grands" within the crèche and school “Le Jardin des Particules”.

Why a library?

The library is an extraordinary place for all children; it allows them to open up to the world of books, to develop their imagination and reading.

It is a special living space for story time, relaxation and wonder for the little ones... then for the older ones, it is a place to discover the world of writing and figurative art.

In the library, we learn to observe, search, and share in peace and quiet. It is a very important place for the transmission of knowledge, cultures and also for autonomy and personal enrichment...

Open since January 2019, this quiet and engaging space has already benefited all the children in the facility.

"Shared reading makes you happy" ISJM1

To also allow parents to discover our new library, we took the opportunity of "World Reading Aloud Day" to invite them to the inauguration and share moments of reading in the different spaces organized throughout the day, either under the book tree, or in the huts and tents in the garden, or in the library... of course!

Children, professionals and parents were able to wander from one place to another and settle in, in a calm and friendly atmosphere.

It was a very beautiful day! To be done again!!!!!

 

[1] l’Institut suisse Jeunesse et Médias 

June 11, 2019 04:06 PM

Georg von Hippel - Life on the lattice

Looking for guest bloggers to cover LATTICE 2019
My excellent reason for not attending LATTICE 2018 has become a lot bigger, much better at many things, and (if possible) even more beautiful — which means I won't be able to attend LATTICE 2019 either (I fully expect to attend LATTICE 2020, though). So once again I would greatly welcome guest bloggers willing to cover LATTICE 2019; if you are at all interested, please send me an email and we can arrange to grant you posting rights.

by Georg v. Hippel (noreply@blogger.com) at June 11, 2019 10:28 AM

Georg von Hippel - Life on the lattice

Book Review: "Lattice QCD — Practical Essentials"
There is a new book about Lattice QCD, Lattice Quantum Chromodynamics: Practical Essentials by Francesco Knechtli, Michael Günther and Mike Peardon. At 140 pages, this is a pretty slim volume, so it is obvious that it does not aim to displace time-honoured introductory textbooks like Montvay and Münster, or the newer books by Gattringer and Lang or DeGrand and DeTar. Instead, as suggested by the subtitle "Practical Essentials", and as said explicitly by the authors in their preface, this book aims to prepare beginning graduate students for their practical work in generating gauge configurations and measuring and analysing correlators.

In line with this aim, the authors spend relatively little time on the physical or field-theoretic background; while some more advanced topics such as the Nielsen-Ninomiya theorem and the Symanzik effective theory are touched upon, the treatment of foundational topics is generally quite brief, and some topics, such as lattice perturbation theory or non-perturbative renormalization, are omitted altogether. The focus of the book is on Monte Carlo simulations, for which both the basic ideas and practically relevant algorithms — heatbath and overrelaxation for pure gauge fields, and hybrid Monte Carlo (HMC) for dynamical fermions — are described in some detail, including the RHMC algorithm and advanced techniques such as determinant factorizations, higher-order symplectic integrators, and multiple-timescale integration. The techniques from linear algebra required to deal with fermions are also covered in some detail, from the basic ideas of Krylov-space methods through concrete descriptions of the GMRES and CG algorithms, along with such important preconditioners as even-odd and domain decomposition, to the ideas of algebraic multigrid methods. Stochastic estimation of all-to-all propagators with dilution, the one-end trick and low-mode averaging are explained, as are techniques for building interpolating operators with specific quantum numbers, gauge link and quark field smearing, and the use of the variational method to extract hadronic mass spectra. Scale setting, the Wilson flow, and Lüscher's method for extracting scattering phase shifts are also discussed briefly, as are the basic statistical techniques for data analysis. Each chapter contains a list of references to the literature covering both original research articles and reviews and textbooks for further study.
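For readers who have not yet met it, the core idea of hybrid Monte Carlo is easy to sketch. Below is a minimal toy version (my own, not taken from the book) for a single-variable "theory" with action S(φ) = φ²/2; the lattice-QCD algorithms the book covers dress up exactly this skeleton with gauge fields, pseudofermions, and the integrators and preconditioners mentioned above.

```python
# Minimal toy hybrid Monte Carlo (not from the book): sample exp(-S(phi))
# with S(phi) = phi^2/2 via leapfrog molecular dynamics plus a Metropolis
# accept/reject step that makes the algorithm exact.
import numpy as np

rng = np.random.default_rng(1)
S = lambda phi: 0.5 * phi**2          # action
dS = lambda phi: phi                  # its derivative

def hmc_step(phi, n_md=10, eps=0.3):
    p = rng.normal()                              # refresh the momentum
    h_old = S(phi) + 0.5 * p**2
    phi_new = phi
    p -= 0.5 * eps * dS(phi_new)                  # leapfrog: half kick,
    for _ in range(n_md):                         # then drift/kick pairs
        phi_new += eps * p
        p -= eps * dS(phi_new)
    p += 0.5 * eps * dS(phi_new)                  # trim final kick to a half
    h_new = S(phi_new) + 0.5 * p**2
    if rng.random() < np.exp(h_old - h_new):      # Metropolis test corrects
        return phi_new                            # the integration error
    return phi                                    # reject: keep old config

phi, samples = 0.0, []
for _ in range(20000):
    phi = hmc_step(phi)
    samples.append(phi)
print(np.mean(samples), np.var(samples))  # ~0 and ~1 for this toy action
```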

Overall, I feel that the authors succeed very well at their stated aim of giving a quick introduction to the methods most relevant to current research in lattice QCD in order to let graduate students hit the ground running and get to perform research as quickly as possible. In fact, I am slightly worried that they may turn out to be too successful, since a graduate student having studied only this book could well start performing research, while having only a very limited understanding of the underlying field-theoretical ideas and problems (a problem that already exists in our field in any case). While this in no way detracts from the authors' achievement, and while I feel I can recommend this book to beginners, I nevertheless have to add that it should be complemented by a more field-theoretically oriented traditional textbook for completeness.

___
Note that I have deliberately not linked to the Amazon page for this book. Please support your local bookstore — nowadays, you can usually order online on their websites, and many bookstores are more than happy to ship books by post.

by Georg v. Hippel (noreply@blogger.com) at June 11, 2019 10:27 AM

June 10, 2019

Matt Strassler - Of Particular Significance

Minor Technical Difficulty with WordPress

Hi all — sorry to bother you with an issue you may not even have noticed, but about 18 hours ago a post of mine that was under construction was accidentally published, due to a WordPress bug.  Since it isn’t done yet, it isn’t readable (and has no figures yet) and may still contain errors and typos, so of course I tried to take it down immediately.  But it seems some of you are still getting the announcement of it or are able to read parts of it.  Anyway, I suggest you completely ignore it, because I’m not done working out the scientific details yet, nor have I had it checked by my more expert colleagues; the prose and perhaps even the title may change greatly before the post comes out later this week.  Just hang tight and stay tuned…

by Matt Strassler at June 10, 2019 11:43 PM

Matt Strassler - Of Particular Significance

The Black Hole Photo: Controversy Begins To Bubble Up

It’s been a couple of months since the `photo’ (a false-color image created to show the intensity of radio waves, not visible light) of the black hole at the center of the galaxy M87, taken by the Event Horizon Telescope (EHT) collaboration, was made public.  Before it was shown, I wrote an introductory post explaining what the ‘photo’ is and isn’t.  There I cautioned readers that I thought it might be difficult to interpret the image, and controversies about it might erupt. This concern seems to have been warranted.  This is the first post of several in which I’ll explain the issue as I see it.

So far, the claim that the image shows the vicinity of M87’s black hole (which I’ll call `M87bh’ for short) has not been challenged, and I’m not expecting it to be. But what and where exactly is the material that is emitting the radio waves and thus creating the glow in the image? And what exactly determines the size of the dark region at the center of the image? That’s been a problematic issue from the beginning, but discussion is starting to heat up.  And it’s important: it has implications for the measurement of the black hole’s mass, and for any attempt to estimate its rotation rate.

Over the last few weeks I’ve spent some time studying the mathematics of spinning black holes, talking to my Harvard colleagues who are among the world’s experts on the relevant math and physics, and learning from colleagues who produced the `photo’ and interpreted it.  So I think I can now clearly explain what most journalists and scientist-writers (including me) got wrong at the time of the photo’s publication, and clarify what the photo does and doesn’t tell us.

[I am heavily indebted to Harvard postdocs Alex Lupsasca and Shahar Hadar for assisting me as I studied the formulas and concepts relevant for fast-spinning black holes. Much of what I learned comes from early 1970s papers, especially those by my former colleague Professor Jim Bardeen (see this one written with Press and Teukolsky), and from papers written in the last couple of years, especially this one by my present and former Harvard colleagues.]

What does the EHT Image Show?

Scientists understand the black hole itself — the geometric dimple in space and time — pretty well.  If one knows the mass and the rotation rate of the black hole, and assumes Einstein’s equations for gravity are mostly correct (for which we have considerable evidence, for example from LIGO measurements and elsewhere), then the equations tell us what the black hole does to space and time and how its gravity works.

But for the `photo’, that’s not enough information.  We don’t get to observe the black hole itself (it’s black, after all!)   What the `photo’ shows is a blurry ring of radio waves, emitted from hot material (mostly electrons and protons) somewhere around the black hole — material whose location, velocity, and temperature we do not know. That material and its emission of radio waves are influenced by powerful gravitational forces (whose details depend on the rotation rate of the M87bh, which we don’t know yet) and powerful magnetic fields (whose details we hardly know at all.)  The black hole then bends the paths of the radio waves extensively, even more than does a glass lens, so that where things appear in the image is not where they are actually located.

The only insights we have into this extreme environment come from computer simulations and a few other `photos’ at lower magnification. The simulations are based on well-understood equations, but the equations have to be solved approximately, using methods that may or may not be justified. And the simulations don’t tell you where the matter is; they tell you where the material will go, but only after you make a guess as to where it is located at some initial point in time.  (In the same sense: computers can predict the national weather tomorrow only when you tell them what the national weather was yesterday.) No one knows for sure how accurate or misleading these simulations might be; they’ve been tested against some indirect measurements, but no one can say for sure what flaws they might have.

However, there is one thing we can certainly say, and a paper by Gralla, Holz and Wald has just said it publicly.

When the EHT `photo’ appeared, it was widely reported that it shows the image of a photon sphere at the edge of the shadow (or ‘quasi-silhouette‘, a term I suggested as somewhat less misleading) of the M87bh.

[Like the Earth’s equator, the photon sphere is a location, not an object.  Photons (the particles that make up light, radio waves, and all other electromagnetic radiation) that move along the photon sphere have special, spherical orbits around the black hole.]

Unfortunately, it seems likely that these statements are incorrect; and Gralla et al. have said almost as much in their new preprint (though they were careful not to make a precise claim.)

 

The Photon Sphere Doesn’t Exist

Indeed, if you happened to be reading my posts carefully back then, you probably noticed that I was quite vague about the photon-sphere — I never defined precisely what it was.  You would have been right to read this as a warning sign, for indeed I wasn’t getting clear explanations of it from anyone. A couple of weeks later, as I studied the equations and conversed with colleagues, I learned why: for a rotating black hole, the photon sphere doesn’t really exist.  There’s a broad `photon-zone’ where photons can have special orbits, but you won’t ever see the whole photon zone in an image of a rotating black hole.  Instead a piece of the photon zone will show up as a `photon ring’, a bright thin loop of radio waves.

But this ring is not the edge of anything spherical, is generally not perfectly circular, and is not even perfectly centered on the black hole.

… and the Photon Ring Isn’t What We See…

It seems likely that the M87bh is rotating quite rapidly, so it probably has no photon-sphere.  But does it show a photon ring?  Although some of the EHT folks seemed to suggest the answer was ‘yes’, Gralla et al. suggest the answer is likely `no’ (and my Harvard colleagues were finding the same thing.)  It seems unlikely that the circlet of radio waves that appears in the EHT `photo’ is really an image of M87bh’s photon ring anyway; it’s probably something else.  That’s where controversy starts.

…so the Dark Patch is Probably Not the Full Shadow

The term `shadow’ is confusing (which is why I prefer `quasi-silhouette’) but no matter what you call it, in its ideal form it is a perfectly dark area whose edge is the photon ring.  But in reality the perfectly dark area need not appear so dark after all; it may be filled in by various effects.  Furthermore, since the `photo’ may not show us the photon ring, it’s far from clear that the dark patch in the center is the full shadow anyway.

Step-By-Step Approach

To explain these points will take some time and care, so I’m going to spread the explanation out over several blog posts.  Otherwise it’s just too much information too fast, and I won’t do a good job writing it down.  So bear with me… expect at least three more posts, probably four, and even then there will still be important issues to return to in future.

The Appearance of a Black Hole With Nearby Matter

Because fast-rotating black holes are complicated, I’m going to illuminate the controversy using a non-rotating black hole’s properties, which is also what Gralla et al. mainly do in their paper. It turns out the qualitative conclusion drawn from the non-rotating case largely applies in the rotating case too, at least in the case of the M87bh as seen from our perspective; that’s important because the M87bh is probably rotating at a very good clip. (At the end of this post I’ll briefly describe some key differences between the appearance of non-rotating black holes, rotating black holes observed along the rotation axis, and rotating black holes observed just a bit off the rotation axis.)

A little terminology first: for a rotating black hole there’s a natural definition of the poles and the equator, just as there is for the Earth: there’s an axis of rotation, and the poles are where that axis intersects with the black hole horizon. The equator is the circle that lies halfway between the poles. For a non-rotating black hole, there’s no such axis and no such automatic definition, but it will be useful to define the north pole of the black hole to be the point on the horizon closest to us.

A Single Source of Electromagnetic Waves

Let’s imagine placing a bright light bulb on the same plane as the equator, outside the black hole horizon but rather close to it. (The bulb could emit radio waves or visible light or any other form of electromagnetic waves, at any frequency; for what I’m about to say, it doesn’t matter at all, so I’ll just call it `light’.)  See Figure 1.  Where would the light from the bulb go?

Some of it, heading inward, ends up in the black hole, while some of it heads outward toward distant observers. The gravity of the black hole will bend the path of the light. And here’s something remarkable: a small fraction of the light, aimed just so, can actually spiral around the black hole any number of times before heading out. As a result, you will see the bulb not once but multiple times!

There will be a direct image — light that comes directly to us — from near the bulb’s true location (displaced because gravity bends the light a bit, just as a glass lens will distort the appearance of what’s behind it.) That’s the orange arrow in Figure 1.  But then there will be an indirect image from light that goes halfway (the green arrow in Figure 1) around the black hole before heading in our direction; we will see that image of the bulb on the opposite side of the black hole. Let’s call that the `first indirect image.’ Then there will be a second indirect image from light that orbits the black hole once and comes out near the direct image, but further out; that’s the blue arrow in Figure 1. Then there will be a third indirect image from light that goes around one and a half times (not shown), and so on. Figure 1 shows the paths of the direct, first indirect, and second indirect images of the bulb as they head toward our location at the top of the image.

What you can see in Figure 1 is that both the first and second indirect images are formed by light (er, radio waves) that spends part of its time close to a special radius around the black hole, shown as a dotted line. This, in the case of a non-rotating black hole, is an honest “photon-sphere”.
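For the curious, the numbers behind that special radius: for a non-rotating black hole of mass M it sits at r = 3GM/c², one and a half times the horizon radius, and light that skims it reaches a distant camera at impact parameter b = 3√3 GM/c². Here is a minimal numerical sketch (the M87 mass and distance plugged in below are commonly quoted values, my inputs rather than numbers from this post):

# Photon-sphere radius and photon-ring angular size for a non-rotating
# (Schwarzschild) black hole, using commonly quoted M87 values
# (mass ~6.5e9 suns, distance ~16.8 Mpc; both are assumptions here).
G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30     # SI units

M = 6.5e9 * M_sun             # black hole mass, kg
D = 16.8 * 3.086e22           # distance, m (1 Mpc = 3.086e22 m)

r_g = G * M / c**2            # gravitational radius GM/c^2
r_photon = 3 * r_g            # photon-sphere radius (1.5 Schwarzschild radii)
b_crit = 3**1.5 * r_g         # critical impact parameter, 3*sqrt(3)*GM/c^2

theta_uas = 2 * b_crit / D * 206265e6   # ring diameter in micro-arcseconds
print(f"photon ring diameter seen from Earth: ~{theta_uas:.0f} micro-arcseconds")

That comes out around 40 micro-arcseconds, the right ballpark for the ring in the EHT image; but, as the rest of this post argues, a matching size by itself doesn’t tell you that the light you detect is actually the photon ring.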

In the case of a rotating black hole, something very similar happens when you’re looking at the black hole from its north pole; there’s a special circle then too.  But that circle is not the edge of a photon-sphere!  In general, photons can orbit in a wide region, which I’ll call the “photon-zone.” You’ll see photons from other parts of the photon zone if you look at the black hole not from the north pole but from some other angle.

What our radio-wave camera will see, looking at what is emitted from the light bulb, is shown in Figure 2: an infinite number of increasingly squished `indirect’ images, half on one side of the black hole near the direct image, and the other half on the other side. What is not obvious, but true, is that only the first of the indirect images is bright; this is one of Gralla et al.’s main points. We can, therefore, separate the images into the direct image, the first indirect image, and the remaining indirect images. The total amount of light coming from the direct image and the first indirect image can be large, but the total amount of light from the remaining indirect images is typically (according to Gralla et al.) less than 5% of the light from the first indirect image. And so, unless we have an extremely high resolution camera, we’ll never pick those other images up. Consequently, all we can really hope to detect with something like EHT is the direct image and the first indirect image.
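Where that `less than 5%’ plausibly comes from (my sketch of a standard Schwarzschild strong-lensing estimate, not necessarily the exact calculation in the paper): each extra half-orbit around the photon sphere demagnifies an image by roughly a factor e^(−π) ≈ 0.043, so the images beyond the first indirect one form a tiny geometric series:

import math

# Successive images near the Schwarzschild photon sphere are demagnified
# by ~exp(-pi) per additional half-orbit (standard strong-lensing result;
# assumed here to be the estimate behind the "less than 5%").
f = math.exp(-math.pi)                    # ~0.043 per half-orbit

rest = sum(f**n for n in range(1, 50))    # 2nd, 3rd, ... indirect images,
                                          # relative to the first indirect one
print(f"demagnification per half-orbit: {f:.3f}")
print(f"all remaining indirect images / first indirect image: {rest:.3f}")

The series sums to about 0.045: everything beyond the first indirect image carries only a few percent of its light, matching the figure quoted above.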

WARNING (since this seems to be a common confusion even after two months):

IN ALL MY FIGURES IN THIS POST, AS IN THE BLACK HOLE `PHOTO’ ITSELF, THE COLORS OF THE IMAGE ARE CHOSEN ARBITRARILY (as explained in my first blog post on this subject.) THE `PHOTO’ WAS TAKEN AT A SINGLE, NON-VISIBLE FREQUENCY OF ELECTROMAGNETIC WAVES: EVEN IF WE COULD SEE THAT TYPE OF RADIO WAVE WITH OUR EYES, IT WOULD BE A SINGLE COLOR, AND THE ONLY THING THAT WOULD VARY ACROSS THE IMAGE IS BRIGHTNESS. IN THIS SENSE, A BLACK AND WHITE IMAGE MIGHT BE CLEARER CONCEPTUALLY, BUT IT IS HARDER FOR THE EYE TO PROCESS.

A Circular Source of Electromagnetic Waves

Let’s replace our ordinary bulb by a circular bulb (Figure 3), again set somewhat close to the horizon, sitting in the plane that contains the equator. What would we see now? Figure 4: The direct image is a circle (looking somewhat larger than it really is); outside it sits the first indirect image of the ring; and then come all the other indirect images, looking quite dim and all piling up at one radius. We’re going to call all those piled-up images the “photon ring”.

Importantly, if we replace that circular bulb [shown yellow in Figure 5] by one of a larger or smaller radius [shown blue in Figure 5], then (Figure 6) the inner direct image would look larger or smaller to us, but the indirect images would barely move; they remain very close to the same size no matter how big a circular bulb we choose.

A Disk as a Source of Electromagnetic Waves

And what if you replaced the circular bulb with a disk-shaped bulb, a sort of glowing pancake with a circular hole at its center, as in Figure 7? That’s relevant because black holes are thought to have `accretion disks’ of material (possibly quite thick — I’m showing a very thin one for illustration, but they can be as thick as the black hole is wide, or even thicker) that orbit them. The accretion disk may be the source of the radio waves at M87’s black hole. Well, we can think of the disk as many concentric circles of light placed together. The direct images of the disk (shown on one side of the disk as an orange wash) would form a disk in your camera (Figure 8); the hole at its center would appear larger than it really is due to the bending caused by the black hole’s gravity, but the shape would be the same. However, the indirect images (the first of which is shown going halfway about the black hole as a green wash) would all pile up in the same place from your perspective, forming a bright and quite thin ring. This is the photon ring for a non-spinning black hole — the full set of indirect images of everything that lies at or inside the photon sphere but outside the black hole horizon.

[Gralla et al. call the first indirect image the `lensed ring’ and the remaining indirect images, completely unobservable at EHT, the `photon ring’. I don’t know if their notation will be adopted but you might hear `lensed ring’ referred to in future. In any case, what EHT calls the photon ring includes what Gralla et al. call the lensed ring.]

So the conclusion is that if we had a perfect camera, the direct image of a disk makes a disk, but the indirect images (mainly just the first one, as Gralla et al. emphasize) make a bright, thin ring that may be superposed upon the direct image of the disk, depending on the disk’s shape.

And this conclusion, with some important adjustments, applies also for a spinning black hole viewed from above its north or south pole — its axis of rotation — or from near that axis; I’ll mention the adjustments in a moment.

But EHT is not a perfect camera. To make the black hole image, it had to be pushed to its absolute limits.  Someday we’ll see both the disk and the ring, but right now, they’re all blurred together.  So which one is more important?

From a Blurry Image to Blurry Knowledge

What does a blurry camera do to this simple image? You might think that the disk is so dim that the camera will mainly show you a blurry image of the bright photon ring. But that’s wrong. The ring isn’t bright enough. A simple calculation reveals that blurring the ring makes it dimmer than the disk! The photo, therefore, will show mainly the accretion disk, not the photon ring! This is shown in Figure 9, which you can compare with the Black Hole `photo’ (Figure 10).  (Figure 9 is symmetric around the ring, but the photo is not, for multiple reasons — rotation, viewpoint off the rotation axis, etc. — which I’ll have to defer till another post.)

More precisely, the ring and disk blur together, but the image is dominated by the disk.
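Here is a toy version of that `simple calculation’ (my own construction, not the actual Gralla et al. computation): make a thin ring ten times brighter than a broad disk, blur both with a beam much wider than the ring, and compare the peaks.

import numpy as np

# Toy model: a thin bright ring plus a broad dim disk, blurred with a
# Gaussian "telescope beam" much wider than the ring (my construction,
# not the Gralla et al. calculation).
x = np.linspace(-50, 50, 2001)                       # radial coordinate, arbitrary units
ring = 10.0 * (np.abs(np.abs(x) - 20) < 0.5)         # ring of width 1 at |x|=20, 10x brighter
disk = 1.0 * ((np.abs(x) > 15) & (np.abs(x) < 35))   # broad disk of unit brightness

beam = np.exp(-x**2 / (2 * 5.0**2))                  # beam ~10x wider than the ring
beam /= beam.sum()                                   # normalize: blurring conserves flux

blurred_ring = np.convolve(ring, beam, mode="same")
blurred_disk = np.convolve(disk, beam, mode="same")
print(f"sharp peaks:   ring {ring.max():.1f}, disk {disk.max():.1f}")
print(f"blurred peaks: ring {blurred_ring.max():.2f}, disk {blurred_disk.max():.2f}")

Blurring spreads the ring’s light over the whole beam, knocking its peak down by roughly (ring width)/(beam width), while the already-broad disk barely changes; in this toy example the `dim’ disk ends up with a higher peak than the `bright’ ring.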

Let’s say that again: the black hole `photo’ is likely showing the accretion disk, with the photon ring contributing only some of the light, and therefore the photon ring does not completely and unambiguously determine the radius of the observed dark patch in the `photo’.  In general, the patch may well be smaller than what is usually termed the `shadow’ of the black hole.

This is very important. The photon ring’s radius barely depends on the rotation rate of the black hole, and therefore, if the light were coming from the ring, you’d know (without knowing the black hole’s rotation rate) how big its dark patch will appear for a given mass. You could therefore use the radius of the ring in the photo to determine the black hole’s mass. But the accretion disk’s properties and appearance can vary much more. Depending on the spin of the black hole and the details of the matter that’s spiraling in to the black hole, its radius can be larger or smaller than the photon ring’s radius… making the measurement of the mass both more ambiguous and — if you partially mistook the accretion disk for the photon ring — potentially suspect. Hence: controversy. Is it possible that EHT underestimated their uncertainties, and that their measurement of the black hole mass has more ambiguities, and is not as precise, as they currently claim?

Here’s where the rotation rate is important.  For a non-rotating black hole the accretion disk’s inner edge is expected to lie outside the photon ring, but for a fast-rotating black hole (as M87’s may well be), it will lie inside the photon ring. And if that is true, the dark patch in the EHT image may not be the black hole’s full shadow (i.e. quasi-silhouette). It may be just the inner portion of it, with the outer portion obscured by emission from the accretion disk.

Gralla et al. subtly raise these questions but are careful not to overstate their case, because they have not yet completed their study of rotating black holes. But the question is now in the air. I’m interested to hear what the EHT folks have to say about it, as I’m sure they have detailed arguments in favor of their procedures.

(Rather than the accretion disk, it is also possible that the dominant emission comes from the inner portion of one of the jets that emerges from the vicinity of the black hole. This is another detail that makes the situation more difficult to interpret, but doesn’t change the main point I’m making.)

Why the Gargantua Black Hole From Interstellar is Completely Different

Just as a quick aside, what would you see if an accretion disk were edge-on rather than face-on? Then, in a perfect camera, you’d see something like the famous picture of Gargantua, the black hole from the movie Interstellar — a direct image of the front edge of the disk, and a strongly lensed indirect image of the back side of the disk, appearing both above and below the black hole, as illustrated in Figure 11.

One thing that isn’t included in the Gargantua image from the movie (Figure 12) is a sort of Doppler effect (which I’ll explain someday when I understand it 100%). This makes the part of the disk that is rotating toward us bright, and the part rotating away from us dim… and so the image will be very asymmetric, unlike the movie image. See Figure 13 for what it would really `look’ like to the EHT.

I mention this because a number of expert science journalists incorrectly explained the M87 image by referring to Gargantua — but that image has essentially nothing to do with the recent black hole `photo’. M87’s accretion disk is certainly not edge-on. The movie’s Gargantua image is taken from the equator, not from near the pole, and does not show the Doppler effect correctly (for artistic reasons).

Where a Rotating Black Hole Differs

Before I quit for the day, I’ll just summarize a few big differences for fast-rotating black holes compared to non-rotating ones.

1) What a rotating black hole looks like to a distant observer depends not only on where the matter around the black hole is located but also on how the black hole’s rotation axis is oriented relative to the observer. A north-pole observer, an equatorial observer, and a near-north-pole observer see quite different things. (We are apparently near-south-pole observers for M87’s black hole.)

Let’s assume that the accretion disk lies in the same plane as the black hole’s equator — there are reasons to expect this. Even then, the story is complex.

2) Instead of a photon-sphere, there is what you might call a `photon-zone’ — a region where specially aimed photons can travel round the black hole multiple times. As I mentioned above, for high-enough spin (greater than about 80% of maximum as I recall), an accretion disk’s inner edge can lie within the photon zone, or even closer to the black hole than the photon zone; this leads to multiple indirect images of the disk and a potentially bright photon ring.

3) However, depending on the viewing angle, the indirect images of the disk that form the photon ring may not be a circle, and may not be concentric with the direct image of the disk. Only when viewed from points along the rotation axis (i.e., above the north or south pole) will the direct and indirect image of the disk both be circular and concentric. That further complicates interpretation of the blurry image.

4) When the viewing angle is not along the rotation axis the image will be asymmetric, brighter on one side than the other. (This is true of EHT’s `photo’.) However, I know of at least four potential causes of this asymmetry, any or all of which might play a role, and the degree of asymmetry depends on properties of the accretion disk and the rotation rate of the black hole, both of which are currently unknown. Claims about the asymmetry made by the EHT folks seem, at least to me, to be based on certain assumptions that we cannot currently check.

Each of these complexities is a challenge to explain, so I’ll give both you and me a substantial break while I figure out how best to convey what is known (at least to me) about these issues.

by Matt Strassler at June 10, 2019 04:04 AM

June 09, 2019

Lubos Motl - string vacua and pheno

Direct anthropic bound on the weak scale from supernovæ explosions – or how I learned to stop worrying and love the Higgs
Guest blog by Prof Alessandro Strumia, not only a famous misogynist but also a physicist ;-)

I thank Luboš for hosting this post, where I present a Strangelove-like idea that might be the long-sought explanation of the most puzzling aspect of the Standard Model (SM) of the Fundamental Interactions: the existence of two vastly different mass scales. The electro-weak Fermi scale (set by the Higgs mass, which controls the mass of all other elementary Standard Model particles) is 17 orders of magnitude smaller than the gravitational Planck scale (the mass above which any elementary particle is a black hole, according to Einstein Relativity and Quantum Mechanics).

The puzzle is that, according to many theorists, the Standard Model Higgs is unnatural because its squared mass receives Planck-scale quantum corrections, so that cancellations tuned by one part in \(10^{34}\) are needed to get the small Fermi scale. This naturalness argument led theorists to expect that the Higgs cannot be alone, that it must exist together with new physics that protects its lightness. Theorists proposed concrete examples of new physics that makes the Higgs natural: supersymmetry, technicolor, extra dimensions... Tens of thousands of research publications studied these ideas, and supersymmetry seemed to work so beautifully that most theorists expected that the Fermi scale is the scale of supersymmetry breaking.
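To see where the \(10^{34}\) comes from, here is the standard back-of-envelope version of the argument (my summary, not a detail of the guest post): quantum corrections push the squared Higgs mass up toward the Planck scale, so the required cancellation is roughly the ratio of the squared scales, \[ \frac{v^2}{M_{\rm Planck}^2}\sim\left(\frac{10^{2}\GeV}{10^{19}\GeV}\right)^{2}=10^{-34}, \] i.e. one part in \(10^{34}\), the square of the 17 orders of magnitude quoted above.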



The physics of the fundamental interactions enjoyed a decade of great interest, returning to being a data-driven field, when in 2010 the Large Hadron Collider (LHC) started to explore physics above the Fermi scale. Experiments could finally answer the biggest question identified in decades of theoretical speculations: the origin of the Fermi scale.

The dominant expectation, based on naturalness, was that the LHC would open a golden age for high-energy physics, discovering the Higgs boson together with the new physics that protects its lightness. Some theorists worried that too much new physics could be a background to itself.

This is no longer a worry: the LHC discovered the Higgs and no new physics. Data agree with the Standardissimo Model. The lack of supersymmetry and of any other new physics that makes the Fermi scale “natural” is now considered a serious issue.



Unnaturalness is so important that, before accepting such a conclusion, one would like to check that it persists at higher energies. But the LHC already reached 13 TeV and most of its discovery potential has been exploited. No higher-energy collider is under construction. Getting funds for reaching higher energies and possibly finding more nothing is difficult: a new 100 TeV collider seems to cost 29 billion dollars. It's a lot of money. We're gonna have to earn it. A paper on arXiv today includes gender as a motivation for giving it to CERN. Supersymmetry would have been a better motivation: that's why, before the LHC, physicists dubbed the present situation the “nightmare scenario”. Waiting for 100 TeV data at my 100th birthday, I provisionally assume that the present negative results from the LHC mean that the Fermi scale is unnatural.

Crisis can lead to progress. It is maybe not an exaggeration to see a parallel between the present negative results from the LHC and the negative results of the Michelson-Morley experiment, which in 1887 shook the strong belief in the ugly aether theory, opening a crisis later beautifully resolved by relativity. Today we are confused about naturalness. Nature is surely following some logic. Marx said that «history repeats itself, first as tragedy, then as farce». So maybe this time nature follows an ugly logic missed by physicists who seek something beautiful.

The unnatural smallness of the Fermi scale could be due to anthropic selection. The A-word is not politically correct among physicists, but anthropic selection is like the horseshoe of Bohr: it works even when physicists don't believe in it.

Anthropic selection might sound like science fiction, but it easily follows from our present theoretical understanding of physics. All that is needed is a theory (possibly string theory) that admits as minima of its potential a “landscape” of many vacua (say, \(10^{500}\)) with different values of their vacuum energy and of their Fermi scale (more generally, with different particle physics, as particles are excitations around the vacuum). Thanks to enough diversity, rare vacua have “good” physical laws that allow for complex nuclei, chemistry, stars... and life and observers. Thanks to cosmological inflation, different vacua are realised in different regions of space-time, separated by deserts and walls (known as “cosmological horizons” and “potential barriers”). Thanks to diversity plus separation, our universe can be a region in one “good” vacuum immersed in a bigger “multiverse” of shi**ole regions. As “life” can only form in rare regions with “good” physics, observers worry about naturalness because they measure fundamental constants that seem tuned for their existence, but not more tuned than that.

Weinberg in 1987 proposed an anthropic argument for the smallness of the cosmological constant. Agrawal, Barr, Donoghue and Seckel in 1997 noticed that light fermion masses \(m_f\) have special values that allow for the possible existence of many nuclei, rather than just Hydrogen and/or Helium. More complex chemistry seems needed for “life”. However, this anthropic boundary does not explain the smallness of the Fermi scale \(v\), because fermion masses are obtained in the Standard Model as \(m_f=y_f v\) (dimensionless Yukawa couplings \(y_f\) times Fermi scale \(v\)): an SM-like vacuum with the same “good” fermion masses obtained from a bigger Fermi scale \(v\) times smaller Yukawas \(y_f\) needs less tuning, and would thereby be more likely in a multiverse. So far the Standard Model seems uselessly unnatural even if fermion masses are anthropically selected.

An anthropic explanation of the smallness of the Fermi scale needs an anthropic boundary that directly restricts the Fermi scale. In order to search for such an extra boundary, we look at events where weak interactions play a key role. There are two events where non-trivial physics happens thanks to the same numerical coincidence \[ v\sim M_{\rm Planck}^{1/4} \Lambda_{\rm QCD}^{3/4} \] that involves the Fermi scale, the Planck mass and the naturally small QCD scale (or proton mass).

The first event is Big Bang Nucleosynthesis: Hall, Pinner and Ruderman showed in 2014 that BBN produces comparable Hydrogen and Helium abundances because neutrinos decouple at a temperature comparable to the proton/neutron mass difference, and because BBN happens when the age of the Universe is 3 minutes, comparable to the neutron lifetime. This is puzzling, but it does not seem to lead to an anthropic boundary.

The second event is core-collapse supernova explosions. According to their standard theoretical understanding (partially validated by the 1987 observation of supernova neutrinos), explosions happen because weak interactions of outflowing neutrinos push the material outwards. This pushing is effective because neutrinos are trapped for a few seconds, a time comparable to the gravitational time-scale of a supernova. We argued that explosions disappear if the Fermi scale \(v\) is changed by a factor of a few in either direction. If \(v\) is too small, neutrinos are trapped too strongly and exit too late, after the collapse is already over. If \(v\) is too large, neutrinos are not trapped and immediately fly away with negligible weak interactions.
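A schematic version of the scaling behind this window (my sketch, not the simulation in the paper): the Fermi coupling scales as \(G_F\propto 1/v^2\), weak cross sections as \(G_F^2\propto 1/v^4\), so the neutrino trapping time scales roughly as \(v^{-4}\) and a `factor of few' in \(v\) moves it by orders of magnitude:

# Schematic scaling only (my sketch, not the paper's simulation):
# G_F ~ 1/v^2, neutrino cross sections ~ G_F^2 ~ 1/v^4, so the time
# neutrinos stay trapped in the collapsing core scales roughly as v^-4.
t_observed = 3.0   # seconds: trapping time at the observed Fermi scale,
                   # comparable to the supernova's gravitational timescale

for factor in (1/3, 1/2, 1, 2, 3):       # rescale the Fermi scale v
    t_trapped = t_observed / factor**4
    print(f"v scaled by {factor:5.2f}: neutrinos trapped for ~{t_trapped:8.2f} s")
# Small v: trapped for minutes, released only after the collapse is over.
# Large v: trapped for well under a second, escaping without pushing.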



Core-collapse supernova explosions spread intermediate-mass elements that seem needed by “life”, such as Oxygen. This is illustrated by the following periodic table, where elements are colored according to what produces them, and primary and secondary elements that seem needed by the chemistry of “life” are highlighted.



Core-collapse supernova explosions (in green) seem anthropically relevant.

In conclusion, we might observe an unnaturally small value of the Fermi scale because of anthropic selection: no observers exist in universes where the Fermi scale has larger, more natural, values.

I over-simplified: scientific details and doubts can be found in this talk and in this arXiv preprint in collaboration with D'Amico, Urbano and Xue. We are high-energy physicists, not experts on supernova explosions or astrobiology. I hope that experts can better test the idea: it's important because it might explain the smallness of the Fermi scale, which has been a major topic in fundamental physics for decades.

Actually (despite my jokes) this is a deadly serious topic. Fundamental physics now risks abandoning the high-energy frontier. But our scientific job is seeking the correct understanding, even if it means losing our job.

by Luboš Motl (noreply@blogger.com) at June 09, 2019 12:21 PM

June 06, 2019

Dmitry Podolsky - NEQNET: Non-equilibrium Phenomena

The Modern Hiker’s Checklist: Guide to Avoid Getting Lost

Hiking is a popular and perhaps one of the oldest hobbies in the world. It provides a lot of physical and mental benefits and allows one to immerse in and appreciate nature. However, hiking can be relatively dangerous, depending on where and when (weather and seasons) you’re hiking. In fact, there are about 50,000 Search and Rescue (SAR) missions in the U.S. each year, 36% of which aim to find lost individuals, and 40% to find lost hikers.

To avoid being part of the statistics, here are simple tips and tools you’ll need to know and have for your hike:

Do Your Research

There are many thrill-seekers out there who would intentionally want to get lost, explore, and find their way back when hiking. But for regular and/or inexperienced hikers, it’s best to do proper trip planning before going on a hike. Pull out a map, plot your hiking trails, and know the landmarks and any geographical features that can help you pinpoint your location and your destination.

Know the Routes

There are many pre-planned hiking trails available online through hiking websites or hikers’ social media pages that can help you find the scenic routes, as well as the safest or the more adventurous trails you can follow. Although it would be fun to improvise during the hike, it’s best that you and your team are well-versed in the area. If you’re inexperienced, it’s best to stick to the main trail or stay close to it.

Know How to Read Your Tools and Surroundings

Before we get to the essential tools you need to avoid getting lost, it’s important that you know how to use them. Perhaps the most basic skill you and your group need to learn is how to read a map and your surroundings. Bringing a map, GPS, or any other tool and not being able to read or use them properly renders them virtually useless.

You also need to be aware of your surroundings and any landmarks when you’re hiking. If you’re hiking through a trail that has no visible dirt path, it’s best to observe and mark (with tape or ribbon) any discernible landmarks.

The Tools


Even if you have a phone with location tracking or a GNSS device, a map and a compass should always be part of your essentials (as they don’t rely on batteries nor do they suffer from signal interference). It’s important to have a physical map and compass to guide you to where you need to be and to know where you are.

GNSS

A GNSS device allows you to know where you are and is a perfect companion for your map. It’s important not to skimp on your GNSS device (GPS is the American satellite constellation, GLONASS the Russian one, and BeiDou the Chinese one). Always make sure the GNSS device model you’re ordering or buying has been tested with a multi-element GNSS simulator. This way, you can be assured of its reliability and quality.

It would also be good to have a power bank or better yet, a GNSS device that has a solar charger. If you can’t afford a GNSS device, you can use your phone’s locator or map application.

Apps

Your phone could be an alternative GNSS device. Although most map apps require you to stay online to properly pinpoint your location and guide you, there are actually offline maps and navigators that can serve as your guide even when there’s no signal or Internet connection.

There are many options at your disposal to make sure that you don’t get lost when hiking, so make sure you take extra precautions to avoid being among the many hikers who end up needing rescue during search and rescue missions. To be safe, it’s always best to come prepared by bringing the essentials, such as food, water, clothing, utility knives, and other survival tools and supplies.


by Bertram Mortensen at June 06, 2019 02:12 AM

John Baez - Azimuth

Nonstandard Models of Arithmetic

There seems to be a murky abyss lurking at the bottom of mathematics. While in many ways we cannot hope to reach solid ground, mathematicians have built impressive ladders that let us explore the depths of this abyss and marvel at the limits and at the power of mathematical reasoning at the same time.

This is a quote from Matthew Katz and Jan Reimann’s book An Introduction to Ramsey Theory: Fast Functions, Infinity, and Metamathematics. I’ve been talking to my old friend Michael Weiss about nonstandard models of Peano arithmetic on his blog. We just got into a bit of Ramsey theory. But you might like the whole series of conversations, which are precisely about this murky abyss.

Here it is so far:

Part 1: I say I’m trying to understand ‘recursively saturated’ models of Peano arithmetic, and Michael dumps a lot of information on me. The posts get easier to read after this one!

Part 2: I explain my dream: to show that the concept of ‘standard model’ of Peano arithmetic is more nebulous than many seem to think. We agree to go through Ali Enayat’s paper Standard models of arithmetic.

Part 3: We talk about the concept of ‘standard model’, and the ideas of some ultrafinitists: Alexander Yessenin-Volpin and Edward Nelson.

Part 4: Michael mentions “the theory of true arithmetic”, and I ask what that means. We decide that a short dive into the philosophy of mathematics may be required.

Part 5: Michael explains his philosophies of mathematics, and how they affect his attitude toward the natural numbers and the universe of sets.

Part 6: After explaining my distaste for the Punch-and-Judy approach to the philosophy of mathematics (of which Michael is thankfully not guilty), I point out a strange fact: our views on the infinite cast shadows on our study of the natural numbers. For example: large cardinal axioms help us name larger finite numbers.

Part 7: We discuss Enayat’s concept of “a T-standard model of PA”, where T is some axiom system for set theory. I describe my crazy thought: maybe your standard natural numbers are nonstandard for me. We conclude with a brief digression into Hermetic philosophy: “as above, so below”.

Part 8: We discuss the tight relation between PA and ZFC with the axiom of infinity replaced by its negation. We then chat about Ramsey theory as a warmup for the Paris–Harrington Theorem.

Part 9: Michael sketches the proof of the Paris–Harrington Theorem, which says that a certain rather simple theorem about combinatorics can be stated in PA, and proved in ZFC, but not proved in PA. The proof he sketches builds a nonstandard model in which this theorem does not hold!

by John Baez at June 06, 2019 12:32 AM

June 05, 2019

Clifford V. Johnson - Asymptotia

News from the Front, XVII: Super-Entropic Instability

I'm quite excited because of some new results I got recently, which appeared on the ArXiv today. I've found a new (and I think, possibly important) instability in quantum gravity.

Said more carefully, I've found a sibling to Hawking's celebrated instability that manifests itself as black hole evaporation. This new instability also results in evaporation, driven by Hawking radiation, and it can appear for black holes that might not seem unstable to evaporation in ordinary circumstances (i.e., there's no Hawking channel to decay), but turn out to be unstable upon closer examination, in a larger context. That context is the extended gravitational thermodynamics you've read me talking about here in several previous posts (see e.g. here and here). In that framework, the cosmological constant is dynamical and enters the thermodynamics as a pressure variable, p. It has a conjugate, V, which is a quantity that can be derived once you know the pressure and the mass of the black hole.

Well, Hawking evaporation is a catastrophic quantum phenomenon that follows from the fact that the radiation temperature of a Schwarzschild black hole (the simplest one you can think of) goes inversely with the mass. So the black hole radiates and loses energy, reducing its mass. But that means that it will radiate at even higher temperature, driving its mass down even more. So it will radiate even more, and so on. So it is an instability in the sense that the system drives itself even further away from where it started at every moment. Like a pencil falling over from balancing on a point.
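In equations, the runaway is just T ∝ 1/M with dM/dt ∝ −T^4 × (horizon area) ∝ −1/M^2: the mass reaches zero in finite time while the temperature diverges. A minimal sketch, in units where the rate constant is one (standard Hawking scaling, nothing specific to the new instability described in the post):

# Minimal sketch of the Hawking runaway (standard scaling, in units
# where the rate constant is 1): T ~ 1/M and dM/dt = -1/M^2, so the
# mass plunges to zero in finite time while the temperature blows up.
M, t, dt = 1.0, 0.0, 1e-5
while M > 0.05:
    M -= dt / M**2      # mass loss accelerates as M shrinks
    t += dt
# analytic solution: M(t) = (1 - 3*t)**(1/3), so the lifetime is t = 1/3
print(f"mass down to {M:.2f} at t = {t:.4f} (analytic lifetime: 1/3)")
print(f"temperature ~ 1/M has grown by a factor of {1/M:.0f}")

Restoring units, the lifetime scales as M^3, which is why astrophysical black holes evaporate absurdly slowly while microscopic ones would flare up quickly.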

This is the original quantum instability for gravitational systems. It's, as you probably know, very important. (Although in our universe, the temperature of radiation is so tiny for astrophysical black holes (they have large mass) that the effect is washed out by the local temperature of the universe... But if the universe ever had microscopic black holes, they'd have radiated in this way...)

So very nice, so very 1970s. What have I found recently?

A nice way of expressing the above instability is to simply say [...] Click to continue reading this post


by Clifford at June 05, 2019 02:11 AM

Dmitry Podolsky - NEQNET: Non-equilibrium Phenomena

The Edge of Having Quick Access to Healthcare IT Services

A hospital, clinic, or any health institution can have hundreds or even thousands of patients to tend to every single day. The gist here is that not all of these health service providers can actually monitor a patient’s condition round the clock. This is the reason why there is an increased risk of committing errors or even losing crucial data, which may hinder the speedy recovery or further examination of a sick person. The best way to fend off these negative outcomes is to seek healthcare IT services. They not only provide proper documentation of a patient’s status, but they also promote the welfare of the entire healthcare team.

Your Company’s Weaknesses

It is important to know which aspect of patient monitoring is most prone to errors or difficulty. It may be during the phase of assessing and recording a particular patient’s vital information, or while a sick person is actually undergoing treatment. It may even be when a patient needs to return to the institution for a follow-up checkup. Thus, for a healthcare institution that is expected to keep a well-maintained data-recording system, the first step in addressing a problem is to identify the problem itself.

The Pros and Cons

There can be no question that every healthcare institution has its own line of expertise when it comes to competent data recording, but there is always room for improvement. After identifying which phase of handling patient information needs to be addressed and improved, it is time to weigh whether securing healthcare IT services will really upgrade that phase, or whether it will only leave the other phases of patient monitoring sluggish. Each stage should harmonize with the others to achieve optimum patient management.

Affordability and Sustainability


Healthcare companies need to take all sorts of risks in whatever medical branch they enter. These days, competition is plentiful and fierce. Still, only those companies that can provide safe and quality services will reach the top and attract a significant number of clients. Securing the best kind of IT service for your company means that it should be beneficial not only in the beginning but also as the institution flourishes. Therefore, an IT service provider must not only be reasonably priced; it must also be available whenever there are glitches in the system that require backup and assistance.

Indeed, we are all living in a digital age, and most people have access to the Internet. Healthcare companies also adapt to what technology has to offer, since it is the quickest and most convenient way of obtaining information, not only about their patients but also about the latest developments and innovations in medical science. Getting an established IT service provider is proven to be an advantageous leap toward better health outcomes for patients. It focuses not only on the best interest of each patient but also on supporting the healthcare team who work hand in hand.


by Bertram Mortensen at June 05, 2019 01:00 AM

June 04, 2019

John Baez - Azimuth

Quantum Physics and Logic 2019

There’s another conference involving applied category theory at Chapman University!

• Quantum Physics and Logic 2019, June 9-14, 2019, Chapman University, Beckman Hall 404. Organized by Matthew Leifer, Lorenzo Catani, Justin Dressel, and Drew Moshier.

The QPL series started out being about quantum programming languages, but it later broadened its scope while keeping the same acronym. This conference series now covers quite a range of topics, including the category-theoretic study of physical systems. My students Kenny Courser, Jade Master and Joe Moeller will be speaking there, and I’ll talk about Kenny’s new work on structured cospans as a tool for studying open systems.

Program

The program is here.

Invited talks

• John Baez (UC Riverside), Structured cospans.

• Anna Pappa (University College London), Classical computing via quantum means.

• Joel Wallman (University of Waterloo), TBA.

Tutorials

• Ana Belen Sainz (Perimeter Institute), Bell nonlocality: correlations from principles.

• Quanlong Wang (University of Oxford) and KangFeng Ng (Radboud University), Completeness of the ZX calculus.

by John Baez at June 04, 2019 03:01 PM

June 01, 2019

Lubos Motl - string vacua and pheno

Wolfram on Gell-Mann
I got permission to post a very interesting text by Stephen Wolfram so if you're thirsty for some intellectual adrenaline and if you can survive without tons of writers' humility ;-), keep on reading.

Remembering Murray Gell-Mann (1929–2019), Inventor of Quarks
Guest blog by Stephen Wolfram

First Encounters

In the mid-1970s, particle physics was hot. Quarks were in. Group theory was in. Field theory was in. And so much progress was being made that it seemed like the fundamental theory of physics might be close at hand.

Right in the middle of all this was Murray Gell-Mann—responsible for not one, but most of the leaps of intuition that had brought particle physics to where it was. There’d been other theories, but Murray’s—with their somewhat elaborate and abstract mathematics—were always the ones that seemed to carry the day.

It was the spring of 1978 and I was 18 years old. I’d been publishing papers on particle physics for a few years, and had gotten quite known around the international particle physics community (and, yes, it took decades to live down my teenage-particle-physicist persona). I was in England, but planned to soon go to graduate school in the US, and was choosing between Caltech and Princeton. And one weekend afternoon when I was about to go out, the phone rang. In those days, it was obvious if it was an international call. “This is Murray Gell-Mann”, the caller said, then launched into a monologue about why Caltech was the center of the universe for particle physics at the time.



Perhaps not as starstruck as I should have been, I asked a few practical questions, which Murray dismissed. The call ended with something like, “Well, we’d like to have you at Caltech”.

A few months later I was indeed at Caltech. I remember the evening I arrived, wandering around the empty 4th floor of Lauritsen Lab—the home of Caltech theoretical particle physics. There were all sorts of names I recognized on office doors, and there were two offices that were obviously the largest: “M. Gell-Mann” and “R. Feynman”. (In between them was a small office labeled “H. Tuck”—which by the next day I’d realized was occupied by Helen Tuck, the lively longtime departmental assistant.)



There was a regular Friday lunch in the theoretical physics group, and as soon as a Friday came around, I met Murray Gell-Mann there. The first thing he said to me was, “It must be a culture shock coming here from England”. Then he looked me up and down. There I was in an unreasonably bright yellow shirt and sandals—looking, in fact, quite Californian. Murray seemed embarrassed, mumbled some pleasantry, then turned away.

With Murray at Caltech

I never worked directly with Murray (though he would later describe me to others as “our student”). But I interacted with him frequently while I was at Caltech. He was a strange mixture of gracious and gregarious, together with austere and combative. He had an expressive face, which would wrinkle up if he didn’t approve of what was being said.

Murray always had people and things he approved of, and ones he didn’t—to which he would often give disparaging nicknames. (He would always refer to solid-state physics as “squalid-state physics”.) Sometimes he would pretend that things he did not like simply did not exist. I remember once talking to him about something in quantum field theory called the beta function. His face showed no recognition of what I was talking about, and I was getting slightly exasperated. Eventually I blurted out, “But, Murray, didn’t you invent this?” “Oh”, he said, suddenly much more charming, “You mean g times the psi function. Why didn’t you just say that? Now I understand”. Of course, he had understood all along, but was being difficult about me using the “beta function” term, even though it had by then been standard for years.

I could never quite figure out what it was that made Murray impressed by some people and not others. He would routinely disparage physicists who were destined for great success, and would vigorously promote ones who didn’t seem so promising, and didn’t in fact do well. So when he promoted me, I was on the one hand flattered, but on the other hand concerned about what his endorsement might really mean.

The interaction between Murray Gell-Mann and Richard Feynman was an interesting thing to behold. Both came from New York, but Feynman relished his “working-class” New York accent, while Gell-Mann affected the best pronunciation of words from any language. Both would make surprisingly childish comments about the other.

I remember Feynman insisting on telling me the story of the origin of the word “quark”. He said he’d been talking to Murray one Friday about these hypothetical particles, and in their conversation they’d needed a name for them. Feynman told me he said (no doubt in his characteristic accent), “Let’s call them ‘quacks’”. The next Monday he said Murray came to him very excited and said he’d found the word “quark” in James Joyce. In telling this to me, Feynman then went into a long diatribe about how Murray always seemed to think the names for things were so important. “Having a name for something doesn’t tell you a damned thing”, Feynman said. (Having now spent so much of my life as a language designer, I might disagree). Feynman went on, mocking Murray’s concern for things like what different birds are called. (Murray was an avid bird watcher.)

Meanwhile, Feynman had worked on particles which seemed (and turned out to be) related to quarks. Feynman had called them “partons”. Murray insisted on always referring to them as “put-ons”.

Even though in terms of longstanding contributions to particle physics (if not physics in general) Murray was the clear winner, he always seemed to feel as if he was in the shadow of Feynman, particularly with Feynman’s showmanship. When Feynman died, Murray wrote a rather snarky obituary, saying of Feynman: “He surrounded himself with a cloud of myth, and he spent a great deal of time and energy generating anecdotes about himself”. I never quite understood why Murray—who could have gone to any university in the world—chose to work at Caltech for 33 years in an office two doors down from Feynman.

Murray cared a lot about what people thought of him, but would routinely (and maddeningly to watch) put himself in positions where he would look bad. He was very interested in—and I think very knowledgeable about—words and languages. And when he would meet someone, he would make a point of regaling them with information about the origin of their name (curiously—as I learned only years later—his own name, “Gell-Mann”, had been “upgraded” from “Gellmann”). Now, of course, if there’s one word people tend to know something about, it’s their own name. And, needless to say, Murray sometimes got its origins wrong—and was very embarrassed. (I remember he told a friend of mine named Nathan Isgur a long and elaborate story about the origin of the name “Isgur”, with Nathan eventually saying: “No, it was made up at Ellis Island!”.)

Murray wasn’t particularly good at reading other people. I remember in early 1982 sitting next to Murray in a limo in Chicago that had just picked up a bunch of scientists for some event. The driver was reading the names of the people he’d picked up over the radio. Many were complicated names, which the driver was admittedly butchering. But after each one, Murray would pipe up, and say “No, it’s said ____”. The driver was getting visibly annoyed, and eventually I said quietly to Murray that he should stop correcting him. When we arrived, Murray said to me: “Why did you say that?” He seemed upset that the driver didn’t care about getting the names right.

Occasionally I would ask Murray for advice, though he would rarely give it. When I was first working on one-dimensional cellular automata, I wanted to find a good name for them. (There had been several previous names for the 2D case, one of which—that I eventually settled on—was “cellular automata”.) I considered the name “polymones” (somehow reflecting Leibniz’s monad concept). But I asked Murray—given all his knowledge of words and languages—for a suggestion. He said he didn’t think polymones was much good, but didn’t have any other suggestion.

When I was working on SMP (a forerunner of Mathematica and the Wolfram Language) I asked Murray about it, though at the time I didn’t really understand as I do now the correspondences between human and computational languages. Murray was interested in trying out SMP, and had a computer terminal installed in his office. I kept on offering to show him some things, but he kept on putting it off. I later realized that—bizarrely to me—Murray was concerned about me seeing that he didn’t know how to type. (By the way, at the time, few people did—which is, for example, why SMP, like Unix, had cryptically short command names.)

But alongside the brush-offs and the strangeness, Murray could be personally very gracious. I remember him inviting me several times to his house. I never interacted with either of his kids (who were both not far from my age). But I did interact with his wife, Margaret, who was a very charming English woman. (As part of his dating advice to me, Feynman had explained that both he and Murray had married English women because “they could cope”.)

While I was at Caltech, Margaret got very sick with cancer, and Murray threw himself into trying to find a cure. (He blamed himself for not having made sure Margaret had had more checkups.) It wasn’t long before Margaret died. Murray invited me to the memorial service. But somehow I didn’t feel I could go; even though by then I was on the faculty at Caltech, I just felt too young and junior. I think Murray was upset I didn’t come, and I’ve felt guilty and embarrassed about it ever since.

Murray did me quite a few favors. He was an original board member of the MacArthur Foundation, and I think was instrumental in getting me a MacArthur Fellowship in the very first batch. Later, when I ran into trouble with intellectual property issues at Caltech, Murray went to bat for me—attempting to intercede with his longtime friend Murph Goldberger, who was by then president of Caltech (and who, before Caltech, had been a professor at Princeton, and had encouraged me to go to graduate school there).

I don’t know if I would call Murray a friend, though, for example, after Margaret died, he and I would sometimes have dinner together, at random restaurants around Pasadena. It wasn’t so much that I felt of a different generation from him (which of course I was). It was more that he exuded a certain aloof tension, that made one not feel very sure about what the relationship really was.

A Great Time in Physics

At the end of World War II, the Manhattan Project had just happened, the best and the brightest were going into physics, and “subatomic particles” were a major topic. Protons, neutrons, electrons and photons were known, and together with a couple of hypothesized particles (neutrinos and pions), it seemed possible that the story of elementary particles might be complete.

But then, first in cosmic rays, and later in particle accelerators, new particles started showing up. There was the muon, then the mesons (pions and kaons), and the hyperons (Λ, Σ, Ξ). All were unstable. The muon—which basically nobody understands even today—was like a heavy electron, interacting mainly through electromagnetic forces. But the others were subject to the strong nuclear force—the one that binds nuclei together. And it was observed that this force could generate these particles, though always together (Λ with K, for example). But, mysteriously, the particles could only decay through so-called weak interactions (of the kind involved in radioactive beta decay, or the decay of the muon).

For a while, nobody could figure out why this could be. But then around 1953, Murray Gell-Mann came up with an explanation. Just as particles have “quantum numbers” like spin and charge, he hypothesized that they could have a new quantum number that he called strangeness. Protons, neutrons and pions would have zero strangeness. But the Λ would have strangeness –1, the (positive) kaon strangeness +1, and so on. And total strangeness, he suggested, might be conserved in strong (and electromagnetic) interactions, but not in weak interactions. To suggest a fundamentally new property of particles was a bold thing to do. But it was correct: and immediately Murray was able to explain lots of things that had been observed.
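A standard textbook illustration of this bookkeeping (my example, consistent with the rule described here): \[ \pi^- + p \to K^0 + \Lambda \qquad (S:\ 0+0\to +1-1), \] so strangeness is conserved and the strong interaction can produce the pair copiously; but \[ \Lambda \to p + \pi^- \qquad (S:\ -1\to 0) \] changes strangeness by one unit, so the decay can only proceed weakly and slowly, which is why these particles live long enough to leave visible tracks.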

But how did the weak interaction that was—among other things—responsible for the decay of Murray’s “strange particles” actually work? In 1957, in their one piece of collaboration in all their years together at Caltech, Feynman and Gell-Mann introduced the so-called V-A theory of the weak interaction—and, once again, despite initial experimental evidence to the contrary, it turned out to be correct. (The theory basically implies that neutrinos can only have left-handed helicity, and that the weak interaction involves vector and axial-vector couplings of equal strength, so that parity is violated maximally.)

As soon as the quantum mechanics of electrons and other particles was formulated in the 1920s, people started wondering about the quantum theory of fields, particularly the electromagnetic field. There were issues with infinities, but in the late 1940s—in Feynman’s big contribution—these were handled through the concept of renormalization. The result was that it was possible to start computing things using quantum electrodynamics (QED)—and soon all sorts of spectacular agreements with experiment had been found.

But all these computations worked by looking at just the first few terms in a series expansion in powers of the interaction strength parameter α≃1/137. In 1954, during his brief time at the University of Illinois (from which he went to the University of Chicago, and then Caltech), Murray, together with Francis Low, wrote a paper entitled “Quantum Electrodynamics at Small Distances” which was an attempt to explore QED to all orders in α. In many ways this paper was ahead of its time—and 20 years later, the “renormalization group” that it implicitly defined became very important (and the psi function that it discussed was replaced by the beta function).

While QED could be investigated through a series expansion in the small parameter α≃1/137, no such program seemed possible for the strong interaction (where the effective expansion parameter would be ≃1). So in the 1950s there was an attempt to take a more holistic approach, based on looking at the whole so-called S-matrix defining overall scattering amplitudes. Various properties of the S-matrix were known—notably analyticity with respect to values of particle momenta, and so-called crossing symmetry associated with exchanging particles and antiparticles.

But were these sufficient to understand the properties of strong interactions? Throughout the 1960s, attempts involving more and more elaborate mathematics were made. But things kept on going wrong. The proton-proton total interaction probability was supposed to rise with energy. But experimentally it was seen to level off. So a new idea (the pomeron) was introduced. But then the interaction probability was found to start rising again. So another phenomenon (multiparticle “cuts”) had to be introduced. And so on. (Ironically enough, early string theory spun off from these attempts—and today, after decades of disuse, S-matrix theory is coming back into vogue.)

But meanwhile, there was another direction being explored—in which Murray Gell-Mann was centrally involved. It all had to do with the group-theory-meets-calculus concept of Lie groups. An example of a Lie group is the 3D rotation group, known in Lie group theory as SO(3). A central issue in Lie group theory is to find representations of groups: finite collections, say of matrices, that operate like elements of the group.

Representations of the rotation group had been used in atomic physics to deduce from rotational symmetry a characterization of possible spectral lines. But what Gell-Mann did was to say, in effect, “Let’s just imagine that in the world of elementary particles there’s some kind of internal symmetry associated with the Lie group SU(3). Now use representation theory to characterize what particles will exist”.

And in 1961, he published his eightfold way (named after Buddha’s Eightfold Way) in which he proposed—periodic-table style—that there should be 8+1 types of mesons, and 10+8 types of baryons (hyperons plus nucleons, such as proton and neutron). For the physics of the time, the mathematics involved in this was quite exotic. But the known particles organized nicely into Gell-Mann’s structure. And Gell-Mann made a prediction: that there should be one additional type of hyperon, that he called the Ω, with strangeness –3, and certain mass and decay characteristics.
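In the quark language that came only a few years later, those counts are the dimensions appearing in SU(3) tensor-product decompositions: \[ 3\otimes\bar{3} = 8\oplus 1, \qquad 3\otimes 3\otimes 3 = 10\oplus 8\oplus 8\oplus 1, \] the \(8\oplus 1\) giving the mesons and the \(10\) and \(8\) the baryon decuplet and octet, with the predicted Ω sitting at the strangeness −3 corner of the decuplet. (The decomposition itself is standard representation theory; connecting it to the 1961 counting is my gloss, since quarks were only proposed in 1964.)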

And—sure enough—in 1964, the Ω was observed, and Gell-Mann was on his way to the Nobel Prize, which he received in 1969.

At first the SU(3) symmetry idea was just about what particles should exist. But Gell-Mann wanted also to characterize interactions associated with particles, and for this he introduced what he called current algebra. And, by 1964, from his work on current algebra, he’d realized something else: that his SU(3) symmetry could be interpreted as meaning that things like protons were actually composed of something more fundamental—that he called quarks.

What exactly were the quarks? In his first paper on the subject, Gell-Mann called them “mathematical entities”, although he admitted that, just maybe, they could actually be particles themselves. There were problems with this, though. First, it was thought that electric charge was quantized in units of the electron charge, but quarks would have to have charges of 2/3 and –1/3. But even more seriously, one would have to explain why no free quarks had ever been seen.

It so happened that right when Gell-Mann was writing this, a student at Caltech named George Zweig was thinking of something very similar. Zweig (who was at the time visiting CERN) took a mathematically less elaborate approach, observing that the existing particles could be explained as built from three kinds of “aces”, as he called them, with the same properties as Gell-Mann’s quarks.

Zweig became a professor at Caltech—and I’ve personally been friends with him for more than 40 years. But he never got as much credit for his aces idea as he should (though in 1977 Feynman proposed him for a Nobel Prize), and after a few years he left particle physics and started studying the neurobiology of the ear—and now, in his eighties, has started a quant hedge fund.

Meanwhile, Gell-Mann continued pursuing the theory of quarks, refining his ideas about current algebras. But starting in 1968, there was something new: experiments at particle accelerators able to collide high-energy electrons with protons (“deep inelastic scattering”) observed that sometimes the electrons could suffer large deflections. There were lots of details, particularly associated with relativistic kinematics, but in 1969 Feynman proposed his parton (or, as Gell-Mann called it, “put-on”) model, in which the proton contained point-like “parton” particles.

It was immediately guessed that partons might be quarks, and within a couple of years this had been established. But the question remained of why the quarks should be confined inside particles such as protons. To avoid some inconsistencies associated with the exclusion principle, it had already been suggested that quarks might come in three “colors”. Then in 1973, Gell-Mann and his collaborators suggested that associated with these colors, quarks might have “color charges” analogous to electric charge.

Electromagnetism can be thought of as a gauge field theory associated with the Lie group U(1). Now Gell-Mann suggested that there might be a gauge field theory associated with an SU(3) color group (yes, SU(3) again, but a different application than in the eightfold way, etc.). This theory became known as quantum chromodynamics, or QCD. And, in analogy to the photon, it involves particles called gluons.

Unlike photons, however, gluons directly interact with each other, leading to a much more complex theory. But in direct analogy to Gell-Mann and Low’s 1954 renormalization group computation for QED, in 1973 the beta function (AKA the ψ(g) function of Gell-Mann and Low) for QCD was computed, and was found to show the phenomenon of asymptotic freedom—essentially that QCD interactions get progressively weaker at shorter distances.
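
For reference, the result in question, in its modern one-loop textbook form with n_f quark flavors, is β(g) = −(g^3/16π^2)(11 − 2n_f/3) + higher-order terms. The sign is negative for n_f ≤ 16, and a negative beta function is precisely the statement that the coupling weakens at shorter distances.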

This immediately explained the success of the parton model, but also suggested that if quarks get further apart, the QCD interactions between them get stronger, potentially explaining confinement. (And, yes, this is surely the correct intuition about confinement, although even to this day, there is no formal proof of quark confinement—and I suspect it may have issues of undecidability.)

Through much of the 1960s, S-matrix theory had been the dominant approach to particle physics. But it was having trouble, and the discovery of asymptotic freedom in QCD in 1973 brought field theory back to the fore, and, with it, lots of optimism about what might be possible in particle physics.

Murray Gell-Mann had had an amazing run. For 20 years he had made a series of bold conjectures about how nature might work—strangeness, V-A theory, SU(3), quarks, QCD—and in each case he had been correct, while others had been wrong. He had had one of the more remarkable records of repeated correct intuition in the whole history of science.

He tried to go on. He talked about “grand unification being in the air”, and (along with many other physicists) discussed the possibility that QCD and the theory of weak interactions might be unified in models based on groups like SU(5) and SO(10). He considered supersymmetry—in which there would be particles that are crosses between things like neutrinos and things like gluons. But quick validations of these theories didn’t work out—though even now it’s still conceivable that some version of them might be correct.

But regardless, the mid-1970s were a period of intense activity for particle physics. In 1974, the J/ψ particle was discovered, which turned out to be associated with a fourth kind of quark (charm quark). In 1977, evidence of a fifth quark was seen. Lots was figured out about how QCD works. And a consistent theory of weak interactions emerged that, together with QED and QCD, defined what by the early 1980s had become the modern Standard Model of particle physics that exists today.

I myself got seriously interested in particle physics in 1972, when I was 12 years old. I used to carry around a copy of the little Particle Properties booklet—and all the various kinds of particles became, in a sense, my personal friends. I knew by heart the mass of the Λ, the lifetime of the π0, and a zillion other things about particles. (And, yes, amazingly, I still seem to remember almost all of them—though now they’re all known to much greater accuracy.)

At the time, it seemed to me like the most important discoveries ever were being made: fundamental facts about the fundamental particles that exist in our universe. And I think I assumed that before long everyone would know these things, just as people know that there are atoms and protons and electrons.

But I’m shocked today that almost nobody has, for example, even heard of muons—even though we’re continually bombarded with them from cosmic rays. Talk about strangeness, or the omega-minus, and one gets blank stares. Quarks more people have heard of, though mostly because of their name, with its various uses for brands, etc.

To me it feels a bit tragic. It’s not hard to show Gell-Mann’s eightfold way pictures, and to explain how the particles in them can be made from quarks. It’s at least as easy to explain that there are 6 known types of quarks as to explain about chemical elements or DNA bases. But for some reason—in most countries—all these triumphs of particle physics have never made it into school science curriculums.

And as I was writing this piece, I was shocked at how thin the information on “classic” particle physics is on the web. In fact, in trying to recall some of the history, the most extensive discussion I could find was in an unpublished book I myself wrote when I was 12 years old! (Yes, full of charming spelling mistakes, and a few physics mistakes.)

The Rest of the Story

When I first met Murray in 1978, his great run of intuition successes and his time defining almost everything that was important in particle physics was already behind him. I was never quite sure what he spent his time on. I know he traveled a lot, using physics meetings in far-flung places as excuses to absorb local culture and nature. I know he spent significant time with the JASON physicists-consult-for-the-military-and-get-paid-well-for-doing-so group. (It was a group that also tried to recruit me in the mid-1980s.) I know he taught classes at Caltech—though he had a reputation for being rather disorganized and unprepared, and I often saw him hurrying to class with giant piles of poorly collated handwritten notes.

Quite often I would see him huddled with more junior physicists that he had brought to Caltech with various temporary jobs. Often there were calculations being done on the blackboard, sometimes by Murray. Lots of algebra, usually festooned with tensor indices—with rarely a diagram in sight. What was it about? I think in those days it was most often supergravity—a merger of the idea of supersymmetry with an early form of string theory (itself derived from much earlier work on S-matrix theory).

This was the time when QCD, quark models and lots of other things that Murray had basically created were at their hottest. Yet Murray chose not to work on them—for example telling me after hearing a talk I gave on QCD that I should work on more worthwhile topics.

I’m guessing Murray somehow thought that his amazing run of intuition would continue, and that his new theories would be as successful as his old. But it didn’t work out that way. Though when I would see Murray, he would often tell me of some amazing physics that he was just about to crack, often using elaborate mathematical formalism that I didn’t recognize.

By the time I left Caltech in 1983, Murray was spending much of his time in New Mexico, around Santa Fe and Los Alamos—particularly getting involved in what would become the Santa Fe Institute. In 1984, I was invited to the inaugural workshop discussing what the institute (then called the Rio Grande Institute) might do. It was a strange event, at which I was by far the youngest participant. And as chance would have it, in connection with the republication of the proceedings of that event, I just recently wrote an account of what happened there, which I will soon post.

But in any case, Murray was co-chairing the event, and talking about his vision for a great interdisciplinary university, in which people would study things like the relations between physics and archaeology. He talked in grand flourishes about covering the arts and sciences, the simple and the complex, and linking them all together. It didn’t seem very practical to me—and at some point I asked what the Santa Fe Institute would actually concentrate on if it had to make a choice.

People asked what I would suggest, and I (somewhat reluctantly, because it seemed like everyone had been trying to promote their pet area) suggested “complex systems theory”, and my ideas about the emergence of complexity from things like simple programs. The audio of the event records some respectful exchanges between Murray and me, though more about organizational matters than content. But as it turned out, complex systems theory was indeed what the Santa Fe Institute ended up concentrating on. And Murray himself began to use “complexity” as a label for things he was thinking about.

I tried for years (starting when I first worked on such things, in 1981) to explain to Murray about cellular automata, and about my explorations of the computational universe. He would listen politely, and pay lip service to the relevance of computers and experiments with them. But—as I later realized—he never really understood much at all of what I was talking about.

By the late 1980s, I saw Murray only very rarely. I heard, though, that through an agent I know, Murray had got a big advance to write a book. Murray always found writing painful, and before long I heard that the book had gone through multiple editors (and publishers), and that Murray thought it responsible for a heart attack he had. I had hoped that the book would be an autobiography, though I suspected that Murray might not have the introspection to produce that. (Several years later, a New York Times writer named George Johnson wrote what I considered a very good biography of Murray, which Murray hated.)

But then I heard that Murray’s book was actually going to be about his theory of complexity, whatever that might be. A few years went by, and, eventually, in 1994, to rather modest fanfare, Murray’s book The Quark and the Jaguar appeared. Looking through it, though, it didn’t seem to contain anything concrete that could be considered a theory of complexity. George Zweig told me he’d heard that Murray had left people like me and him out of the index to the book, so we’d have to read the whole book if we wanted to find out what he said about us.

At the time, I didn’t bother. But just now, in writing this piece, I was curious to find out what, if anything, Murray actually did say about me. In the printed book, the index goes straight from “Winos” to Woolfenden. But online I can find that there I am, on page 77 (and, bizarrely, I’m also in the online index): “As Stephen Wolfram has emphasized, [a theory] is a compressed package of information, applicable to many cases”. Yes, that’s true, but is that really all Murray got out of everything I told him? (George Zweig, by the way, isn’t mentioned in the book at all.)

In 2002, I’d finally finished my own decade-long basic science project, and I was getting ready to publish my book A New Kind of Science. In recognition of his early support, I’d mentioned Murray in my long list of acknowledgements in the book, and I thought I’d reach out to him and see if he’d like to write a back-cover blurb. (In the end, Steve Jobs convinced me not to have any back-cover blurbs: “Isaac Newton didn’t have blurbs on the Principia; nor should you on your book”.)

Murray responded politely: “It is exciting to know that your magnum opus, reflecting so much thought, research, and writing, will finally appear. I should, of course, be delighted to receive the book and peruse it, and I might be able to come up with an endorsement, especially since I expect to be impressed”. But he said, “I find it difficult to write things under any conditions, as you probably know”.

I sent Murray the book, and soon thereafter was on the phone with him. It was a strange and contentious conversation. Murray was obviously uncomfortable. I was asking him about what he thought complexity was. He said it was “like a child learning a language”. I asked what that meant. We went back and forth talking about languages. I had the distinct sense that Murray thought he could somehow blind me with facts I didn’t know. But—perhaps unfortunately for the conversation—even though A New Kind of Science doesn’t discuss languages much, my long efforts in computational language design had made me quite knowledgeable about the topic, and in the conversation I made it quite clear that I wasn’t convinced about what Murray had to say.

Murray followed up with an email: “It was good to talk with you. I found the exchange of ideas very interesting. We seem to have been thinking about many of the same things over the last few years, and apparently we agree on some of them and have quite divergent views on others”. He talked about the book, saying that “Obviously, I can’t, in a brief perusal, come to any deep conclusions about such an impressive tome. It is clear, however, that there are many ideas in it with which, if I understand them correctly, I disagree”.

Then he continued: “Also, my own work of the last decade or so is not mentioned anywhere, even though that work includes discussions of the meaning and significance of simplicity and complexity, the role of decoherent histories in the understanding of quantum mechanics, and other topics that play important roles in A New Kind of Science”. (Actually, I don’t think I discussed anything relevant to decoherent histories in quantum mechanics.) He explained that he didn’t want to write a blurb, and ended: “I’m sorry, and I hope that this matter does not present any threat to our friendship, which I hold dear”.

As it turned out, I never talked to Murray about science again. The last time I saw Murray was in 2012 at a peculiar event in New York City for promising high-school students. I said hello. Murray looked blank. I said my name, and held up my name tag. “Do I know you?”, he said. I repeated my name. Still blank. I couldn’t tell if it was a problem of age—or a repeat of the story of the beta function. But, with regret, I walked away.

I have often used Murray as an example of the challenges of managing the arc of a great career. From his twenties to his forties, Murray had the golden touch. His particular way of thinking had success after success, and in many ways, he defined physics for a generation. But by the time I knew him, the easy successes were over. Perhaps it was Murray; more likely, it was just that the easy pickings from his approach were now gone.

I think Murray always wanted to be respected as a scholar and statesman of science—and beyond. But—to his chagrin—he kept on putting himself in situations that played to his weaknesses. He tried to lead people, but usually ended up annoying them. He tried to become a literary-style author, but his perfectionism and insecurity got in the way. He tried to do important work in new fields, but ended up finding that his particular methods didn’t work there. To me, it felt in many ways tragic. He so wanted to succeed as he had before, but he never found a way to do it—and always bore the burden of his early success.

Still, with all his complexities, I am pleased to have known Murray. And though Murray is now gone, the physics he discovered will live on, defining an important chapter in the quest for our understanding of the fundamental structure of our universe.

by Luboš Motl (noreply@blogger.com) at June 01, 2019 06:40 AM

Dmitry Podolsky - NEQNET: Non-equilibrium Phenomena

A Global Helping Hand: Outsource the Right Way

Let’s say you’re a start-up IT firm in Silicon Valley, or a software development firm trying to grow in Kansas City. Your team is overworked, and your eyebags aren’t getting any smaller either. What can you do to get some rest? Outsource!

Outsourcing frees up funds and reduces attrition. Employees engaged in outsourced work can be more productive and less stressed, thanks to cutting out the commute. Sending work abroad also helps small businesses grow in many ways. Corporate giants like Alibaba, Google, and Slack have all used it to grow their businesses.

If you’re not sure that outsourcing is for you, here are a few things to consider.

Cost-benefit Analysis

As with any business decision, you have to do a cost-benefit analysis of whether outsourcing will really benefit your workplace. One example is an analysis of the benefits of outsourcing at a hospital in Uganda. The study found that while the hospital’s expenses grew slightly, the additional expenditure was more than made up for by the better quality of the service.

Benefits aren’t limited to this one hospital either. A U.S. asset management company noted that outsourcing work allowed their in-house employees to focus better on crucial work, while the business still got quality services from outsourced work.

Company Need

As noted above, companies outsource work to fit their needs. For some asset management companies, outsourcing corporate, fiduciary, and administrative services allows them to put all their efforts into their core tasks. Outsourcing can help you keep your employees focused on their core competencies and leave the rest to professionals who probably have more training in that field.

Reputation

When it’s time to choose your outsourcing service or contractor, look for a contractor you can trust. Outsourcing your services can also mean outsourcing your reputation. Keeping an eye on your supply chain, from hiring to actual operations, keeps your risk as low as possible.

It’s also important to maintain a good reputation among contractors. As with local hires, you can’t eschew good manners with the talent you send remote work to. Be polite, use their services consistently, and keep their bottom line protected—and they’ll do the same for you.

Legal Aspects

You’re a law-abiding business—and you’ll definitely make sure that you deal with outsourced work on the same terms. Before entering any contract, you’ll want to review all issues related to third-party work. Read up about U.S. laws on outsourcing work and the laws governing remote and third-party work in the country you plan to create outsourcing arrangements with.

You have to consider the length of your contracts, your outsourcing company’s pending obligations from their previous client, the intellectual property rights and ownership of assets created by your third-party contact, liability, and service levels provided by your service provider.

Thinking ahead helps you stay on top of contract management and in extreme cases, termination and exit options.

While you’re setting up your outsourcing, make sure your in-house employees are happy and productive. Their pay and benefits matter, but some would argue that what really motivates them is recognition of their work by peers and bosses. Keeping employees happy rewards you twice over: happy employees make your customers happy too. Take care of your employees in-house and abroad, and you’re sure to find success.

The post A Global Helping Hand: Outsource the Right Way appeared first on None Equilibrium.

by Bertram Mortensen at June 01, 2019 01:00 AM

May 29, 2019

ZapperZ - Physics and Physicists

How Do You Detect A Neutrino?
Another Don Lincoln video, and this time it is on a topic I had a small involvement in: neutrino detection.



My small part was in the photomultiplier photocathode used to detect the Cerenkov light that is emitted after a neutrino interacts with a nucleus (via exchange of a weak boson). We were trying to design a photodetector with a much larger collection area than the round face of current PMTs.
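
For context, the Cerenkov condition itself is standard: a charged particle radiates when it travels faster than the phase velocity of light in the medium, v > c/n, and the light is emitted on a cone with cos θ = 1/(nβ), where β = v/c. In water (n ≈ 1.33) the threshold is about 0.75c, which is why the fast charged leptons produced in neutrino interactions light up these detectors.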

In any case, this is a good introduction to why neutrinos are so difficult to detect.

Zz.

by ZapperZ (noreply@blogger.com) at May 29, 2019 02:14 PM

May 28, 2019

Dmitry Podolsky - NEQNET: Non-equilibrium Phenomena

An Overview of the Health of Small Businesses in New Jersey

As in the rest of the country, small enterprises make up the bulk of companies in New Jersey. In fact, it’s 99.6%, according to the Small Business Administration (SBA). There are now more than 800,000 small companies, and they employed about 1.8 million people in 2015. That’s nearly half of the total workforce in the state.

Although the sector’s growth in 2017 was slower than that of the United States as a whole, it still grew 2.7% year over year. The number of proprietors went up by 2.7% from 2015 to 2016. The highest gains occurred among firms with fewer than 20 workers, which generated over 33,000 net jobs. Many factors contribute to the growth of small-scale enterprises in New Jersey, including the following:

Diversity of Industries

There are at least five large and in-demand industries booming in the state. These are pharmaceuticals and life sciences, transportation and logistics, information technology, financial services, and advanced manufacturing. IT, for example, covers a wide range of sectors. Some of the most popular ones are software publishers and the design of computer systems. Logistics and transportation also grew with the significant budget set up by the state for infrastructure.

Increase in the Number of Seniors

New Jersey is not immune to a growing phenomenon across the United States: aging. Based on data from the US Census Bureau in 2018, all of its counties experienced an increase in median age. Baby boomers can then become a huge market that small businesses can tap into, offering health-related services such as home care and assistance. It can also be an opportunity for startups that focus on lifestyle and wellness.

Facing the Challenges

However, challenges can stop the growth of small businesses. Some of these are:

Unemployment Rate

The unemployment rate in the state declined significantly between 2012 and 2018. Within six years, it dropped from 9.5% to only 4.1%. Its performance even beat that of New York. But it’s still higher than the national average of 3.8%. Part of the reason is uncompetitive wages and low consumer spending.

Because jobs can be less attractive to workers, many startups may have to become a one-man band or begin with a limited number of employees. Fortunately, solutions such as VoIP and digital marketing can offer reliable support for these businesses until they can take off, pay higher wages, and attract more talent.

Taxes

New Jersey has one of the highest corporate income taxes. For example, for companies earning at least $50,000, the rate is already 7.5%, beating that of North Dakota and Louisiana, among others, in the same bracket.

At 2.4%, the property tax in the state is also well above the national average of 1.9%. Worse, many small businesses struggle with taxes: they don’t know how to calculate them, or when to pay them, and both problems can significantly impact their costs. To help, other businesses, such as legal and accounting firms, are providing programs and different types of assistance.

New Jersey is a gold mine of opportunities, but the challenges can overshadow them. It’s high time more established companies and the government extended the right kind of help to small businesses to accelerate their growth.

The post An Overview of the Health of Small Businesses in New Jersey appeared first on None Equilibrium.

by Bertram Mortensen at May 28, 2019 05:51 AM

May 27, 2019

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

A conference in Paris

This week I’m in Paris, attending a conference in memory of the outstanding British astronomer and theoretician Arthur Stanley Eddington. The conference, which is taking place at the Observatoire de Paris, is designed to celebrate the centenary of Eddington’s famous measurement of the bending of distant starlight by the sun, a key experiment that offered important early support for Einstein’s general theory of relativity. However, there are talks on lots of different topics, from Eddington’s philosophy of science to his work on the physics of stars, from his work in cosmology to his search for a unified field theory. The conference website and programme are here.
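
For the record, the quantity at stake is the deflection of a light ray grazing the solar limb, which general relativity predicts to be 4GM_sun/(c^2 R_sun) ≈ 1.75 arcseconds, twice the value obtained from a naive Newtonian calculation; the 1919 measurements had to resolve this difference at the arcsecond scale.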

The view from my hotel in Denfert-Rochereau

All of the sessions of the conference were excellent, but today was a particular treat with four outstanding talks on the 1919 expedition. In ‘Eddington, Dyson and the Eclipse of 1919’, Daniel Kennefick of the University of Arkansas gave a superb overview of his recent book on the subject. In ‘The 1919 May 29 Eclipse: On Accuracy and Precision’, David Valls-Gabaud of the Observatoire de Paris gave a forensic analysis of Eddington’s calculations. In ‘The 1919 Eclipse: Were the Results Robust?’, Gerry Gilmore of the University of Cambridge described how recent reconstructions of the expedition measurements gave confidence in the results; and in ‘Chasing Mare’s Nests: Eddington and the Early Reception of General Relativity among Astronomers’, Jeffrey Crelinsten of the University of Toronto summarized the doubts expressed by major American astronomical groups in the early 1920s, as described in his excellent book.

(Book covers: Daniel Kennefick’s No Shadow of a Doubt and Jeffrey Crelinsten’s Einstein’s Jury.)

I won’t describe the other sessions, but just note a few things that made this conference the sort of meeting I like best. All speakers were allocated the same speaking time (30 mins including questions); most speakers were familiar with each other’s work; many speakers spoke on the same topic, giving different perspectives; there was plenty of time for further questions and comments at the end of each day. So a superb conference organised by Florian Laguens of the IPC and David Valls-Gabaud of the Observatoire de Paris.

On the way to the conference

In my own case, I gave a talk on Eddington’s role in the discovery of the expanding universe. I have long been puzzled by the fact that Eddington, an outstanding astronomer and strong proponent of the general theory of relativity, paid no attention when his brilliant former student Georges Lemaître suggested that an expanding universe could be derived from general relativity, a phenomenon that could account for the redshifts of the spiral nebulae, the biggest astronomical puzzle of the age. After considering some standard explanations (Lemaître’s status as an early-career researcher, the journal he chose to publish in, and the language of the paper), I added two considerations of my own: (i) the theoretical analysis in Lemaître’s 1927 paper would have been very demanding for a 1927 reader and (ii) the astronomical data that Lemaître relied upon were quite preliminary (Lemaître’s calculation of a redshift/distance coefficient for the nebulae relied upon astronomical distances from Hubble that were established using the method of apparent magnitude, a method that was much less reliable than Hubble’s later observations using the method of Cepheid variables).
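
(For reference, the relation in question is the linear law v ≈ Hd between nebular recession velocity and distance, now called the Hubble law; Lemaître’s 1927 coefficient came out at roughly 600 km/s per megaparsec, far above the modern value of about 70 km/s per megaparsec, largely because of those unreliable distances.)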

Making my points at the Eddington Conference

It’s an interesting puzzle, because it is thought that Lemaître sent a copy of his paper to Eddington in 1927. However, I finished by admitting that there is a distinct possibility that Eddington simply didn’t take the time to read his former student’s paper. Sometimes the most boring explanation is the right one! The slides for my talk can be found here.

All in all, a superb conference.

by cormac at May 27, 2019 07:39 PM

Dmitry Podolsky - NEQNET: Non-equilibrium Phenomena

Advantages of Digital Marketing for Any Business

The Internet age is the best time to start a new business. Now more than ever, there are many ways to reach your customers at a low cost. Social media rewards creativity and awareness of popular trends, and those are traits that do not require too much monetary investment. However, there are certain skills needed to succeed in this new arena.

Fortunately, there are professionals who specialize in digital marketing and practice Search Engine Optimization (SEO). When you hire an SEO company, you can reach customers in Utah or any other place in the world. Beyond that, there are more reasons to start digital marketing.

Digital Marketing Allows You to Compete while Spending Less

When compared to traditional marketing campaigns like print and television ads, digital marketing is definitely much cheaper. It is also very difficult to assess the effectiveness of each traditional medium, since there are no methods for tracking engagement with TV and print ads. That means you are left with no choice but to continue all your marketing campaigns, effective or not.

Digital marketing has the benefit of targeted research and systems of analysis, which can easily produce data so that you can determine which channels are actually effective and gaining traction with the target audience. You are less likely to overspend with digital marketing, and it costs less to begin with.

The use of digital marketing helps smaller companies reach their audience without spending a fortune. With effort, creativity, and ingenuity, you can build a digital presence that rivals those of big corporations. Big companies still have some advantages, since they can afford ads with a wider reach and advanced software for their websites, but a smaller company can employ SEO and marketing specialists to narrow the gap.

Mobile Marketing Expands Engagement

One of the biggest breakthroughs in the digital age came when mobile phones gained Internet capability. This has led to the slow demise of the desktop computer, which is now used mainly for work or gaming.

What it also brought was the phenomenon of every person having Internet access all the time. Nearly everyone, even children, can now access the Web, and many of them already have social media accounts.

How should your marketing campaign adjust to that? The approach should now be more focused. Check the numbers: 80% of Internet users own a smartphone and likely use it to access the Internet, and 90% of them use their phones to run apps. Your digital marketing manager must be aware of this data and analyze and adjust your campaign accordingly.

Social Media Builds Trust Through Digital “Word of Mouth”

Some things never change, like the effectiveness of “word of mouth.” For many people, this is still the most believable review of a product or service. Nowadays, social media acts as the modern-day review board. With social media, you can easily access the opinions of a hundred friends all at once, and online polls within a school or age group can reveal valuable information.

With digital marketing, you can reach more people while spending less money, time, and effort. As in any business, your marketing strategy determines your success, even if you have a quality product. Digital marketing is already a necessity for any business; don’t let yours be left behind.

The post Advantages of Digital Marketing for Any Business appeared first on None Equilibrium.

by Bertram Mortensen at May 27, 2019 05:34 PM

May 25, 2019

Jon Butterworth - Life and Physics

Murray Gell-Mann
Sad to learn that Murray Gell-Mann, pioneer of particle physics and more, has died at the age of 89.  Here is the obituary from Caltech. The first person to bring some order to Hadron Island and point the way to … Continue reading

by Jon Butterworth at May 25, 2019 06:50 AM

May 24, 2019

Clifford V. Johnson - Asymptotia

News from the Front, XVI: Toward Quantum Heat Engines

(The following post is a bit more technical than usual. But non-experts may still find parts helpful.)

A couple of years ago I stumbled on an entire field that I had not encountered before: the study of Quantum Heat Engines. This sounds like an odd juxtaposition of terms since, as I say in the intro to my recent paper:

The thermodynamics of heat engines, refrigerators, and heat pumps is often thought to be firmly the domain of large classical systems, or put more carefully, systems that have a very large number of degrees of freedom such that thermal effects dominate over quantum effects. Nevertheless, there is a thriving field devoted to the study—both experimental and theoretical—of the thermodynamics of machines that use small quantum systems as the working substance.

It is a fascinating field, with a lot of activity going on that connects to fields like quantum information, device physics, open quantum systems, condensed matter, etc.

Anyway, I stumbled on it because, as you may know, I've been thinking (in my 21st-meets-18th century way) about heat engines a lot over the last five years since I showed how to make them from (quantum) black holes, when embedded in extended gravitational thermodynamics. I've written it all down in blog posts before, so go look if interested (here and here).

In particular, it was when working on a project I wrote about here that I stumbled on quantum heat engines, and got thinking about their power and efficiency. It was while working on that project that I had a very happy thought: Could I show that holographic heat engines (the kind I make using black holes), at least a class of them, are actually, in some regime, quantum heat engines? That would be potentially super-useful and, of course, super-fun.
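
For orientation, the quantities at stake are the usual ones: an engine absorbs heat Q_H from a hot reservoir, performs work W, and rejects Q_C = Q_H − W to a cold reservoir, with efficiency η = W/Q_H bounded by the Carnot value 1 − T_C/T_H. The quantum heat engine literature asks how power and efficiency behave when the working substance is a small quantum system rather than a macroscopic gas.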

The blunt headline statement is that they are, obviously, because every stage [...] Click to continue reading this post

The post News from the Front, XVI: Toward Quantum Heat Engines appeared first on Asymptotia.

by Clifford at May 24, 2019 05:16 PM

ZapperZ - Physics and Physicists

Charles Kittel
Physicist Charles Kittel passed away this past May 15th, 2019.

This is one of those names that will not ring a bell with the public. But for most of us in the field of condensed matter physics, his name has soared almost to mythical heights. His book "Introduction to Solid State Physics" has become almost a standard for everyone entering this field of study. That text alone has educated countless physicists who went on to contribute to a field of physics that has a direct impact on our world today. It is also a text that is used (yes, it is still being used in physics classes today) in many electrical engineering courses.

He has been honored with many awards and distinctions, including the Buckley prize from the APS. He may be gone, but his legacy, influence, and certainly his book, will live on.

Zz.

by ZapperZ (noreply@blogger.com) at May 24, 2019 01:34 PM

May 21, 2019

John Baez - Azimuth

The Monoidal Grothendieck Construction

My grad student Joe Moeller is talking at the 4th Symposium on Compositional Structures this Thursday! He’ll talk about his work with Christina Vasilakopoulou, a postdoc here at U.C. Riverside. Together they created a monoidal version of a fundamental construction in category theory: the Grothendieck construction! Here is their paper:

• Joe Moeller and Christina Vasilakopoulou, Monoidal Grothendieck construction.

The monoidal Grothendieck construction plays an important role in our team’s work on network theory, in at least two ways. First, we use it to get a symmetric monoidal category, and then an operad, from any network model. Second, we use it to turn any decorated cospan category into a ‘structured cospan category’. I haven’t said anything about structured cospans yet, but they are an alternative approach to open systems, developed by my grad student Kenny Courser, that I’m very excited about. Stay tuned!

The Grothendieck construction turns a functor

F \colon \mathsf{X}^{\mathrm{op}} \to \mathsf{Cat}

into a category \int F equipped with a functor

p \colon \int F \to \mathsf{X}

The construction is quite simple but there’s a lot of ideas and terminology connected to it: for example a functor F \colon \mathsf{X}^{\mathrm{op}} \to \mathsf{Cat} is called an indexed category since it assigns a category to each object of \mathsf{X}, while the functor p \colon \int F \to \mathsf{X} is of a special sort called a fibration.
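
Concretely (this is the standard description, not anything specific to Joe and Christina’s paper), the category \int F has:

• objects: pairs (x, a) where x is an object of \mathsf{X} and a is an object of F(x);

• morphisms (x,a) \to (y,b): pairs (f, \phi) where f \colon x \to y is a morphism in \mathsf{X} and \phi \colon a \to F(f)(b) is a morphism in F(x), using F(f) \colon F(y) \to F(x).

The functor p \colon \int F \to \mathsf{X} just forgets the second component: p(x,a) = x.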

I think the easiest way to learn more about the Grothendieck construction and this new monoidal version may be Joe’s talk:

• Joe Moeller, Monoidal Grothendieck construction, SYCO4, Chapman University, 22 May 2019.

Abstract. We lift the standard equivalence between fibrations and indexed categories to an equivalence between monoidal fibrations and monoidal indexed categories, namely weak monoidal pseudofunctors to the 2-category of categories. In doing so, we investigate the relation between this global monoidal structure where the total category is monoidal and the fibration strictly preserves the structure, and a fibrewise one where the fibres are monoidal and the reindexing functors strongly preserve the structure, first hinted by Shulman. In particular, when the domain is cocartesian monoidal, lax monoidal structures on a functor to Cat bijectively correspond to lifts of the functor to MonCat. Finally, we give some indicative examples where this correspondence appears, spanning from the fundamental and family fibrations to network models and systems.

To dig deeper, try this talk Christina gave at the big annual category theory conference last year:

• Christina Vasilakopoulou, Monoidal Grothendieck construction, CT2018, University of Azores, 10 July 2018.

Then read Joe and Christina’s paper!

Here is the Grothendieck construction in a nutshell:

by John Baez at May 21, 2019 05:15 AM

May 16, 2019

John Baez - Azimuth

Enriched Lawvere Theories

My grad student Christian Williams and I finished this paper just in time for him to talk about it at SYCO:

• John Baez and Christian Williams, Enriched Lawvere theories for operational semantics.

Abstract. Enriched Lawvere theories are a generalization of Lawvere theories that allow us to describe the operational semantics of formal systems. For example, a graph-enriched Lawvere theory describes structures that have a graph of operations of each arity, where the vertices are operations and the edges are rewrites between operations. Enriched theories can be used to equip systems with operational semantics, and maps between enriching categories can serve to translate between different forms of operational and denotational semantics. The Grothendieck construction lets us study all models of all enriched theories in all contexts in a single category. We illustrate these ideas with the SKI-combinator calculus, a variable-free version of the lambda calculus, and with Milner’s calculus of communicating processes.

When Mike Stay came to U.C. Riverside to work with me about ten years ago, he knew about computation and I knew about category theory, and we started trying to talk to each other. I’d heard that categories and computer science were deeply connected: for example, people like to say that the lambda-calculus is all about cartesian closed categories. But we soon realized something funny was going on here.

Computer science is deeply concerned with processes of computation, and category theory uses morphisms to describe processes… but when cartesian closed categories are applied to the lambda calculus, their morphisms do not describe processes of computation. In fact, the process of computation is effectively ignored!

We decided that to fix this we could use 2-categories where

• objects are types. For example, there could be a type of integers, INT. There could be a type of pairs of integers, INT × INT. There could also be a boring type 1, which represents something there’s just one of.

• morphisms are terms. For example, a morphism f: 1 → INT picks out a specific natural number, like 2 or 3. There could also be a morphism +: INT × INT → INT, called ‘addition’. Combining these, we can get expressions like 2+3.

• 2-morphisms are rewrites. For example, there could be a rewrite going from 2+3 to 5.

Later Mike realized that instead of 2-categories, it can be good to use graph-enriched categories: that is, things like categories where instead of a set of morphisms from one object to another, we have a graph.

In other words: instead of hom-sets, a graph-enriched category has ‘hom-graphs’. The objects of a graph-enriched category can represent types, the vertices of the hom-graphs can represent terms, and the edges of the hom-graphs can represent rewrites.

Mike teamed up with Greg Meredith to write a paper on this:

• Mike Stay and Greg Meredith, Representing operational semantics with enriched Lawvere theories.

Christian decided to write a paper building on this, and I’ve been helping him out because it’s satisfying to see an old dream finally realized—in a much more detailed, beautiful way than I ever imagined!

The key was to sharpen the issue by considering enriched Lawvere theories. Lawvere theories are an excellent formalism for describing algebraic structures obeying equational laws, but they do not specify how to compute in such a structure, for example taking a complex expression and simplifying it using rewrite rules. Enriched Lawvere theories let us study the process of rewriting.

Maybe I should back up a bit. A Lawvere theory is a category T with finite products, generated by a single object t, for ‘type’. Morphisms t^n \to t represent n-ary operations, and commutative diagrams specify equations these operations obey. There is a theory for groups, a theory for rings, and so on. We can specify algebraic structures of a given kind in some ‘context’—that is, in some category C with finite products—by a product-preserving functor \mu: T \to C. For example, if T is the theory of groups and C is the category of sets then such a functor describes a group, but if C is the category of topological spaces then such a functor describes a topological group.
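
For a concrete example (mine, not from the paper): in the Lawvere theory of monoids, the generating morphisms are a multiplication m \colon t^2 \to t and a unit e \colon t^0 \to t, and the commuting diagrams say m \circ (m \times 1_t) = m \circ (1_t \times m) (associativity) and m \circ (e \times 1_t) = 1_t = m \circ (1_t \times e) (unit laws). A product-preserving functor from this theory to the category of sets is precisely an ordinary monoid.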

All this is a simple and elegant form of what computer scientists call denotational semantics: roughly, the study of types and terms, and what they signify. However, Lawvere theories know nothing of operational semantics: that is, how we actually compute. The objects of our Lawvere theory are types and the morphisms are terms. But there are no rewrites going between terms, only equations!

This is where enriched Lawvere theories come in. Suppose we fix a cartesian closed category V, such as the category of sets, or the category of graphs, or the category of posets, or even the category of categories. Then a V-enriched category is a thing like a category, but instead of having a set of morphisms from any object to any other object, it has an object of V. That is, instead of hom-sets it can have hom-graphs, or hom-posets, or hom-categories. If it has hom-categories, then it’s a 2-category—so this setup includes my original dream, but much more!

Our paper explains how to generalize Lawvere theories to this enriched setting, and how to use these enriched Lawvere theories in operational semantics. We rely heavily on previous work, especially by Rory Lucyshyn-Wright, who in turn built on work by John Power and others. But we’re hoping that our paper, which is a bit less high-powered, will be easier for people who are familiar with category theory but not yet enriched categories. The novelty lies less in the math than its applications. Give it a try!

Here is a small piece of a hom-graph in the graph-enriched theory of the SKI combinator calculus, a variable-free version of the lambda calculus invented by Moses Schönfinkel and Haskell Curry back in the 1920s:

(image: a fragment of an SKI hom-graph)

by John Baez at May 16, 2019 12:42 AM

May 14, 2019

Axel Maas - Looking Inside the Standard Model

Acquiring a new field
I have recently started to look into a new field: Quantum gravity. In this entry, I would like to write a bit about how this happens, acquiring a new field, so that you can get an idea of what can lead a scientist to do such a thing. Of course, in future entries I will also write more about what I am doing, but it would be a bit early to do so right now.

Acquiring a new field in science is not something done lightly. One never has enough time for the things one is already doing. And when you enter a new field, stuff is slow. You have to learn a lot of basics, need to get an overview of what has been done and what is still open, not to mention that you have to get used to a different jargon. Thus, one rarely does so lightly.

I have in the past written one entry about how I came to do Higgs physics. That entry was written after the fact. I was looking back, and discussed my motivation as I saw it at that time. It will be interesting to look back at this entry in a few years, and judge what is left of my original motivation, and how I feel about this knowing what has happened since then. But for now, I only know the present. So, let’s get to it.

Quantum gravity is the hypothetical quantum version of the ordinary theory of gravity, so-called general relativity. However, it has withstood quantization for a quite a while, though there has been huge progress in the last 25 years or so. If we could quantize it, its combination with the standard model and the simplest version of dark matter would likely be able to explain almost everything we can observe. Though even then a few open questions appear to remain.

But my interest in quantum gravity does not come from the promise of such a possibility. It has a rather different motivation. My interest started with the Higgs.

I have written many times that we work on an improvement in the way we look at the Higgs. And, by now, in fact of the standard model. In what we get, we see a clear distinction between two concepts: So-called gauge symmetries and global symmetries. As far as we understand the standard model, it appears that global symmetries determine how many particles of a certain type exists, and into which particles they can decay or be combined. Gauge symmetries, however, seem to be just auxiliary symmetries, which we use to make calculations feasible, and they do not have a direct impact on observations. They have, of course, an indirect impact. After all, in which theory which gauge symmetry can be used to facilitate things is different, and thus the kind of gauge symmetry is more a statement about which theory we work on.

Now, if you add gravity, the distinction between the two appears to blur. The reason is that in gravity space itself is different. In particular, you can deform space. And the original distinction between global symmetries and gauge symmetries is their relation to space: a global symmetry is something which stays the same from point to point, while a gauge symmetry allows changes from point to point. Loosely speaking, of course.

In gravity, space is no longer fixed. It can itself be deformed from point to point. But if space itself can be deformed, then nothing can stay the same from point to point. Does the concept of a global symmetry then still make sense? Or do all symmetries become just ‘like’ local symmetries? Or is there still a distinction? And what about general relativity itself? In a particular sense, it can be seen as a theory with a gauge symmetry of space. Does this make everything which lives on space automatically a gauge symmetry? If we want to understand the results of what we did in the standard model, where there is no gravity, in the real world, where there is gravity, then this needs to be resolved. How? Well, my research will hopefully answer this question. But I cannot do it yet.

These questions had already been in the back of my mind for some time. A few years, actually; I do not know exactly how many. As quantum gravity pops up in particle physics occasionally, and I have contact with several people working on it, I was exposed to this again and again. I knew that, eventually, I would need to address it, if nobody else did. So far, nobody has.

But why now? What prompted me to start now with it? As so often in science, it were other scientists.

Last year at the end of November/beginning of December, I took part in a conference in Vienna. I had been invited to talk about our research. The meeting had quite a wide scope, and also present were several people who work on black holes and quantum physics. In this area, one goes, in a sense, halfway towards quantum gravity: one has quantum particles, but they live in a classical gravity theory, with strong gravitational effects. Which is usually a black hole. In such a setup, the deformations of space are fixed. And non-quantum black holes can swallow stuff. This combination appears to imply the following: global symmetries appear to become meaningless, because everything associated with them can vanish in the black hole. However, keeping space deformations fixed means that local symmetries are also fixed. So they appear to become real, instead of auxiliary. Thus, this seems to be quite opposite to our result. And this, and the people doing this kind of research, challenged my view of symmetries. In fact, in such a half-way case, this effect seems to be there.

However, in a full quantum gravity theory, the game changes. Then space deformations become dynamical, too. At the same time, black holes no longer need to swallow stuff forever, because they become dynamical as well. They develop. Thus, answering what really happens requires full quantum gravity. And because of this situation, I decided to start working actively on quantum gravity. I needed to answer whether our picture of symmetries survives, at least approximately, when there is quantum gravity. And to be able to answer such challenges. And so it began.

Within the last six months, I have worked through a lot of the basic stuff. I now have a rough idea of what is going on, and what needs to be done. And I think I see a way in which everything can be reconciled and make sense. It will still take a long time to complete this, but I am very optimistic right now. So optimistic, in fact, that a few days back I gave my first talk in which I discussed these issues including quantum gravity. It will still take time before I have a first real result. But I am quite happy with how things progress.

And that is the story how I started to look at quantum gravity in earnest. If you want to join me in this endeavor: I am always looking for collaboration partners and, of course, students who want to do their thesis work on this subject 😁

by Axel Maas (noreply@blogger.com) at May 14, 2019 03:03 PM

May 12, 2019

Marco Frasca - The Gauge Connection

Is it possible to get rid of exotic matter in warp drive?

In 1994, Miguel Alcubierre proposed a solution of the Einstein equations (see here) describing a space-time bubble moving at arbitrary speed. It is important to notice that no violation of the light speed limit happens, because it is space-time itself that moves, and inside the bubble everything goes as expected. This kind of solution of the Einstein equations has a fundamental drawback: it violates the Weak Energy Condition (WEC) and, in order for it to exist, some exotic matter with negative energy density must exist. Needless to say, nobody has ever seen such matter. There seems to be some clue in the way the Casimir effect works, but this relies on the way one interprets quantum fields rather than being evidence of existence. Besides, since the initial proposal, a great number of studies have been published showing how pathological Alcubierre’s solution can be, also drawing on quantum field theory (e.g. Hawking radiation). So, we are left dreaming of possible interstellar travel, hoping that some smart guy will one day come up with a better solution.
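
For concreteness, the line element Alcubierre wrote down, in its usual form, is ds^2 = −c^2 dt^2 + (dx − v_s f(r_s) dt)^2 + dy^2 + dz^2, where v_s is the speed of the bubble centre and the form factor f(r_s) equals 1 inside the bubble and falls to 0 far from it; it is the energy-momentum tensor needed to source this geometry that violates the WEC.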

Of course, Alcubierre’s solution is rather interesting from a physical point of view, as it belongs with a number of older solutions, like wormholes, time machines and the like, produced by very famous authors such as Kip Thorne, that arise when one imposes a solution and then checks the conditions for its existence. This amounts to determining the energy-momentum tensor which, unavoidably, has negative energy density. Such solutions therefore violate one or another energy condition of the Einstein equations, guaranteeing pathological behaviour. On the other hand, they are the most palatable material for science fiction about possible futures of space and time travel. In these times, when this kind of technology is widely employed by the film industry, firing the fantasy of millions, we would hope that such futures should also be possible.

It is interesting to note the procedure used to obtain these particular solutions. One engineers the metric at a desk and then substitutes it into the Einstein equations to see when it is really a solution. In this way one fixes the energy requirements. Going the other way around, it is difficult to come up out of the blue with a solution of the Einstein equations that provides such particular behaviour. It is also possible that such solutions are simply impossible and always imply a violation of the energy conditions. Some theorems proved over the years seem to prohibit them (e.g. see here). Of course, I am convinced that the energy conditions must be respected if we want to have the physics that describes our universe. They cannot be evaded.

So, turning to the question in the title: could we think of a possible warp drive solution of the Einstein equations without exotic matter? The answer can be yes, of course, provided we are able to recover the York time, or warp factor, in the way Alcubierre obtained it with his pathological solution. At first, this seems an impossible mission. But the space-time bubble we are considering is a very small perturbation, and perturbation theory can come to the rescue, particularly when this perturbation can be locally very strong. In 2005, I proposed such a solution (see here) together with a technique to solve the Einstein equations when the metric is strongly perturbed. My intent at that time was to give a proof of the BKL conjecture. A smart referee suggested that I give an example application of the method. The metric I obtained in this way, by perturbing a Schwarzschild metric, yields a solution with a York time (warp factor) identical to that of the Alcubierre metric. And of course I respect the energy conditions, as I am directly solving the Einstein equations, which do.

The identity between the York times can be obtained provided the form factor proposed by Alcubierre is taken to be 1, but this is just the simplest case. Here is an animation of my warp factor.

Warp factor

The bubble is seen moving as expected along the x direction.

My personal hope is that this will go beyond a mathematical curiosity. On the other hand, it remains to be understood how to produce this kind of perturbation of a given metric in practice. I can think of the Einstein-Maxwell equations solved using perturbation theory. There is a lot of literature and many great contributions on this topic.

Finally, this could give a meaning to the following video by NASA.

by mfrasca at May 12, 2019 05:59 PM

ZapperZ - Physics and Physicists

The Geekiest T-Shirt That I've Ever Bought
I just had to get this one. I found this last week during the members night at Chicago's Adler Planetarium.


The people I was with of course knew that this refers to "force", but they didn't get the connection. So I had to explain to them that Newton's 2nd law, i.e. F = ma, can be expressed in a more general form, i.e. F = dp/dt, where p is the momentum mv. Thus

F = d/dt (mv)
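
Expanding with the product rule shows both why the two forms agree and when they don't:

F = d/dt (mv) = m dv/dt + v dm/dt = ma + v dm/dt

For constant mass the second term vanishes, recovering F = ma; for a system whose mass changes, like a rocket burning propellant, it does not.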

Of course, I'm not surprised that most people, and probably most of Adler's visitors, would not get this unless they know a bit of calculus and have done general physics with calculus. Maybe that was why this t-shirt was on sale! :)

Maybe I'll wear this when I teach kinematics this Fall!

Zz.

by ZapperZ (noreply@blogger.com) at May 12, 2019 02:40 PM

May 10, 2019

ZapperZ - Physics and Physicists

Table-Top Laser Ablation Unit
I was at the Chicago's Field Museum Members Night last night. Of course, there were lots of fascinating things to see, and wonderful scientists and museum staff to talk to. But inevitably, the experimentalist in me can't stop itself from geeking out over neat gadgets.

This was one such gadget. It is, believe it or not, a table-top laser ablation unit. It is no bigger than a shoe box. I was surprised when I was told what it was, and of course, I wanted to learn more. It appears that this is still a prototype, invented by the smart folks at ETH Zurich (of course!). The scientist at the Field Museum uses it to do chemical analysis of trace elements in various objects in the field, where the trace elements are so minute in quantity that x-ray fluorescence would not be effective.


Now, you have to understand that laser ablation systems typically occupy whole rooms! The system's job is to shoot laser pulses at a target, causing evaporation of the material. The vapor then typically migrates to a substrate where it forms a thin film, or coats another object. People often use this technique to make what are known as epitaxial films, where, if the materials are suitably chosen, the new film will have the same crystal structure as the substrate, usually up to a certain thickness.

So that was why I was fascinated to see a laser ablation kit that is incredibly small. Granted, they don't need to do lots of ablating. They only need to sample enough vapor to do elemental analysis. The laser source is commercially bought, but the unit in the picture directs the laser to the target, collects the vapor, and then siphons it to a mass spectrometer or something similar for analysis. The whole thing, with the laser and the analyzer, fits on a table top, making it suitable for remote analysis of items that can't be moved.

And of course, as always, I like to tout the fact that many of these techniques originate in physics research and eventually trickle down to applications elsewhere. But you already know that, don't you?

Zz.

by ZapperZ (noreply@blogger.com) at May 10, 2019 01:12 PM

May 08, 2019

Jon Butterworth - Life and Physics

Mosquitos and Toblerones
A couple of years ago I went to see Lucy Kirkwood’s play Mosquitos at the National Theatre. It starred Olivia Colman and Olivia Williams, who were both brilliant, and was set largely in and around CERN. There was a lot … Continue reading

by Jon Butterworth at May 08, 2019 07:20 PM

May 04, 2019

Clifford V. Johnson - Asymptotia

Endgame Memories

About 2-3 (ish) years ago, I was asked to visit the Disney/Marvel mothership in Burbank for a meeting. I was ushered into the inner workings of the MCU, past a statue of the newly acquired Spidey, and into a room. Present were Christopher Markus and Stephen McFeely, the writers of … Click to continue reading this post

The post Endgame Memories appeared first on Asymptotia.

by Clifford at May 04, 2019 06:34 PM

ZapperZ - Physics and Physicists

Why Does Light Bend When It Enters Glass?
Don Lincoln tackles another "everyday" phenomenon. This time, he tries to give you an "explanation" of why light changes direction when it goes from one medium to another, and why some of the more popular explanations that have been given may be either incomplete or wrong.



Certainly, any undergraduate physics student would have already dealt with the boundary conditions using Maxwell's equations, so this shouldn't be entirely new. However, he skipped rather quickly over something that I thought was not handled thoroughly.

The continuity of the component of E parallel to the boundary is fine. However, Lincoln argued that the reason the perpendicular component of the E field is shorter in glass is the polarization of the material: the sum of the light's E-field and the E-field from the polarization causes the net, resultant E-field to be shorter.

But if the material's polarization can affect the perpendicular component, why doesn't it also affect the parallel component? After all, we assume that the material is isotropic. This he left out, and, at least to me, it made it sound as if the parallel component is not affected. If this is so, why?
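For reference, and not something from the video: the standard textbook boundary conditions at an interface between two linear dielectrics with no free surface charge are

$$E_1^{\parallel} = E_2^{\parallel}, \qquad \varepsilon_1 E_1^{\perp} = \varepsilon_2 E_2^{\perp}.$$

The parallel component is continuous because it follows from Faraday's law applied to a thin loop straddling the surface, with no reference to any charge; the perpendicular component of E jumps precisely because the bound surface charge (the polarization Lincoln invokes) makes E discontinuous while leaving $D = \varepsilon E$ continuous.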

Zz.

by ZapperZ (noreply@blogger.com) at May 04, 2019 02:55 PM

April 30, 2019

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

A Week at The Surf Experience

I don’t often take a sun holiday these days, but I had a fabulous time last week at The Surf Experience in Lagos, Portugal. I’m not an accomplished surfer by any measure, but there is nothing quite like the thrill of catching a few waves in the sea with the sun overhead – a nice change from the indoors world of academia.

Not for the first time, I signed up for a residential course with The Surf Experience in Lagos. Founded by veteran German surfer Dago Lipke, The Surf Experience puts its guests up at the surf lodge Vila Catarina, a lovely villa in the hills above Lagos, complete with beautiful gardens and swimming pool. Sumptuous meals are provided by Dago’s wife Connie, a wonderful cook. Instead of wandering around town trying to find a different restaurant every evening, guests enjoy an excellent meal in a quiet setting in good company, followed by a game of pool or chess. And it really is good company. Guests at TSE tend mainly to hail from Germany and Switzerland, with a sprinkling from France and Sweden, so it’s truly international – quite a contrast to your average package tour (or indeed our college staff room). Not a mention of Brexit, and an excellent opportunity to improve my German. (Is that what you tell yourself? - Ed)


Hanging out at the pool before breakfast


Fine dining at The Surf Experience


A game of cards and a conversation instead of a noisy bar

Of course, no holiday is perfect, and in this case I managed to pick up an injury on the first day. Riding the tiniest wave all the way back to the beach, I got unexpectedly thrown off, hitting my head off the bottom at speed. (This is the most elementary error you can make in surfing, and it risks serious injury, from concussion to spinal fracture.) Luckily, I walked away with nothing more than severe bruising to the neck and chest (as later established by X-ray at the local medical clinic, also an interesting experience). So no life-altering injuries, but like a jockey with a broken rib, I was too sore to get back on the horse for a few days. Instead, I tried Stand Up Paddling for the first time, which I thoroughly enjoyed. It’s more exciting than it looks; I must get my own board for calm days at home.


Stand Up Paddling in Lagos with Kiteschool Portugal

Things got even better towards the end of the week as I began to heal. Indeed, the entire surf lodge had a superb day’s surfing yesterday on beautiful small green waves at a beach right next to town (in Ireland, we very rarely see clean conditions like this, the surf is mainly driven by wind). It was fantastic to catch wave after wave throughout the afternoon, even if clambering back on the board after each wasn’t much fun for yours truly.

This morning, I caught a Ryanair flight back to Dublin from Faro, and I should be back in the office by late afternoon. Oddly enough, I feel enormously refreshed – perhaps it’s the feeling of gradually healing. Hopefully the sensation of being continuously kicked in the ribs will disappear soon and I’ll be back on the waves in June. In the meantime, this week marks a study period for our students before their exams, so it’s an ideal time to prepare my slides for the Eddington conference in Paris later this month.

Update

I caught a slight cold on the way back, so today I’m wandering around college like a lunatic going cough, ‘ouch’, sneeze, ‘ouch’. Maybe it’s karma for flying Ryanair – whatever about indulging in one or two flights a year, it’s a terrible thing to use an airline whose CEO continues to openly deny the findings of climate scientists.

 

by cormac at April 30, 2019 09:49 PM

April 25, 2019

Clifford V. Johnson - Asymptotia

Black Hole Session

Well I did not get the special NYT issue as a keepsake, but this is maybe better: I got to attend the first presentation of the “black hole picture” scientific results at a conference, the APS April meeting (Sunday April 14th 2019). I learned so much! These are snaps of … Click to continue reading this post

The post Black Hole Session appeared first on Asymptotia.

by Clifford at April 25, 2019 06:35 PM

April 24, 2019

Andrew Jaffe - Leaves on the Line

Spring Break?

Somehow I’ve managed to forget my usual end-of-term post-mortem of the year’s lecturing. I think perhaps I’m only now recovering from 11 weeks of lectures, lab supervision and tutoring, alongside a very busy time analysing Planck satellite data.

But a few weeks ago term ended, and I finished teaching my undergraduate cosmology course at Imperial, 27 lectures covering 14 billion years of physics. It was my fourth time teaching the class (I’ve talked about my experiences in previous years here, here, and here), but this will be the last time during this run. Our department doesn’t let us teach a course more than three or four years in a row, and I think that’s a wise policy. I think I’ve arrived at some very good ways of explaining concepts such as the curvature of space-time itself, and difficulties with our models like the 122-or-so-order-of-magnitude cosmological constant problem, but I also noticed that I wasn’t quite as excited as in previous years, having worked up from the experimentation of my first time through in 2009, put it all on a firmer foundation — and written up the lecture notes — in 2010, and refined it over the last two years. This year’s teaching evaluations should come through soon, so I’ll have some feedback, and there are still about six weeks until the students’ understanding — and my explanations — are tested in the exam.

Next year, I’ve got the frankly daunting responsibility of teaching second-year quantum mechanics: 30 lectures, lots of problem sheets, in-class problems to work through, and of course the mindbending weirdness of the subject itself. I’d love to teach them Dirac’s very useful notation which unifies the physical concept of quantum states with the mathematical ideas of vectors, matrices and operators — and which is used by all actual practitioners from advanced undergraduates through working physicists. But I’m told that students find this an extra challenge rather than a simplification. Comments from teachers and students of quantum mechanics are welcome.

by Andrew at April 24, 2019 01:19 AM

April 23, 2019

Georg von Hippel - Life on the lattice

Looking for guest blogger(s) to cover LATTICE 2018
Since I will not be attending LATTICE 2018 for some excellent personal reasons, I am looking for a guest blogger or even better several guest bloggers from the lattice community who would be interested in covering the conference. Especially for advanced PhD students or junior postdocs, this might be a great opportunity to get your name some visibility. If you are interested, drop me a line either in the comment section or by email (my university address is easy to find).

by Georg v. Hippel (noreply@blogger.com) at April 23, 2019 01:18 PM

April 16, 2019

Matt Strassler - Of Particular Significance

The Black Hole `Photo’: Seeing More Clearly

THIS POST CONTAINS ERRORS CONCERNING THE EXISTENCE AND VISIBILITY OF THE SO-CALLED PHOTON-SPHERE AND SHADOW; THESE ERRORS WERE COMMON TO ESSENTIALLY ALL REPORTING ON THE BLACK HOLE ‘PHOTO’.  IT HAS BEEN SUPERSEDED BY THIS POST, WHICH CORRECTS THESE ERRORS AND EXPLAINS THE SITUATION.

Ok, after yesterday’s post, in which I told you what I still didn’t understand about the Event Horizon Telescope (EHT) black hole image (see also the pre-photo blog post in which I explained pedagogically what the image was likely to show and why), today I can tell you that quite a few of the gaps in my understanding are filling in (thanks mainly to conversations with Harvard postdoc Alex Lupsasca and science journalist Davide Castelvecchi, and to direct answers from professor Heino Falcke, who leads the Event Horizon Telescope Science Council and co-wrote a founding paper in this subject).  And I can give you an update to yesterday’s very tentative figure.

First: a very important point, to which I will return in a future post, is that as I suspected, it’s not at all clear what the EHT image really shows.   More precisely, assuming Einstein’s theory of gravity is correct in this context:

  • The image itself clearly shows a black hole’s quasi-silhouette (called a `shadow’ in expert jargon) and its bright photon-sphere where photons [particles of light — of all electromagnetic waves, including radio waves] can be gathered and focused.
  • However, all the light (including the observed radio waves) coming from the photon-sphere was emitted from material well outside the photon-sphere; and the image itself does not tell you where that material is located.  (To quote Falcke: this is `a blessing and a curse’; insensitivity to the illumination source makes it easy to interpret the black hole’s role in the image but hard to learn much about the material near the black hole.) It’s a bit analogous to seeing a brightly shining metal ball while not being able to see what it’s being lit by… except that the photon-sphere isn’t an object.  It’s just a result of the play of the light [well, radio waves] directed by the bending effects of gravity.  More on that in a future post.
  • When you see a picture of an accretion disk and jets drawn to illustrate where the radio waves may come from, keep in mind that it involves additional assumptions — educated assumptions that combine many other measurements of M87’s black hole with simulations of matter, gravity and magnetic fields interacting near a black hole.  But we should be cautious: perhaps not all the assumptions are right.  The image shows no conflicts with those assumptions, but neither does it confirm them on its own.

Just to indicate the importance of these assumptions, let me highlight a remark made at the press conference that the black hole is rotating quickly, clockwise from our perspective.  But (as the EHT papers state) if one doesn’t make some of the above-mentioned assumptions, one cannot conclude from the image alone that the black hole is actually rotating.  The interplay of these assumptions is something I’m still trying to get straight.

Second, if you buy all the assumptions, then the picture I drew in yesterday’s post is mostly correct except (a) the jets are far too narrow, and shown overly disconnected from the disk, and (b) they are slightly mis-oriented relative to the orientation of the image.  Below is an improved version of this picture, probably still not the final one.  The new features: the jets (now pointing in the right directions relative to the photo) are fatter and not entirely disconnected from the accretion disk.  This is important because the dominant source of illumination of the photon-sphere might come from the region where the disk and jets meet.


Updated version of yesterday’s figure: main changes are the increased width and more accurate orientation of the jets.  Working backwards: the EHT image (lower right) is interpreted, using mainly Einstein’s theory of gravity, as (upper right) a thin photon-sphere of focused light surrounding a dark patch created by the gravity of the black hole, with a little bit of additional illumination from somewhere.  The dark patch is 2.5 – 5 times larger than the event horizon of the black hole, depending on how fast the black hole is rotating; but the image itself does not tell you how the photon-sphere is illuminated or whether the black hole is rotating.  Using further assumptions, based on previous measurements of various types and computer simulations of material, gravity and magnetic fields, a picture of the black hole’s vicinity (upper left) can be inferred by the experts. It consists of a fat but tenuous accretion disk of material, almost face-on, some of which is funneled into jets, one heading almost toward us, the other in the opposite direction.  The material surrounds but is somewhat separated from a rotating black hole’s event horizon.  At this radio frequency, the jets and disk are too dim in radio waves to see in the image; only at (and perhaps close to) the photon-sphere, where some of the radio waves are collected and focused, are they bright enough to be easily discerned by the Event Horizon Telescope.

 

by Matt Strassler at April 16, 2019 12:53 PM

Jon Butterworth - Life and Physics

The Universe Speaks in Numbers
I have reviewed Graham Farmelo’s new book for Nature. You can find the full review here. Mathematics, physics and the relationship between the two is a fascinating topic which sparks much discussion. The review only came out this morning and … Continue reading

by Jon Butterworth at April 16, 2019 12:16 PM

April 15, 2019

Matt Strassler - Of Particular Significance

The Black Hole `Photo’: What Are We Looking At?

The short answer: I’m really not sure yet.  [This post is now largely superseded by the next one, in which some of the questions raised below have now been answered.]  EVEN THAT POST WAS WRONG ABOUT THE PHOTON-SPHERE AND SHADOW.  SEE THIS POST FROM JUNE 2019 FOR SOME ESSENTIAL CORRECTIONS THAT WERE LEFT OUT OF ALL REPORTING ON THIS SUBJECT.

Neither are some of my colleagues who know more about the black hole geometry than I do. And at this point we still haven’t figured out what the Event Horizon Telescope experts do and don’t know about this question… or whether they agree amongst themselves.

[Note added: last week, a number of people pointed me to a very nice video by Veritasium illustrating some of the features of black holes, accretion disks and the warping of their appearance by the gravity of the black hole.  However, Veritasium’s video illustrates a non-rotating black hole with a thin accretion disk that is edge-on from our perspective; and this is definitely NOT what we are seeing!]

As I emphasized in my pre-photo blog post (in which I described carefully what we were likely to be shown, and the subtleties involved), this is not a simple photograph of what’s `actually there.’ We all agree that what we’re looking at is light from some glowing material around the solar-system-sized black hole at the heart of the galaxy M87.  But that light has been wildly bent on its path toward Earth, and so — just like a room seen through an old, warped window, and a dirty one at that — it’s not simple to interpret what we’re actually seeing. Where, exactly, is the material `in truth’, such that its light appears where it does in the image? Interpretation of the image is potentially ambiguous, and certainly not obvious.

The naive guess as to what to expect — which astronomers developed over many years, based on many studies of many suspected black holes — is crudely illustrated in the figure at the end of this post.  Material around a black hole has two main components:

  • An accretion disk of `gas’ (really plasma, i.e. a very hot collection of electrons, protons, and other atomic nuclei) which may be thin and concentrated, or thick and puffy, or something more complicated.  The disk extends inward to within a few times the radius of the black hole’s event horizon, the point of no-return; but how close it can be depends on how fast the black hole rotates.
  • Two oppositely-directed jets of material, created somehow by material from the disk being concentrated and accelerated by magnetic fields tied up with the black hole and its accretion disk; the jets begin not far from the event horizon, but then extend outward all the way to the outer edges of the entire galaxy.

But even if this is true, it’s not at all obvious (at least to me) what these objects look like in an image such as we saw Wednesday. As far as I am currently aware, their appearance in the image depends on

  • Whether the disk is thick and puffy, or thin and concentrated;
  • How far the disk extends inward and outward around the black hole;
  • The process by which the jets are formed and where exactly they originate;
  • How fast the black hole is spinning;
  • The orientation of the axis around which the black hole is spinning;
  • The typical frequencies of the radio waves emitted by the disk and by the jets (compared to the frequency, about 230 Gigahertz, observed by the Event Horizon Telescope);

and perhaps other things. I can’t yet figure out what we do and don’t know about these things; and it doesn’t help that some of the statements made by the EHT scientists in public and in their six papers seem contradictory (and I can’t yet say whether that’s because of typos, misstatements by them, or [most likely] misinterpretations by me.)

So here’s the best I can do right now, for myself and for you. Below is a figure that is nothing but an illustration of my best attempt so far to make sense of what we are seeing. You can expect that some fraction of this figure is wrong. Increasingly I believe this figure is correct in cartoon form, though the picture on the left is too sketchy right now and needs improvement.  [NOTE ADDED: AS EXPLAINED IN THIS MORE RECENT POST, THE “PHOTON-SPHERE” DOES NOT EXIST FOR A ROTATING BLACK HOLE; THE “PHOTON-RING” OF LIGHT THAT SURROUNDS THE SHADOW DOES NOT DOMINATE WHAT IS ACTUALLY SEEN IN THE IMAGE; AND THE DARK PATCH IN THE IMAGE ISN’T NECESSARILY THE ENTIRE SHADOW.]  What I’ll be doing this week is fixing my own misconceptions and trying to become clear on what the experts do and don’t know. Experts are more than welcome to set me straight!

In short — this story is not over, at least not for me. As I gain a clearer understanding of what we do and don’t know, I’ll write more about it.

 


My personal confused and almost certainly inaccurate understanding [the main inaccuracy is that the disk and jets are fatter than shown, and connected to one another near the black hole; that’s important because the main illumination source may be the connection region; also jets aren’t oriented quite right] of how one might interpret the black hole image; all elements subject to revision as I learn more. Left: the standard guess concerning the immediate vicinity of M87’s black hole: an accretion disk oriented nearly face-on from Earth’s perspective, jets aimed nearly at and away from us, and a rotating black hole at the center.  The orientation of the jets may not be correct relative to the photo.  Upper right: The image after the radio waves’ paths are bent by gravity.  The quasi-silhouette of the black hole is larger than the `true’ event horizon, a lot of radio waves are concentrated at the ‘photon-sphere’ just outside (brighter at the bottom due to the black-hole spinning clockwise around an axis slightly askew to our line of sight); some additional radio waves from the accretion disk and jets further complicate the image. Most of the disk and jets are too dim to see.  Lower Right: This image is then blurred out by the Event Horizon Telescope’s limitations, partly compensated for by heavy-duty image processing.

 

by Matt Strassler at April 15, 2019 04:02 PM

April 11, 2019

Jon Butterworth - Life and Physics

Exploring the “Higgs Portal”
The Higgs boson is unique. Does it open a door to Dark Matter? All known fundamental particles acquire mass by interacting with the Higgs boson. Actually, more correctly, they interact with a quantum field which is present even in “empty” … Continue reading

by Jon Butterworth at April 11, 2019 04:16 PM

April 10, 2019

Jon Butterworth - Life and Physics

Particle & astro-particle physics annual UK meeting
The annual UK particle physics and astroparticle physics conference was hosted by Imperial this week, and has just finished. Some slightly random highlights. Crisis or no crisis, the future of particle physics is a topic, of course. An apposite quote from … Continue reading

by Jon Butterworth at April 10, 2019 03:25 PM

Clifford V. Johnson - Asymptotia

It’s a Black Hole!

Yes, it’s a black hole all right. Following on from my reflections from last night, I can report that the press conference revelations were remarkable indeed. Above you see the image they revealed! It is the behemoth at the centre of the galaxy M87! This truly groundbreaking image is the … Click to continue reading this post

The post It’s a Black Hole! appeared first on Asymptotia.

by Clifford at April 10, 2019 01:37 PM

April 06, 2019

Andrew Jaffe - Leaves on the Line

@TheMekons make the world alright, briefly, at the 100 Club, London.

by Andrew at April 06, 2019 10:17 AM

March 31, 2019

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

My favourite conference: the Institute of Physics Spring Weekend

This weekend I attended the annual meeting of the Institute of Physics in Ireland. I always enjoy these meetings – more relaxing than a technical conference and a great way of keeping in touch with physicists from all over the country. As ever, there were a number of interesting presentations, plenty of discussions of science and philosophy over breakfast, lunch and dinner, all topped off by the annual award of the Rosse Medal, a highly competitive contest for physics postgraduates from across the nation.


The theme of this year’s meeting was ‘A Climate of Change’ and thus the programme included several talks on the highly topical subject of anthropogenic climate change. First up was ‘The science of climate change’, a cracking talk on the basic physics of climate change by Professor Joanna Haigh of Imperial College London. This was followed by ‘Climate change: where we are post the IPCC report and COP24’, an excellent presentation by Professor John Sweeney of Maynooth University on the latest results from the IPCC. Then it was my turn. In ‘Climate science in the media – a war on information?’, I compared the coverage of climate change in the media with that of other scientific topics such as medical science and big bang cosmology. My conclusion was that climate change is a difficult subject to convey to the public, and matters are not helped by actors who deliberately attempt to muddle the science and downplay the threat. You can find details of the full conference programme here and the slides for my own talk are here.

 

Images of my talk from IoP Ireland 

There followed a panel discussion in which Professor Haigh, Professor Sweeney and I answered questions from the floor on climate science. I don’t always enjoy panel discussions, but I think this one was useful thanks to some excellent chairing by Paul Hardaker of the Institute of Physics.


Panel discussion of the threat of anthropogenic climate change

After lunch, we were treated to a truly fascinating seminar, ‘Tropical storms, hurricanes, or just a very windy day?: Making environmental science accessible through Irish Sign Language’, by Dr Elizabeth Mathews of Dublin City University, on the challenge of making media descriptions of threats such as storms, hurricanes and climate change accessible to deaf people. This was followed by a most informative talk by Dr Bajram Zeqiri of the National Physical Laboratory on the recent redefinition of the kilogram, ‘The measure of all things: redefinition of the kilogram, the kelvin, the ampere and the mole’.

Finally, we had the hardest part of the day, the business of trying to select the best postgraduate posters and choosing a winner from the shortlist. As usual, I was blown away by the standard, far ahead of anything I or my colleagues ever produced. In the end, the Rosse Medal was awarded to Sarah Markham of the University of Limerick for a truly impressive poster and presentation.


Viewing posters at the IoP 2019 meeting; image courtesy of IoP Ireland

All in all, another super IoP Spring weekend. Now it’s back to earth and back to teaching…

by cormac at March 31, 2019 08:51 PM

March 29, 2019

Robert Helling - atdotde

Proving the Periodic Table
The year 2019 is the International Year of the Periodic Table, celebrating the 150th anniversary of Mendeleev's discovery. This prompts me to report on something that I learned in recent years when co-teaching "Mathematical Quantum Mechanics" with mathematicians, in particular with Heinz Siedentop: we know less about the mathematics of the periodic table than I thought.



In high school chemistry you learned that the periodic table comes about because of the orbitals in atoms. There is Hund's rule, which tells you the order in which you have to fill the shells and, within them, the orbitals (s, p, d, f, ...). Then, in your second semester at university, you learn to derive those using Schrödinger's equation: you diagonalise the Hamiltonian of the hydrogen atom and find the shells in terms of the main quantum number $n$ and the orbitals in terms of the angular momentum quantum number $L$, where $L=0$ corresponds to s, $L=1$ to p, and so on. And you fill the orbitals thanks to the Pauli exclusion principle. So, this proves the story of the chemists.

Except that it doesn't: this is only true for the hydrogen atom. But the Hamiltonian for an atom of nuclear charge $Z$ and $N$ electrons (so we allow for ions) is (in convenient units)

$$ H = -\sum_{i=1}^N \Delta_i -\sum_{i=1}^N \frac{Z}{|x_i|} + \sum_{i\lt j}^N\frac{1}{|x_i-x_j|}.$$

The story of the previous paragraph would be true if the last term, the Coulomb interaction between the electrons, were not there. In that case, there is no interaction between the electrons, and we could solve a hydrogen-type problem for each electron separately and then anti-symmetrise the wave functions in the end in a Slater determinant to take into account their fermionic nature. But of course, in the real world, the Coulomb interaction is there, and it contributes like $N^2$ to the energy, so it is of the same order (for almost neutral atoms) as the $ZN$ of the electron-nucleus potential.

The approximation of dropping the electron-electron Coulomb interaction is well known in condensed matter systems, where the resulting theory is known as a "Fermi gas". There it gives you band structure (which is then used to explain how a transistor works).


Band structure in an NPN transistor
Also in that case, you pretend there is only one electron in the world, which feels the periodic electric potential created by the nuclei and all the other electrons; the latter no longer show up in the wave function, only as a charge density.

For atoms, you could try to tell a similar story by taking the inner electrons into account, saying that the most important effect of the ee-Coulomb interaction is to shield the potential of the nucleus, thereby making the effective $Z$ for the outer electrons smaller. This picture would of course be true if there were no correlations between the electrons and if all the inner electrons were spherically symmetric in their distribution around the nucleus and much closer to it than the outer ones. But this sounds more like a daydream than a controlled approximation.

In the condensed matter situation, the standing of the Fermi gas is much better, as there you can invoke renormalisation group arguments: the conductivities you are interested in are at long wavelengths compared to the lattice structure, so we are in the infrared limit, and the Coulomb interaction is indeed an irrelevant term in more than one euclidean dimension (and yes, in 1D the Fermi gas is not the whole story; there is the Luttinger liquid as well).

But for atoms, I don't see how you would invoke such RG arguments.

So what can you do (with regard to actually proving the periodic table)? In our class, we teach how Lieb and Simon showed that in the $N=Z\to \infty$ limit (which in some sense can also be viewed as the semi-classical limit when you bring in $\hbar$ again) the ground state energy $E^Q$ of the Hamiltonian above is in fact approximated by the ground state energy $E^{TF}$ of the Thomas-Fermi model (the simplest of all density functional theories, where instead of the multi-particle wave function you only use the one-particle electronic density $\rho(x)$ and approximate the kinetic energy by a term like $\int \rho^{5/3}$, which is exact for the free Fermi gas in empty space):

$$E^Q(Z) = E^{TF}(Z) + O(Z^2)$$

where by a simple scaling argument (sketched below) $E^{TF}(Z) \sim Z^{7/3}$. More recently, people have computed more terms in this asymptotic expansion, which proceeds in powers of $Z^{-1/3}$: the second term ($O(Z^{6/3})=O(Z^2)$) is known, and people have put a lot of effort into $O(Z^{5/3})$, but it should be clear that this technology is still very, very far from proving anything "periodic", which would be $O(Z^0)$. So don't hold your breath hoping to find the periodic table from this approach.
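For completeness, here is that scaling argument, a standard computation in the units of the Hamiltonian above (with $c_{TF}$ a fixed normalisation constant):

$$E^{TF}[\rho] = c_{TF}\int \rho^{5/3}\,d^3x - Z\int \frac{\rho(x)}{|x|}\,d^3x + \frac12 \iint \frac{\rho(x)\rho(y)}{|x-y|}\,d^3x\,d^3y.$$

Substituting $\rho(x) = Z^2\,\tilde\rho(Z^{1/3}x)$, which keeps $\int\rho = Z$ when $\int\tilde\rho = 1$, each of the three terms picks up exactly a factor $Z^{7/3}$ (for the kinetic term: $Z^{10/3}$ from $\rho^{5/3}$ times $Z^{-1}$ from the change of variables in $d^3x$), so minimising over $\tilde\rho$ gives $E^{TF}(Z) \propto Z^{7/3}$.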

On the other hand, chemistry of the periodic table (where the column is supposed to predict chemical properties of the atom expressed in terms of the orbitals of the "valence electrons") works best for small atoms. So, another sensible limit appears to be to keep $N$ small and fixed and only send $Z\to\infty$. Of course this is not really describing atoms but rather highly charged ions.

The advantage of this approach is that in the above Hamiltonian you can absorb the $Z$ of the electron-nucleus interaction into a rescaling of $x$, which then lets $Z$ reappear in front of the electron-electron term as $1/Z$. In this limit, one can then try to treat the ugly unwanted ee-term perturbatively.

Friesecke (from TUM) and collaborators have made impressive progress in this direction and in this limit they could confirm that for $N < 10$ the chemists' picture is actually correct (with some small corrections). There are very nice slides of a seminar talk by Friesecke on these results.

Of course, as a practitioner, this will not surprise you (after all, chemistry works), but it is nice to know that mathematicians can actually prove things in this direction. Still, there is some way to go, even 150 years after Mendeleev.

by Unknown (noreply@blogger.com) at March 29, 2019 11:02 AM

March 21, 2019

Alexey Petrov - Symmetry factor

CP-violation in charm observed at CERN

 

There is big news from CERN today. It was announced at a conference called Rencontres de Moriond, one of the major yearly conferences in the field of particle physics. One of CERN’s experiments, LHCb, reported an observation (yes, an observation, not merely evidence) of CP-violation in the charm system. Why is it big news and why should you care?

You should care about this announcement because it has something to do with what our Universe looks like. As you look around, you might notice an interesting fact: everything is made of matter. So what about it? Well, one thing is missing from our everyday life: antimatter.

As it turns out, physicists believe that the amounts of matter and antimatter were the same after the Universe was created. So, the $1,110,000 question is: what happened to the antimatter? According to Sakharov’s criteria for baryogenesis (a process that creates more baryons, like protons and neutrons, than anti-baryons), one of the conditions for our Universe to be the way it is would be to have matter particles interact slightly differently from the corresponding antimatter particles. In particle physics this condition is called CP-violation. It has been observed for beauty and strange quarks, but never for charm quarks. As charm quarks are fundamentally different from both beauty and strange ones (electrical charge, mass, ways they interact, etc.), physicists hoped that New Physics, something that we have not yet seen or predicted, might be lurking nearby and could be revealed in charm decays. That is why so much attention has been paid to searches for CP-violation in charm.

Now there are indications that the search is finally over: LHCb announced that they observed CP-violation in charm. Here is their announcement (look for the news item from 21 March 2019). A technical paper can be found here, discussing how LHCb extracted CP-violating observables from a time-dependent analysis of D -> KK and D -> pipi decays.
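For concreteness, the observable in play is the rate asymmetry (a standard definition; the numbers below are quoted from memory, so check the LHCb paper):

$$A_{CP}(f) = \frac{\Gamma(D^0\to f)-\Gamma(\bar D^0\to f)}{\Gamma(D^0\to f)+\Gamma(\bar D^0\to f)}, \qquad \Delta A_{CP} = A_{CP}(K^+K^-) - A_{CP}(\pi^+\pi^-).$$

Taking the difference of the two channels cancels most production and detection asymmetries; the value LHCb reported is $\Delta A_{CP} = (-15.4 \pm 2.9)\times 10^{-4}$, about $5.3\sigma$ away from zero.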

The result is generally consistent with the Standard Model expectations. However, there are theory papers (like this one) that predict the Standard Model result to be about seven times smaller with rather small uncertainty.  There are three possible interesting outcomes:

  1. Experimental result is correct but the theoretical prediction mentioned above is not. Well, theoretical calculations in charm physics are hard and often unreliable, so perhaps that theory paper underestimated the result and its uncertainties.
  2. Experimental result is incorrect but the theoretical prediction mentioned above is correct. Maybe LHCb underestimated their uncertainties?
  3. Experimental result is correct AND the theoretical prediction mentioned above is correct. This is the most interesting outcome: it implies that we see effects of New Physics.

What will it be? Time will tell.

A more technical note on why it is hard to see CP-violation in charm.

One reason that CP-violating observables are hard to see in charm is that they are quite small, at least in the Standard Model. All final/initial state quarks in the D -> KK or D -> pi pi transition belong to the first two generations. The CP-violating asymmetry that arises when we compare time-dependent decay rates of D0 to a pair of kaons or pions with the corresponding decays of the anti-D0 particle can only appear if one picks up the weak phase that is associated with the third generation of quarks (b and t), which is possible via the penguin amplitude. The problem is that the penguin amplitude is small, as the Glashow-Iliopoulos-Maiani (GIM) mechanism makes it proportional to m_b^2 times tiny CKM factors. The strong phases needed for this asymmetry come from the tree-level decays and (supposedly) are largely non-perturbative.

Notice that in B-physics the situation is exactly the opposite: you get the weak phase from the tree-level amplitude, while the penguin one is proportional to m_top^2, so the CP-violating interference is large.

Ask me if you want to know more!

by apetrov at March 21, 2019 06:45 PM

March 16, 2019

Robert Helling - atdotde

Smokescreen: the CDU proposal on "no upload filters"
Sorry, this is one of the occasional posts about German politics. It is my posting to a German-speaking mailing list discussing the upcoming EU copyright directive (it must be stopped in its current form!!! March 23rd is the international protest day); the CDU party has now proposed how to implement it in German law, although so unspecifically that all the problematic details are left out. Here is the post.

Maybe I am too dense, but I do not see where the actual progress over what is being discussed at the EU level is supposed to be, except that the CDU proposal is so vague that all the internal contradictions vanish into the fog. At the EU level, too, the proponents say that one should much rather acquire licences than filter. That in itself is not new.

What is new, at least in this Handelsblatt article (I have not found it anywhere else), is the mention of hash sums ("digital fingerprint"), or is that supposed to be something more like a digital watermark? That would be a real novelty, but it would immediately strangle the whole procedure at birth, since only the original file would be protected (and that would be trivial to determine anyway), while every form of derivative work would fall completely through the cracks, and one could "free" works with a trivial modification. Otherwise we are back to the dubious filters based on AI technology that does not exist today.

The other point is the blanket licence. I would then no longer have to conclude contracts with every rights holder, but only with a "VG Internet" collecting society. But there again is the big prize question: who is it supposed to apply to? The intended targets are, of course, once more YouTube, Google and FB. But how do you formulate that? That is, after all, the central sticking point of the EU directive: everyone needs a blanket licence, unless they are non-commercial (and who is that, really?), or (younger than three years, with few users and little turnover), or they are Wikipedia, or they are GitHub? That would again be the "the internet is like television, with a few big broadcasters and so on, just somehow different" view that people who watch the internet from a distance like to propagate, because it practically flattens everything else. And what about forums or photo hosters? Would they all have to acquire a blanket licence (which would have to be priced high enough to cover all the film and music rights of the entire world)? What prevents this from ending up as a "whoever operates a service on the internet must first buy a paid internet licence before going online" law, which at any non-trivial licence fee would be the end of all grassroots innovation?

It would of course also be interesting to see how the revenues of the VG Internet get distributed. You would be a rogue to suspect that a large part would end up with, say, the press publishers. That would finally be the "take the money away from those who earn it on the internet and give it to those who no longer earn as much" law. In that case the licence fee had best be a percentage of turnover; in other words, an internet tax.

And I will not even start on where this leads if every European country cooks up its own implementation soup this drastically.

All in all, a rather successful coup by the CDU, one that may well take the wind out of the sails of the critics of Article 13 in public opinion by wrapping everything in a vague cloud of fog, while all the problematic rules are likely to hide in the details.

by Unknown (noreply@blogger.com) at March 16, 2019 09:43 AM

March 13, 2019

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

RTE’s Brainstorm; a unique forum for public intellectuals

I have an article today on RTE’s ‘Brainstorm’ webpage, my tribute to Stephen Hawking one year after his death.

"Hawking devoted a great deal of time to science outreach, unusual for a scientist at this level"

I wasn’t aware of the RTE Brainstorm initiative until recently, but I must say it is a very interesting and useful resource. According to the mission statement on the website, “RTÉ Brainstorm is where the academic and research community will contribute to public debate, reflect on what’s happening in the world around us and communicate fresh thinking on a broad range of issues”. A partnership between RTE, University College Cork, NUI Galway, University of Limerick, Dublin City University, Ulster University, Maynooth University and the Technological University of Dublin, the idea is to provide an online platform for academics and other specialists to engage in public discussions of interesting ideas and perspectives in user-friendly language. You can find a very nice description of the initiative in The Irish Times here.

I thoroughly approve of this initiative. Many academics love to complain about the portrayal of their subject (and a lot of other subjects) in the media; this provides a simple and painless way for such people to reach a wide audience. Indeed, I’ve always liked the idea of the public intellectual. Anyone can become a specialist in a given topic; it’s a lot harder to make a meaningful contribution to public debate. Some would say this is precisely the difference between the academic and the public intellectual. Certainly, I enjoy engaging in public discussions of matters close to my area of expertise, and I usually learn something new. That said, a certain humility is an absolute must – it’s easy to forget that detailed knowledge of a subject does not automatically bestow the wisdom of Solomon. Indeed, there is nothing worse than listening to a specialist use their expertise to bully others into submission – it’s all about getting the balance right, listening as well as informing…

by cormac at March 13, 2019 07:28 PM

March 06, 2019

Robert Helling - atdotde

Challenge: How to talk to a flat earther?
Further down the rabbit hole: over lunch I finished watching "Behind the Curve", a Netflix documentary on people who believe the earth is a flat disk. According to them, the north pole is in the center, while Antarctica is an ice wall at the boundary. The sun and moon are much closer and fly above this disk, while the stars are on some huge dome, like in a planetarium. NASA is a fake agency promoting the doctrine, and airlines must be part of the conspiracy, as they know that you cannot fly directly between continents in the southern hemisphere (really?).

These people happily use GPS for navigation but have a general mistrust of the science (and the teachers) of at least two centuries.

Besides the obvious "I don't see curvature of the horizon", they are even conducting experiments to prove their point (struggling with laser beams that are not as parallel over miles of distance as they had hoped). So at least some of them might be open to empirical disproof.

So here is my challenge: which experiment would you conduct with them to convince them? Warning: everything involving stuff disappearing at the horizon (ships sailing away, being able to see further from a tower) is complicated by non-trivial refraction in the atmosphere, which would very likely render the observation inconclusive. The sun being at a different elevation (height) at different places might also be explained by it being much closer, and a Foucault pendulum might be too indirect to really convince them (plus it requires some non-elementary math to analyse).

My personal solution is to point to the observation that the elevation of Polaris (around which, I hope, they can agree the night sky rotates) is given by the geographical latitude: at the north pole it is right above you, but it has to go down the further south you get. I cannot see how this could be reconciled with a dome projection.
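To make this quantitative, here is a small back-of-the-envelope Python script comparing the two predictions. The dome height is a free parameter of the flat model; the 5000 km below is only an assumption for illustration, not a measurement:

import numpy as np

# Elevation of Polaris vs. geographic latitude.
# Globe: elevation = latitude, by elementary geometry.
# Flat disk: Polaris sits on a dome at height H_DOME above the north
# pole, so its elevation is arctan(H_DOME / d), where d is the distance
# from the observer to the pole along the disk.
R_EARTH = 6371.0  # km
H_DOME = 5000.0   # km, assumed dome height (free parameter of the flat model)

for lat in range(10, 90, 10):
    d = R_EARTH * np.radians(90 - lat)  # km from observer to the pole
    flat = np.degrees(np.arctan2(H_DOME, d))
    print(f"latitude {lat:2d}: globe predicts {lat:2d} deg, flat disk {flat:5.1f} deg")

The globe prediction is exactly linear in latitude; no choice of dome height reproduces that, which is the whole point of the Polaris argument.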

How would you approach this? The rules are that it must only involve observations available to everyone: no spaceflight, no extra-high-altitude planes. You are allowed to make use of phones and cameras, and you can travel (say, by car or commercial flight, but you cannot influence the flight route). It must not require lots of money or higher math.


by Unknown (noreply@blogger.com) at March 06, 2019 02:24 PM

February 24, 2019

Michael Schmitt - Collider Blog

Miracles when you use the right metric

I recommend reading, carefully and thoughtfully, the preprint “The Metric Space of Collider Events” by Patrick Komiske, Eric Metodiev, and Jesse Thaler (arXiv:1902.02346). There is a lot here, perhaps somewhat cryptically presented, but much of it is exciting.

First, you have to understand what the Earth Mover’s Distance (EMD) is. It is easier to understand than the Wasserstein metric, of which it is a special case. The EMD is a measure of how different two pdfs (probability density functions) are, and it is rather different from the usual chi-squared or mean integrated squared error because it emphasizes separation rather than overlap. The idea is to look at how much work you have to do to reconstruct one pdf from another, where “reconstruct” means transporting a portion of the first pdf a given distance. You keep track of the “work” you do, which means the amount of area (i.e., ”energy” or “mass”) you transport and how far you transport it. The Wikipedia article aptly makes an analogy with suppliers delivering piles of stones to customers. The EMD is the smallest effort required.

The EMD is a rich concept because it allows you to carefully define what “distance” means. In the context of delivering stones, transporting them across a plain and up a mountain are not the same. In this sense, rotating a collision event about the beam axis should “cost” nothing – i.e., be irrelevant – while increasing the energy or transverse momentum should cost something, because it is phenomenologically interesting.

The authors want to define a metric for LHC collision events, with the notion that events that come from different processes should be well separated. This requires a definition of “distance” – hence the word “metric” in the title. You have to imagine taking one collision event, consisting of individual particles or perhaps a set of hadronic jets, and transporting pieces of it in order to match some other event. If you have to transport the pieces a great distance, then the events are very different. The authors’ ansatz is a straightforward one, depending essentially on the angular distance θij/R plus a term that takes into account the difference in total energies of the two events (a toy implementation follows the illustration below). Note: the subscripts i and j refer to two elements from the two different events. The paper gives a very nice illustration for two top quark events (red and blue):

Transformation of one top quark event into another
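In code, the minimization underneath this ansatz is an ordinary transportation problem, so it can be prototyped directly. Below is a minimal Python sketch under my own conventions (events as (energy, rapidity, azimuth) arrays and an assumed R value); a generic LP solver like this is far too slow for real analyses but shows the structure, and I believe the authors’ EnergyFlow package ships a proper, fast implementation:

import numpy as np
from scipy.optimize import linprog

def event_emd(ev1, ev2, R=0.4):
    # ev1, ev2: arrays of shape (n, 3) with columns (energy, y, phi).
    E1, E2 = ev1[:, 0], ev2[:, 0]
    n, m = len(E1), len(E2)

    # Ground distances theta_ij / R between all particle pairs.
    dy = ev1[:, 1, None] - ev2[None, :, 1]
    dphi = np.angle(np.exp(1j * (ev1[:, 2, None] - ev2[None, :, 2])))
    cost = (np.hypot(dy, dphi) / R).ravel()

    # Flow constraints: sum_j f_ij <= E1_i and sum_i f_ij <= E2_j.
    A_ub = np.zeros((n + m, n * m))
    for i in range(n):
        A_ub[i, i * m:(i + 1) * m] = 1.0
    for j in range(m):
        A_ub[n + j, j::m] = 1.0
    b_ub = np.concatenate([E1, E2])

    # The total flow must move the smaller event's full energy.
    A_eq = np.ones((1, n * m))
    b_eq = [min(E1.sum(), E2.sum())]

    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
    # Optimal transport work plus the energy-difference penalty term.
    return res.fun + abs(E1.sum() - E2.sum())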

The first thing that came to mind when I had grasped, with some effort, the suggested metric was that this could be a great classification tool. And indeed it is. The authors show that a k-nearest-neighbors algorithm (KNN), straight out of the box, equipped with their notion of distance, works nearly as well as very fancy machine learning techniques! It is crucial to note that there is no training here, no search for a global minimum of some very complicated objective function. You only have to evaluate the EMD, and in their case, this is not so hard. (Sometimes it is.) Here are the ROC curves:

ROC curves. The red curve is the KNN with this metric, and the other curves close by are fancy ML algorithms. The light blue curve is a simple cut on N-subjettiness observables, itself an important theoretical tool


I imagine that some optimization could be done to close the small gap with respect to the best-performing algorithms, for example by improving on the KNN.
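If you want to try that out-of-the-box behavior yourself, scikit-learn will take a user-supplied metric. A sketch, assuming the event_emd function above; the padding length N_MAX and the data arrays are placeholders, not anything from the paper:

from sklearn.neighbors import KNeighborsClassifier

# scikit-learn hands flat 1-D vectors to a callable metric, so pad every
# event to N_MAX particles (zero-energy rows carry no flow in the EMD)
# and reshape inside the wrapper.
N_MAX = 64

def emd_metric(a, b):
    return event_emd(a.reshape(N_MAX, 3), b.reshape(N_MAX, 3))

knn = KNeighborsClassifier(n_neighbors=5, metric=emd_metric, algorithm="brute")
# X_train: (n_events, N_MAX * 3) padded, flattened events; y_train: labels.
# knn.fit(X_train, y_train); knn.score(X_test, y_test)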

The next intriguing idea presented in this paper is the fractal dimension, or correlation dimension, dim(Q), associated with their metric – roughly, how the number of neighboring events within an EMD distance Q scales as Q grows. The interesting bit is how dim(Q) depends on the mass/energy scale Q, which can plausibly vary from a few GeV (the regime of hadronization) up to the mass of the top quark (173 GeV). The authors compare three different sets of jets – from ordinary QCD production, from W bosons decaying hadronically, and from top quarks – because one expects the detailed structure to be distinctly different, at least if viewed with the right metric. And indeed, the variation of dim(Q) with Q is quite different:

dim(Q) as a function of Q for three sources of jets


(Note these jets all have essentially the same energy.) There are at least three take-away points. First, dim(Q) is much higher for top jets than for W and QCD jets, and W is higher than QCD. This hierarchy reflects the relative complexity of the events and hints at new discriminating possibilities. Second, the curves are more similar at low scales, where the structure involves hadronization, and more different at high scales, which should be dominated by the decay structure. This is borne out by the decay-products-only curves. Finally, there is little difference between the curves based on particles and those based on partons, meaning that the result is somehow fundamental and not an artifact of hadronization itself. I find this very exciting.

The authors develop the correlation dimension dim(Q) further. It is a fact that a pair of jets from W decays boosted to the same degree can be described by a single variable: the ratio of their energies. This can be mapped onto an annulus in an abstract space (see the paper for slightly more detail). The interesting step is to look at how the complexity of individual events, reflected in dim(Q), varies around the annulus:

Embedding of W jets and how dim(Q) varies around the annulus and inside it


The blue events to the lower left are simple, with just a single round dot (jet) in the center, while the red events in the upper right have two dots of nearly equal size. The events in the center are very messy, with many dots of several sizes. So morphology maps onto location in this kinematic plane.

A second illustration is provided, this time based on QCD jets of essentially the same energy. The jet masses will span a range determined by gluon radiation and the hadronization process. Jets at lower mass should be clean and simple while jets at high mass should show signs of structure. This is indeed the case, as nicely illustrated in this picture:

How complex jet substructure correlates with jet mass


This picture is so clear it is almost like a textbook illustration.

That’s it. (There is one additional topic involving infrared divergences, but since I do not understand it I won’t try to describe it here.) The paper is short, with some startling results. I look forward to the authors developing these studies further, and to other researchers thinking about these ideas and applying them to real examples.

by Michael Schmitt at February 24, 2019 05:16 PM

February 22, 2019

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

The joys of mid term

Thank God for mid-term, or ‘reading week’ as it is known in some colleges. Time was I would have spent the week on the ski slopes, but these days I see the mid-term break as a precious opportunity to catch up – a nice relaxed week in which I can concentrate on correcting assessments, preparing teaching notes and setting end-of-semester exams. There is a lot of satisfaction in getting on top of things, if only temporarily!

Then there’s the research. To top the week off nicely, I heard this morning that my proposal to give a talk at the forthcoming Arthur Eddington conference in Paris has been accepted; this is great news, as the conference will mark the centenary of Eddington’s measurement of the bending of starlight by the sun, an experiment that provided key evidence in support of Einstein’s general theory of relativity. To this day, some historians question the accuracy of Eddington’s result, while most physicists believe his findings were justified, so it should make for an interesting conference.


 

by cormac at February 22, 2019 04:45 PM