Particle Physics Planet


April 26, 2019

Peter Coles - In the Dark

Dos and Don’ts of reduced chi-squared

Yesterday I saw a tweet about an arXiv paper and thought I’d share it here. It’s not new but I’ve never seen it before and I think it’s well worth reading. The abstract of the paper is:

Reduced chi-squared is a very popular method for model assessment, model comparison, convergence diagnostic, and error estimation in astronomy. In this manuscript, we discuss the pitfalls involved in using reduced chi-squared. There are two independent problems: (a) The number of degrees of freedom can only be estimated for linear models. Concerning nonlinear models, the number of degrees of freedom is unknown, i.e., it is not possible to compute the value of reduced chi-squared. (b) Due to random noise in the data, also the value of reduced chi-squared itself is subject to noise, i.e., the value is uncertain. This uncertainty impairs the usefulness of reduced chi-squared for differentiating between models or assessing convergence of a minimisation procedure. The impact of noise on the value of reduced chi-squared is surprisingly large, in particular for small data sets, which are very common in astrophysical problems. We conclude that reduced chi-squared can only be used with due caution for linear models, whereas it must not be used for nonlinear models at all. Finally, we recommend more sophisticated and reliable methods, which are also applicable to nonlinear models.

I added the link at the beginning; you can download a PDF of the paper here.

I’ve never really understood why this statistic (together with related frequentist-inspired ideas) is treated with such reverence by astronomers, so this paper offers a valuable critique to those tempted to rely on it blindly.
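
To see the noise problem concretely, here is a minimal Python sketch (my illustration, not taken from the paper): it repeatedly fits a straight line — the linear case, where the number of degrees of freedom actually is known — to ten noisy points and records the scatter of reduced chi-squared.

    import numpy as np

    rng = np.random.default_rng(42)
    n_data, n_params = 10, 2          # a small data set, common in astronomy
    dof = n_data - n_params           # known only because the model is linear

    x = np.linspace(0.0, 1.0, n_data)
    true_y = 1.0 + 2.0 * x            # the true linear model

    chi2_red = []
    for _ in range(10_000):
        y = true_y + rng.normal(0.0, 1.0, n_data)   # unit Gaussian errors
        coeffs = np.polyfit(x, y, 1)                # least-squares straight line
        resid = y - np.polyval(coeffs, x)
        chi2_red.append(np.sum(resid**2) / dof)     # errors have sigma = 1

    chi2_red = np.asarray(chi2_red)
    print(f"mean = {chi2_red.mean():.2f}, std = {chi2_red.std():.2f}")
    # mean ~ 1.0 but std ~ sqrt(2/dof) ~ 0.5: even the correct model
    # routinely returns values anywhere between ~0.5 and ~1.5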

by telescoper at April 26, 2019 11:29 AM

Emily Lakdawalla - The Planetary Society Blog

What can we learn from a failed return to the Moon?
Thirty years ago, President George H.W. Bush announced an ambitious program to return humans to the Moon. It failed. Today the Trump Administration wants the same thing. Can a failed lunar return effort help this one succeed?

April 26, 2019 05:37 AM

April 25, 2019

Christian P. Robert - xi'an's og

likelihood free nested sampling

A recent paper by Mikelson and Khammash found on bioRxiv considers the (paradoxical?) mixture of nested sampling and intractable likelihood. They however cover only the case when a particle filter or another unbiased estimator of the likelihood function can be found. Unless I am missing something in the paper, this seems a very costly and convoluted approach when pseudo-marginal MCMC is available, or when the rather substantial literature on computational approaches to state-space models applies. Furthermore, simulating under the lower likelihood constraint gets even more intricate than for standard nested sampling, as the parameter space is augmented with the likelihood estimator as an extra variable. And this makes the constrained simulation all the harder, to the point that the paper needs to resort to a Dirichlet process Gaussian mixture approximation of the constrained density. It thus sounds quite an intricate approach to the problem. (For one of the realistic examples, the authors mention a 12-hour computation on a 48-core cluster, producing an approximation of the evidence that is not unarguably stabilised, contrary to the above.) Once again, not being completely up-to-date in sequential Monte Carlo, I may miss a difficulty in analysing such models with other methods, but the proposal seems to be highly demanding with respect to the target.
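
For context, here is a bare-bones sketch of standard nested sampling in Python (a toy of mine, with an exact Gaussian likelihood where the paper would plug in a particle-filter estimate); the constrained-simulation step that the paper finds so intricate is the rejection loop in the middle:

    import numpy as np

    rng = np.random.default_rng(0)

    def log_likelihood(theta):
        # toy Gaussian likelihood; in the paper's setting this would be a
        # particle-filter *estimate*, itself a random quantity
        return -0.5 * np.sum(theta**2)

    def sample_prior(n=1):
        return rng.uniform(-5.0, 5.0, size=(n, 2))  # flat prior on [-5,5]^2

    n_live, n_iter = 100, 600
    live = sample_prior(n_live)
    logL = np.array([log_likelihood(t) for t in live])
    logZ, logX = -np.inf, 0.0        # evidence accumulator, log prior volume

    for i in range(n_iter):
        worst = np.argmin(logL)
        logX_new = -(i + 1) / n_live                # volume shrinks geometrically
        logw = np.log(np.exp(logX) - np.exp(logX_new))   # shell weight
        logZ = np.logaddexp(logZ, logw + logL[worst])
        # replace the worst point by a prior draw under the hard constraint
        # L(theta) > L_min -- the step that becomes delicate when L is only
        # estimated, since the constraint then applies to a noisy quantity
        while True:
            cand = sample_prior(1)[0]
            if log_likelihood(cand) > logL[worst]:
                break
        live[worst], logL[worst] = cand, log_likelihood(cand)
        logX = logX_new

    # add the leftover contribution of the remaining live points
    logZ = np.logaddexp(logZ, logX + np.log(np.mean(np.exp(logL))))
    print("log-evidence:", logZ)     # ~ log(2*pi/100) ~ -2.77 for this toy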

by xi'an at April 25, 2019 10:19 PM

Emily Lakdawalla - The Planetary Society Blog

Curiosity Update, Sols 2313-2387: Two New Drill Holes Despite Memory Problems
The Curiosity team is touring Glen Torridon, the Valley of Clay, south of Vera Rubin Ridge, happily photographing everything and zapping rocks. It’s clearly a delight for the team to be in a place they’ve been hoping to reach for 7 years.

April 25, 2019 07:11 PM

Peter Coles - In the Dark

Silhouettes of Gustav Mahler – Otto Böhler

Schattenbilder (Silhouettes) of Gustav Mahler conducting, by Otto Böhler (1847–1913), published posthumously in 1914.

by telescoper at April 25, 2019 06:41 PM

Clifford V. Johnson - Asymptotia

Black Hole Session

Well I did not get the special NYT issue as a keepsake, but this is maybe better: I got to attend the first presentation of the “black hole picture” scientific results at a conference, the APS April meeting (Sunday April 14th 2019). I learned so much! These are snaps of … Click to continue reading this post

The post Black Hole Session appeared first on Asymptotia.

by Clifford at April 25, 2019 06:35 PM

Peter Coles - In the Dark

More Order-of-Magnitude Physics

A very busy day today so I thought I’d just do a quick post to give you a chance to test your brains with some more order-of-magnitude physics problems. I like using these in classes because they get people thinking about the physics behind problems without getting too bogged down in or turned off by complicated mathematics. If there’s any information missing that you need to solve the problem, make an order-of-magnitude estimate!

Give order-of-magnitude answers to the following questions:

  1. What is the maximum distance at which it could be possible for a car’s headlights to be resolved by the human eye?
  2. How much would a pendulum clock gain or lose (say which) in a week if moved from a warm room into a cold basement?
  3. What area would be needed for a terrestrial solar power station capable of producing 1GW of power?
  4. What mass of cold water could be brought to the boil using the energy dissipated when a motor car is brought to rest from 100 km/h?
  5. How many visible photons are emitted by a 100W light bulb during its lifetime?

There’s no prize involved, but feel free to post answers through the comments box. It would be helpful if you explained a bit about how you arrived at your answer!
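
For anyone wanting a template (and a spoiler for number 4!), here is the intended style of back-of-envelope reasoning, written out in Python with order-of-magnitude input values of my own choosing:

    m_car = 1e3              # kg, a typical car
    v = 100 / 3.6            # 100 km/h in m/s, about 28 m/s
    KE = 0.5 * m_car * v**2  # kinetic energy dissipated on braking, ~4e5 J

    c_water = 4.2e3          # J/(kg K), specific heat of water
    dT = 80                  # K, from room temperature to boiling
    m_water = KE / (c_water * dT)
    print(f"KE ~ {KE:.0e} J  ->  boils ~{m_water:.1f} kg of water")   # ~1 kg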

by telescoper at April 25, 2019 12:57 PM

Emily Lakdawalla - The Planetary Society Blog

How Your Donations are Helping Planetary Society Asteroid Hunters
Each year, we ask our Shoemaker NEO Grant winners to help us tell the world about their work. Here's what they said.

April 25, 2019 11:00 AM

Lubos Motl - string vacua and pheno

Four interesting papers
On hep-ph, I choose a paper by Benedetti, Li, Maxin, and Nanopoulos about a natural, supersymmetric, D-braneworld-inspired model. The word "inspired" means that they believe that similar models (effective QFTs) arise from a precise string theory analysis of some compactifications but, as phenomenologists, they don't want to do the string stuff precisely. ;-) It belongs to these authors' favorite flipped \(SU(5)\) or \({\mathcal F}\)-\(SU(5)\) class of models and involves new fields, flippons, near \(1\,{\rm TeV}\). The LSP is Higgsino-like and the viable parameter space is rather compact and nice. The model seems natural – one may get fine-tuning below \(\Delta_{EW}\lt 100\).

It's an example showing that with some cleverness, natural models are still viable – and Nature is almost certainly more clever than the humans.



On hep-th, I start with Shirai-Yamazaki who point out some interesting tension between two types of ideas both of which arise from Cumrun Vafa's swampland program. One of them is a scalar-force generalization of our weak gravity conjecture; the other is a more recent Vafa-et-al. de Sitter swampland conjecture.

The tension has a simple origin: the (scalar-force-type) weak gravity conjectures tend to prohibit light scalars like quintessence because those would yield "weaker-than-gravity forces mediated by scalars". On the other hand, as you know, the Swampland de Sitter conjecture wants to outlaw inflation and replace it with quintessence, so it needs quintessence. Whether there is a real contradiction depends on the precise formulation of both conjectures, especially the scalar-force weak gravity conjecture, they conclude.

It's very interesting. We're apparently still not certain about the precise form of these inequality-like principles. There are contexts in which I am even uncertain about the direction of the inequalities. So the vagueness may be as bad as saying that "something new happens when some ratio of parameters approaches some value from either side".



Finally, some cool research on scattering theory involving new or understudied asymptotic states. There are two new hep-th papers, by Pate, Raclariu, and Strominger; and by Nandan, Schreiber, Volovich, and Zlotnikov. They have some overlap – the authors know each other (Volovich was a student of Strominger's; yes, Strominger's two current co-authors are nice young ladies, too) so they probably knew about their shared interests in advance.

Normally, we study scattering with asymptotic states which are particles with well-defined momentum vectors \(p^\mu\). Those are eigenstates under spacetime translations. However, these authors study "celestial" scattering amplitudes of asymptotic states that are eigenstates of the boosts, basically of the Lorentz group \(SO(3,1)\). These asymptotic states should be eigenstates, with weights \((h,\bar h)\), of the complexified \(SL(2)\times SL(2)\) version of the Lorentz algebra. OK, you want eigenstates of \(j_L^2,j_R^2,j_{L,3},j_{R,3}\) in the complexified \(SL(2,\mathbb{C})\times SL(2,\mathbb{C})\).

One or both of the papers study a new kind of soft theorem – designed for these "celestial" states instead of the momentum eigenstates. Pate et al. say these new theorems cannot be derived from the low-energy effective action. It seems rather incredible or extraordinary, because they seem to work just with another "differently singular" basis of the states – all their laws should be just some rearrangement of the usual ones. OK, but we are told that they are not. These new claims about soft theorems etc. spiritually agree with the recent claims by Strominger and collaborators that the information about the infalling matter may be stored in rather "classical" properties (or hair) of the black hole, and similar stuff.

Nandan et al. analyze "celestial" scattering and try to reproduce the counterparts of the constructions we normally perform with momentum-based scattering: a new, "celestial" partial wave decomposition, crossing symmetry, and the optical theorem. They also study some soft limits.

The momentum scattering states are the gold standard and I believe they will remain dominant in the literature in coming years. But these "celestial" states and symmetries – and all the laws in the new bases – are arguably "comparably" fundamental. To focus on the momentum eigenstates and ignore the "celestial", Lorentz-group eigenstates is to be biased in a certain way, and the authors of both of these papers are trying to remove the bias from the literature and fill the holes.

The claim that these new states – with some singular support near the light-cone or high-energy particles – allow us to derive something that was previously unknown is extremely provocative or exciting. It should be possible according to some intuition, like mine, but the intuition isn't backed by any real evidence. Similar patterns obtained from the "celestial" viewpoint could be relevant for deep questions in physics as well, including the clarification of the twistor/amplituhedron forms of some amplitudes, the information loss problem, and naturalness, I vaguely think.

by Luboš Motl (noreply@blogger.com) at April 25, 2019 10:15 AM

April 24, 2019

Christian P. Robert - xi'an's og

posterior distribution missing the MLE

An X validated question as to why the MLE is not necessarily (well) covered by a posterior distribution. Even for a flat prior… Which in retrospect highlights the fact that the MLE (and the MAP) are invasive species in a Bayesian ecosystem, since they do not account for the dominating measure and hence do not fare well under reparameterisation. (As a very much to the side comment, I also managed to write an almost identical and simultaneous answer to the first answer to the question.)
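
A toy numerical illustration of the point (mine, not from the X validated thread): with a flat prior on a Binomial probability p the MAP coincides with the MLE, but rewriting the very same posterior in the log-odds parameterisation shifts the mode, since the density picks up a Jacobian:

    import numpy as np
    from scipy.optimize import minimize_scalar

    x, n = 9, 10                      # 9 successes out of 10 trials

    def neg_log_post_p(p):            # flat prior on p: posterior = likelihood
        return -(x * np.log(p) + (n - x) * np.log(1 - p))

    def neg_log_post_phi(phi):        # same posterior in phi = logit(p),
        p = 1 / (1 + np.exp(-phi))    # which picks up the Jacobian p(1-p)
        return neg_log_post_p(p) - np.log(p * (1 - p))

    map_p = minimize_scalar(neg_log_post_p, bounds=(1e-9, 1 - 1e-9),
                            method="bounded").x
    map_phi = minimize_scalar(neg_log_post_phi, bounds=(-10, 10),
                              method="bounded").x
    print(map_p)                          # 0.900 = MLE
    print(1 / (1 + np.exp(-map_phi)))     # 0.833: the mode has moved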

by xi'an at April 24, 2019 10:19 PM

Peter Coles - In the Dark

The First Bookie

I read an interesting piece in Sunday’s Observer which is mainly about the challenges facing the modern sports betting industry but which also included some interesting historical snippets about the history of gambling.

One thing that I didn’t know before reading this article was that it is generally accepted that the first ever bookmaker was a chap called Harry Ogden who started business in the late 18th century on Newmarket Heath. Organized horse-racing had been going on for over a century by then, and gambling had co-existed with it, not always legally. Before Harry Ogden, however, the types of wager were very different from what we have nowadays. For one thing bets would generally be offered on one particular horse (the Favourite), against the field. There being only two outcomes these were generally even-money bets, and the wagers were made between individuals rather than being administered by a `turf accountant’.

Then up stepped Harry Ogden, who introduced the innovation of laying odds on every horse in a race. He set the odds based on his knowledge of the form of the different horses (i.e. on their results in previous races), using this data to estimate probabilities of success for each one. This kind of `book’, listing odds for all the runners in a race, rapidly became very popular and is still with us today. The way of specifying odds as fractions (e.g. 6/1 against, 7/1 on) derives from this period.

Ogden wasn’t interested in merely facilitating other people’s wagers: he wanted to make a profit out of this process and the system he put in place to achieve this survives to this day. In particular he introduced a version of the overround, which works as follows. I’ll use a simple example from football rather than horse-racing because I was thinking about it the other day while I was looking at the bookies odds on relegation from the Premiership.

Suppose there is a football match, which can result either in a HOME win, an AWAY win or a DRAW. Suppose the bookmaker’s expert analysts – modern bookmakers employ huge teams of these – judge the odds of these three outcomes to be: 1-1 (evens) on a HOME win, 2-1 against the DRAW and 5-1 against the AWAY win. The corresponding probabilities are: 1/2 for the HOME win, 1/3 for the DRAW and 1/6 for the AWAY win. Note that these add up to 100%, as they are meant to be probabilities and these are the only three possible outcomes. These are `true odds’.

Offering these probabilities as odds to punters would not guarantee a return for the bookie, who would instead change the odds so they add up to more than 100%. In the case above the bookie’s odds might be: 4-6 for the HOME win; 6-4 for the DRAW and 4-1 against the AWAY win. The implied probabilities here are 3/5, 2/5 and 1/5 respectively, which adds up to 120%, not 100%. The excess is the overround or `bookmaker’s margin’ – in this case 20%.
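
In code, the bookmaker’s arithmetic amounts to no more than this (a quick sketch using the odds quoted above):

    def implied_prob(a, b):
        # fractional odds "a-b": a profit of a for a stake of b
        return b / (a + b)

    true_odds = [(1, 1), (2, 1), (5, 1)]     # evens, 2-1, 5-1 against
    bookie_odds = [(4, 6), (6, 4), (4, 1)]   # the shortened odds on offer

    for name, odds in [("true", true_odds), ("bookie", bookie_odds)]:
        probs = [implied_prob(a, b) for a, b in odds]
        print(name, [round(p, 3) for p in probs], "sum:", round(sum(probs), 3))
    # true   [0.5, 0.333, 0.167]  sum: 1.0
    # bookie [0.6, 0.4, 0.2]      sum: 1.2  -> a 20% overround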

This is quite the opposite to the Dutch Book case I discussed here.

Harry Ogden applied his method to horse races with many more possible outcomes, but the principle is the same: work out your best estimate of the true odds then apply your margin to calculate the odds offered to the punter.

One thing this means is that you have to be careful if you want to estimate the probability of an event from a bookie’s odds. If they offer you even money, that does not mean that you have a 50-50 chance!

by telescoper at April 24, 2019 11:43 AM

Andrew Jaffe - Leaves on the Line

Spring Break?

Somehow I’ve managed to forget my usual end-of-term post-mortem of the year’s lecturing. I think perhaps I’m only now recovering from 11 weeks of lectures, lab supervision and tutoring, alongside a very busy time analysing Planck satellite data.

But a few weeks ago term ended, and I finished teaching my undergraduate cosmology course at Imperial, 27 lectures covering 14 billion years of physics. It was my fourth time teaching the class (I’ve talked about my experiences in previous years here, here, and here), but this will be the last time during this run. Our department doesn’t let us teach a course more than three or four years in a row, and I think that’s a wise policy. I think I’ve arrived at some very good ways of explaining concepts such as the curvature of space-time itself, and difficulties with our models like the 122-or-so-order-of-magnitude cosmological constant problem, but I also noticed that I wasn’t quite as excited as in previous years: I worked the course up through the experimentation of my first time through in 2009, put it all on a firmer foundation — and wrote up the lecture notes — in 2010, and refined it over the last two years. This year’s teaching evaluations should come through soon, so I’ll have some feedback, and there are still about six weeks until the students’ understanding — and my explanations — are tested in the exam.

Next year, I’ve got the frankly daunting responsibility of teaching second-year quantum mechanics: 30 lectures, lots of problem sheets, in-class problems to work through, and of course the mindbending weirdness of the subject itself. I’d love to teach them Dirac’s very useful notation which unifies the physical concept of quantum states with the mathematical ideas of vectors, matrices and operators — and which is used by all actual practitioners from advanced undergraduates through working physicists. But I’m told that students find this an extra challenge rather than a simplification. Comments from teachers and students of quantum mechanics are welcome.

by Andrew at April 24, 2019 01:19 AM

April 23, 2019

Christian P. Robert - xi'an's og

a resolution of the Jeffreys-Lindley paradox

“…it is possible to have the best of both worlds. If one allows the significance level to decrease as the sample size gets larger (…) there will be a finite number of errors made with probability one. By allowing the critical values to diverge slowly, one may catch almost all the errors.” (p.1527)

When commenting on another post, Michael Naaman pointed out to me his 2016 Electronic Journal of Statistics paper where he resolves the Jeffreys-Lindley paradox. The argument there is to consider a Type I error going to zero with the sample size n going to infinity, but slowly enough for both Type I and Type II errors to go to zero, which guarantees a finite number of errors as the sample size n grows to infinity. This translates for the Jeffreys-Lindley paradox into a pivotal quantity within the posterior probability of the null that converges to zero with n going to infinity, hence makes it (most) agreeable with the Type I error going to zero. Except that there is little reason to assume this pivotal quantity goes to infinity with n, despite its distribution remaining constant in n. Being constant is less unrealistic, by comparison! That there exists a hypothetical sequence of observations such that the p-value and the posterior probability agree, even exactly, does not “solve” the paradox in my opinion.
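
The paradox itself is easy to reproduce numerically. In the textbook normal-mean setting (my toy numbers: the sample mean is N(θ,1/n), with H₀: θ=0 against H₁: θ ~ N(0,1) and equal prior odds), holding the test statistic fixed at z=1.96 — a p-value of about 0.05 for every n — sends the posterior probability of the null towards one:

    import numpy as np

    z = 1.96     # fixed test statistic: two-sided p-value ~ 0.05 at every n
    for n in [10, 100, 10_000, 1_000_000]:
        # Bayes factor B01 for xbar ~ N(theta, 1/n), H1: theta ~ N(0, 1)
        b01 = np.sqrt(1 + n) * np.exp(-0.5 * z**2 * n / (1 + n))
        post_h0 = b01 / (1 + b01)        # posterior of H0, equal prior odds
        print(f"n = {n:>9}   P(H0 | data) = {post_h0:.3f}")
    # the p-value stays at 0.05 while P(H0 | data) climbs from ~0.37 to ~0.99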

by xi'an at April 23, 2019 10:19 PM

Emily Lakdawalla - The Planetary Society Blog

InSight Detects Some Very Small Marsquakes
InSight has finally detected its first Marsquakes, but so far, none have been large enough to produce good science. Still, it’s great news that the seismometer is producing sensible data.

April 23, 2019 05:19 PM

Georg von Hippel - Life on the lattice

Book Review: "Lattice QCD — Practical Essentials"
There is a new book about Lattice QCD, Lattice Quantum Chromodynamics: Practical Essentials by Francesco Knechtli, Michael Günther and Mike Peardon. At 140 pages, this is a pretty slim volume, so it is obvious that it does not aim to displace time-honoured introductory textbooks like Montvay and Münster, or the newer books by Gattringer and Lang or DeGrand and DeTar. Instead, as suggested by the subtitle "Practical Essentials", and as said explicitly by the authors in their preface, this book aims to prepare beginning graduate students for their practical work in generating gauge configurations and measuring and analysing correlators.

In line with this aim, the authors spend relatively little time on the physical or field-theoretic background; while some more advanced topics such as the Nielsen-Ninomiya theorem and the Symanzik effective theory are touched upon, the treatment of foundational topics is generally quite brief, and some topics, such as lattice perturbation theory or non-perturbative renormalization, are altogether omitted. The focus of the book is on Monte Carlo simulations, for which both the basic ideas and practically relevant algorithms — heatbath and overrelaxation for pure gauge fields, and hybrid Monte Carlo for dynamical fermions — are described in some detail, including the RHMC algorithm and advanced techniques such as determinant factorizations, higher-order symplectic integrators, and multiple-timescale integration. The techniques from linear algebra required to deal with fermions are also covered in some detail, from the basic ideas of Krylov space methods through concrete descriptions of the GMRES and CG algorithms, along with such important preconditioners as even-odd and domain decomposition, to the ideas of algebraic multigrid methods. Stochastic estimation of all-to-all propagators with dilution, the one-end trick and low-mode averaging are explained, as are techniques for building interpolating operators with specific quantum numbers, gauge link and quark field smearing, and the use of the variational method to extract hadronic mass spectra. Scale setting, the Wilson flow, and Lüscher's method for extracting scattering phase shifts are also discussed briefly, as are the basic statistical techniques for data analysis. Each chapter contains a list of references to the literature covering both original research articles and reviews and textbooks for further study.
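
To give non-lattice readers a flavour of the linear-algebra material, here is a minimal sketch (mine, not the book's) of the CG algorithm for a generic symmetric positive-definite system; in lattice QCD the matrix would be the normal-equation form of the Dirac operator, usually with even-odd preconditioning:

    import numpy as np

    def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
        """Solve A x = b for a symmetric positive-definite matrix A."""
        x = np.zeros_like(b)
        r = b - A @ x                  # residual
        p = r.copy()                   # search direction
        rs_old = r @ r
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rs_old / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs_old) * p    # A-conjugate update
            rs_old = rs_new
        return x

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    print(conjugate_gradient(A, b))    # [0.0909..., 0.6363...]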

Overall, I feel that the authors succeed very well at their stated aim of giving a quick introduction to the methods most relevant to current research in lattice QCD in order to let graduate students hit the ground running and get to perform research as quickly as possible. In fact, I am slightly worried that they may turn out to be too successful, since a graduate student having studied only this book could well start performing research, while having only a very limited understanding of the underlying field-theoretical ideas and problems (a problem that already exists in our field in any case). While this in no way detracts from the authors' achievement, and while I feel I can recommend this book to beginners, I nevertheless have to add that it should be complemented by a more field-theoretically oriented traditional textbook for completeness.

___
Note that I have deliberately not linked to the Amazon page for this book. Please support your local bookstore — nowadays, you can usually order online on their websites, and many bookstores are more than happy to ship books by post.

by Georg v. Hippel (noreply@blogger.com) at April 23, 2019 01:19 PM

Georg von Hippel - Life on the lattice

Looking for guest blogger(s) to cover LATTICE 2018
Since I will not be attending LATTICE 2018 for some excellent personal reasons, I am looking for a guest blogger or even better several guest bloggers from the lattice community who would be interested in covering the conference. Especially for advanced PhD students or junior postdocs, this might be a great opportunity to get your name some visibility. If you are interested, drop me a line either in the comment section or by email (my university address is easy to find).

by Georg v. Hippel (noreply@blogger.com) at April 23, 2019 01:18 PM

Georg von Hippel - Life on the lattice

Looking for guest bloggers to cover LATTICE 2019
My excellent reason for not attending LATTICE 2018 has become a lot bigger, much better at many things, and (if possible) even more beautiful — which means I won't be able to attend LATTICE 2019 either (I fully expect to attend LATTICE 2020, though). So once again I would greatly welcome guest bloggers willing to cover LATTICE 2019; if you are at all interested, send me an email or write a comment to this post.

by Georg v. Hippel (noreply@blogger.com) at April 23, 2019 01:18 PM

Christian P. Robert - xi'an's og

tenure track position in Clermont, Auvergne

My friend Arnaud Guillin pointed out this opening of a tenure-track professor position at his University of Clermont Auvergne, in central France, with specialty in statistics and machine learning, especially deep learning. The deadline for applications is 12 May 2019. (Tenure-track positions are quite rare in French universities and this offer includes a limited teaching load over three years, potential tenure and titularisation at the end of a five-year period, and is restricted to candidates who did their PhD or their postdoc abroad.)

by xi'an at April 23, 2019 12:07 PM

Peter Coles - In the Dark

Lights all askew in the Heavens – the 1919 Eclipse Expeditions

I completely forgot to upload the slides from my talk at the Open Meeting of the Royal Astronomical Society on April 12 2019 so here they are now!

Just a reminder that the centenary of the famous 1919 Eclipse Expeditions is on 29 May 2019.

by telescoper at April 23, 2019 08:44 AM

April 16, 2019

Matt Strassler - Of Particular Significance

The Black Hole `Photo’: Seeing More Clearly

Ok, after yesterday’s post, in which I told you what I still didn’t understand about the Event Horizon Telescope (EHT) black hole image (see also the pre-photo blog post in which I explained pedagogically what the image was likely to show and why), today I can tell you that quite a few of the gaps in my understanding are filling in (thanks mainly to conversations with Harvard postdoc Alex Lupsasca and science journalist Davide Castelvecchi, and to direct answers from professor Heino Falcke, who leads the Event Horizon Telescope Science Council and co-wrote a founding paper in this subject).  And I can give you an update to yesterday’s very tentative figure.

First: a very important point, to which I will return in a future post, is that as I suspected, it’s not at all clear what the EHT image really shows.   More precisely, assuming Einstein’s theory of gravity is correct in this context:

  • The image itself clearly shows a black hole’s quasi-silhouette (called a `shadow’ in expert jargon) and its bright photon-sphere where photons [particles of light — of all electromagnetic waves, including radio waves] can be gathered and focused.
  • However, all the light (including the observed radio waves) coming from the photon-sphere was emitted from material well outside the photon-sphere; and the image itself does not tell you where that material is located.  (To quote Falcke: this is `a blessing and a curse’; insensitivity to the illumination source makes it easy to interpret the black hole’s role in the image but hard to learn much about the material near the black hole.) It’s a bit analogous to seeing a brightly shining metal ball while not being able to see what it’s being lit by… except that the photon-sphere isn’t an object.  It’s just a result of the play of the light [well, radio waves] directed by the bending effects of gravity.  More on that in a future post.
  • When you see a picture of an accretion disk and jets drawn to illustrate where the radio waves may come from, keep in mind that it involves additional assumptions — educated assumptions that combine many other measurements of M87’s black hole with simulations of matter, gravity and magnetic fields interacting near a black hole.  But we should be cautious: perhaps not all the assumptions are right.  The image shows no conflicts with those assumptions, but neither does it confirm them on its own.

Just to indicate the importance of these assumptions, let me highlight a remark made at the press conference that the black hole is rotating quickly, clockwise from our perspective.  But (as the EHT papers state) if one doesn’t make some of the above-mentioned assumptions, one cannot conclude from the image alone that the black hole is actually rotating.  The interplay of these assumptions is something I’m still trying to get straight.

Second, if you buy all the assumptions, then the picture I drew in yesterday’s post is mostly correct except (a) the jets are far too narrow, and shown overly disconnected from the disk, and (b) they are slightly mis-oriented relative to the orientation of the image.  Below is an improved version of this picture, probably still not the final one.  The new features: the jets (now pointing in the right directions relative to the photo) are fatter and not entirely disconnected from the accretion disk.  This is important because the dominant source of illumination of the photon-sphere might come from the region where the disk and jets meet.

[Image: My3rdGuessBHPhoto.png]

Updated version of yesterday’s figure: main changes are the increased width and more accurate orientation of the jets.  Working backwards: the EHT image (lower right) is interpreted, using mainly Einstein’s theory of gravity, as (upper right) a thin photon-sphere of focused light surrounding a dark patch created by the gravity of the black hole, with a little bit of additional illumination from somewhere.  The dark patch is 2.5 – 5 times larger than the event horizon of the black hole, depending on how fast the black hole is rotating; but the image itself does not tell you how the photon-sphere is illuminated or whether the black hole is rotating.  Using further assumptions, based on previous measurements of various types and computer simulations of material, gravity and magnetic fields, a picture of the black hole’s vicinity (upper left) can be inferred by the experts. It consists of a fat but tenuous accretion disk of material, almost face-on, some of which is funneled into jets, one heading almost toward us, the other in the opposite direction.  The material surrounds but is somewhat separated from a rotating black hole’s event horizon.  At this radio frequency, the jets and disk are too dim in radio waves to see in the image; only at (and perhaps close to) the photon-sphere, where some of the radio waves are collected and focused, are they bright enough to be easily discerned by the Event Horizon Telescope.

by Matt Strassler at April 16, 2019 12:53 PM

Jon Butterworth - Life and Physics

The Universe Speaks in Numbers
I have reviewed Graham Farmelo’s new book for Nature. You can find the full review here. Mathematics, physics and the relationship between the two is a fascinating topic which sparks much discussion. The review only came out this morning and … Continue reading

by Jon Butterworth at April 16, 2019 12:16 PM

April 15, 2019

Matt Strassler - Of Particular Significance

The Black Hole `Photo’: What Are We Looking At?

The short answer: I’m really not sure yet.  [This post is now largely superseded by the next one, in which some of the questions raised below have now been answered.]

Neither are some of my colleagues who know more about the black hole geometry than I do. And at this point we still haven’t figured out what the Event Horizon Telescope experts do and don’t know about this question… or whether they agree amongst themselves.

[Note added: last week, a number of people pointed me to a very nice video by Veritasium illustrating some of the features of black holes, accretion disks and the warping of their appearance by the gravity of the black hole.  However, Veritasium’s video illustrates a non-rotating black hole with a thin accretion disk that is edge-on from our perspective; and this is definitely NOT what we are seeing!]

As I emphasized in my pre-photo blog post (in which I described carefully what we were likely to be shown, and the subtleties involved), this is not a simple photograph of what’s `actually there.’ We all agree that what we’re looking at is light from some glowing material around the solar-system-sized black hole at the heart of the galaxy M87.  But that light has been wildly bent on its path toward Earth, and so — just like a room seen through an old, warped window, and a dirty one at that — it’s not simple to interpret what we’re actually seeing. Where, exactly, is the material `in truth’, such that its light appears where it does in the image? Interpretation of the image is potentially ambiguous, and certainly not obvious.

The naive guess as to what to expect — which astronomers developed over many years, based on many studies of many suspected black holes — is crudely illustrated in the figure at the end of this post.  Material around a black hole has two main components:

  • An accretion disk of `gas’ (really plasma, i.e. a very hot collection of electrons, protons, and other atomic nuclei) which may be thin and concentrated, or thick and puffy, or something more complicated.  The disk extends inward to within a few times the radius of the black hole’s event horizon, the point of no-return; but how close it can be depends on how fast the black hole rotates.
  • Two oppositely-directed jets of material, created somehow by material from the disk being concentrated and accelerated by magnetic fields tied up with the black hole and its accretion disk; the jets begin not far from the event horizon, but then extend outward all the way to the outer edges of the entire galaxy.

But even if this is true, it’s not at all obvious (at least to me) what these objects look like in an image such as we saw Wednesday. As far as I am currently aware, their appearance in the image depends on

  • Whether the disk is thick and puffy, or thin and concentrated;
  • How far the disk extends inward and outward around the black hole;
  • The process by which the jets are formed and where exactly they originate;
  • How fast the black hole is spinning;
  • The orientation of the axis around which the black hole is spinning;
  • The typical frequencies of the radio waves emitted by the disk and by the jets (compared to the frequency, about 230 Gigahertz, observed by the Event Horizon Telescope);

and perhaps other things. I can’t yet figure out what we do and don’t know about these things; and it doesn’t help that some of the statements made by the EHT scientists in public and in their six papers seem contradictory (and I can’t yet say whether that’s because of typos, misstatements by them, or [most likely] misinterpretations by me.)

So here’s the best I can do right now, for myself and for you. Below is a figure that is nothing but an illustration of my best attempt so far to make sense of what we are seeing. You can expect that some fraction of this figure is wrong. Increasingly I believe this figure is correct in cartoon form, though the picture on the left is too sketchy right now and needs improvement.  What I’ll be doing this week is fixing my own misconceptions and trying to become clear on what the experts do and don’t know. Experts are more than welcome to set me straight!

In short — this story is not over, at least not for me. As I gain a clearer understanding of what we do and don’t know, I’ll write more about it.

[Image: MyFirstGuessBHPhoto.png]

My personal confused and almost certainly inaccurate understanding [the main inaccuracy is that the disk and jets are fatter than shown, and connected to one another near the black hole; that’s important because the main illumination source may be the connection region; also jets aren’t oriented quite right] of how one might interpret the black hole image; all elements subject to revision as I learn more. Left: the standard guess concerning the immediate vicinity of M87’s black hole: an accretion disk oriented nearly face-on from Earth’s perspective, jets aimed nearly at and away from us, and a rotating black hole at the center.  The orientation of the jets may not be correct relative to the photo.  Upper right: The image after the radio waves’ paths are bent by gravity.  The quasi-silhouette of the black hole is larger than the `true’ event horizon, a lot of radio waves are concentrated at the ‘photon-sphere’ just outside (brighter at the bottom due to the black-hole spinning clockwise around an axis slightly askew to our line of sight); some additional radio waves from the accretion disk and jets further complicate the image. Most of the disk and jets are too dim to see.  Lower Right: This image is then blurred out by the Event Horizon Telescope’s limitations, partly compensated for by heavy-duty image processing.

by Matt Strassler at April 15, 2019 04:02 PM

April 11, 2019

Jon Butterworth - Life and Physics

Exploring the “Higgs Portal”
The Higgs boson is unique. Does it open a door to Dark Matter? All known fundamental particles acquire mass by interacting with the Higgs boson. Actually, more correctly, they interact with a quantum field which is present even in “empty” … Continue reading

by Jon Butterworth at April 11, 2019 04:16 PM

April 10, 2019

ZapperZ - Physics and Physicists

First Images of a Black Hole
After a week of rumors and build-up, the news finally broke, and it is what we had been expecting: the announcement that we finally have our first image of a black hole.

The first direct visual evidence of a black hole and its “shadow” has been revealed today by astronomers working on the Event Horizon Telescope (EHT). The image is of the supermassive black hole that lies at the centre of the huge Messier 87 galaxy, in the Virgo galaxy cluster. Located 55 million light-years from Earth, the black hole has been determined to have a mass 6.5-billion times that of the Sun, with an uncertainty of 0.7 billion solar masses.

You can actually read the papers that were published related to this announcement, so you can find a lot more details there.

Well done, folks!!

Zz.

by ZapperZ (noreply@blogger.com) at April 10, 2019 06:31 PM

Jon Butterworth - Life and Physics

Particle & astro-particle physics annual UK meeting
The annual UK particle physics and astroparticle physics conference was hosted by Imperial this week, and has just finished. Some slightly random highlights. Crisis or no crisis, the future of particle physics is a topic, of course. An apposite quote from … Continue reading

by Jon Butterworth at April 10, 2019 03:25 PM

Matt Strassler - Of Particular Significance

A Black Day (and a Happy One) In Scientific History

Wow.

Twenty years ago, astronomers Heino Falcke, Fulvio Melia and Eric Agol (a former colleague of mine at the University of Washington) pointed out that the black hole at the center of our galaxy, the Milky Way, was probably big enough to be observed — not with a usual camera using visible light, but using radio waves and clever techniques known as “interferometry”.  Soon it was pointed out that the black hole in M87, further but larger, could also be observed.  [How? I explained this yesterday in this post.]   

And today, an image of the latter, looking quite similar to what we expected, was presented to humanity.  Just as with the discovery of the Higgs boson, and with LIGO’s first discovery of gravitational waves, nature, captured by the hard work of an international group of many scientists, gives us something definitive, uncontroversial, and spectacularly in line with expectations.

[Image: EHTDiscoveryM87.png]

An image of the dead center of the huge galaxy M87, showing a glowing ring of radio waves from a disk of rapidly rotating gas, and the dark quasi-silhouette of a solar-system-sized black hole.  Congratulations to the Event Horizon Telescope team!

I’ll have more to say about this later [have to do non-physics work today 😦 ] and in particular about the frustration of not finding any helpful big surprises during this great decade of fundamental science — but for now, let’s just enjoy this incredible image for what it is, and congratulate those who proposed this effort and those who carried it out.

by Matt Strassler at April 10, 2019 02:16 PM

Clifford V. Johnson - Asymptotia

It’s a Black Hole!

Yes, it’s a black hole all right. Following on from my reflections from last night, I can report that the press conference revelations were remarkable indeed. Above you see the image they revealed! It is the behemoth at the centre of the galaxy M87! This truly groundbreaking image is the … Click to continue reading this post

The post It’s a Black Hole! appeared first on Asymptotia.

by Clifford at April 10, 2019 01:37 PM

Clifford V. Johnson - Asymptotia

Event!

Well, I’m off to get six hours of sleep before the big announcement tomorrow! The Event Horizon Telescope teams are talking about an announcement of “groundbreaking” results tomorrow at 13:00 CEST. Given that they set out to “image” the event horizon of a black hole, this suggests (suggests) that they … Click to continue reading this post

The post Event! appeared first on Asymptotia.

by Clifford at April 10, 2019 07:06 AM

April 09, 2019

CERN Bulletin

CERN Cricket Club

The CERN Cricket Club (CCC) was founded in 1965 and is the second-oldest club in Switzerland. CERN CC was a founder member of the Swiss Cricket Association in 1980 (which became Cricket Switzerland in 2014). In the early years, CERN did not have a ground and played on various green spaces around the Geneva area (football pitches, hockey pitches, etc.). Net practice at that time was held on the Meyrin site, initially near the kindergarten and then next to the road where Building 40 is now located. It was only in the mid-70s, when the CERN SPS was constructed, that CERN CC at long last found a home ground behind Building 867 on the new Prevessin site. The pitch initially ran at 90 degrees to the current one, behind the hut, until one season CERN decided to construct a storage area on the ground and a new strip had to be laid mid-season as an emergency. The strip was a 6-foot-wide nylon cricket mat laid on grass, which had to be rolled before each game. It was used until the 2001 season, when a permanent strip was laid on a concrete and tarmac base. In 2014, the club replaced the wicket with a wider one, installed a new practice strip and re-seeded parts of the ground damaged during the Bosons & More concert in September 2013.

Currently

Matches are played almost every Sunday during the season against clubs in Switzerland in the Cricket Switzerland 40 over league and T20 competitions, as well as friendlies against teams from France (Rhone CC, Riviera CC and Strasbourg Strollers CC) and the UK (Trafford Solicitors CC).

Cricket nets take place at the ground on Thursday evenings starting at 18:00 until around 20:00, from mid-April to late September, weather permitting.  For 2019 the first net practice is on April 11th and the first match on April 14th.

Everyone is welcome to join the club and we have players of many nationalities and various degrees of expertise.

Full details of how to join the club, the fixtures, match results etc. can be found at http://cern.ch/cricket/.

April 09, 2019 05:04 PM

CERN Bulletin

Interfon

Coopérative des fonctionnaires internationaux. Discover all of our benefits and discounts with our suppliers on our website www.interfon.fr or at our office in Building 504 (open every day from 12.30 p.m. to 3.30 p.m.).

April 09, 2019 05:04 PM

CERN Bulletin

28 May 2019: Ordinary General Assembly of the Staff Association!

In the first semester of each year, the Staff Association (SA) invites its members to attend and participate in the Ordinary General Assembly (OGA).

This year the OGA will be held on Tuesday, 28 May 2019, from 14.00 to 16.00, in the Council Chamber, Meyrin (503-1-001).

During the Ordinary General Assembly, the activity and financial reports of the SA are presented and submitted to the members for approval. This is the occasion to get a global view of the activities of the SA and of its management, and an opportunity to express your opinion, particularly by taking part in the votes. Other items proposed by the Staff Council are listed on the agenda.

Who can vote?

Ordinary members (MPE) of the SA can take part in all votes. Associated members (MPA) of the SA and/or affiliated pensioners have a right to vote on those topics that are of direct interest to them.

Who can give their opinion, and how?

The Ordinary General Assembly is also an opportunity for members of the SA to express themselves through the addition of discussion points to the agenda. For these points to be put to a vote, the request must be submitted in writing to the President of the Staff Association at least 20 days before the General Assembly, and must be supported by at least 20 members of the SA. Additionally, members of the SA can ask the OGA to discuss a specific point once the agenda has been dealt with, but no decision shall be taken on the basis of these discussions.

Can we contest the decisions?

Any decision taken by the Ordinary General Assembly can be contested through a referendum, as defined in the Statutes of the Staff Association.

Do not hesitate: take part in your Ordinary General Assembly on 28 May 2019. Come and make your voice count, and seize this occasion to exchange views with your staff delegates!

The draft amendment of the Statutes will be communicated to you at the same time as the agenda.

In the meantime, please find on the link below the Staff Association Statutes currently in force: http://staff-association.web.cern.ch/sites/staff-association.web.cern.ch/files/Docs/SA_Statutes_EN.pdf.

April 09, 2019 05:04 PM

CERN Bulletin

The Ombuds does not tell you everything!

In a recent article, the Ombuds advertises his own role in informal conflict resolution. Why not? Well, we can object to his article for at least three reasons.

Firstly, the realm of conflict resolution is not limited to formal procedures (review procedure or internal appeal) on the one hand and the Ombuds’ own informal process on the other. Recently, a poster published by the HR Department clearly showed that other mechanisms do exist to provide informal channels for conflict resolution, notably: HR services (HRAs, Social services …) and the Staff Association (Delegates, Commission des cas particuliers …). And then, of course, the hierarchy also has a role to play. Clearly, the conflict resolution realm is colourful and diverse, with differences in approaches and participants that allow members of the personnel to find the channel that best suits the situation and their frame of mind. Is the conflict resolution realm black (formal) and white (Ombuds), as the Ombuds paints it? No, it is not!

Secondly, an administrative decision can only be challenged through a review procedure or internal appeal, two formal procedures. Going to see the Ombuds when your request to change home station has been denied or when you are denied a promotion will never result in the corresponding decision being changed, let alone being overturned. So, is the Ombuds right when he writes “I therefore encourage you to first try to resolve your dispute informally”? No; informal mechanisms cannot resolve all disputes.

Thirdly, formal procedures are not necessarily as the Ombuds caricatures them to be. For instance, as described in Administrative Circular 6 (AC6), the review procedure is “quicker and less cumbersome than the internal appeal” and it “enables the Administration to check the legal conditions in which the challenged decision was taken without convening the Joint Advisory Appeals Board”. Also, and regrettably often forgotten, the person who initiates a review procedure can “call upon a mediator to take part in it” (see AC6, section III “Review procedure with mediation”). The role of the mediator is “to try to settle the dispute between the parties concerned, in particular by promoting a mutual understanding of their different points of view, and to express an opinion, if so required, on the request for review.” Is this “leav[ing] it to the Organization to resolve the dispute” or does this put the “relationship between the two parties […] at great risk”? No; clearly not!

It is not the first time that we cannot avoid having doubts and being surprised, even shocked at times, after reading a piece published in the “Ombud’s Corner”. The Ombuds is nominated by and reports directly to the Director General. Will the Director General please look into this situation?

April 09, 2019 05:04 PM

CERN Bulletin

GAC-EPA

The GAC organises drop-in sessions with individual interviews. The next session will be held on:

Tuesday 7 May from 1.30 p.m. to 4.00 p.m.

Staff Association meeting room

The drop-in sessions of the Pensioners’ Group are open to beneficiaries of the Pension Fund (including surviving spouses) and to all those approaching retirement.

We warmly invite the latter to join our group by obtaining the necessary documents from the Staff Association.

Information: http://gac-epa.org/

Contact form:

http://gac-epa.org/Organization/ContactForm/ContactForm-fr.php

April 09, 2019 05:04 PM

Matt Strassler - Of Particular Significance

A Non-Expert’s Guide to a Black Hole’s Silhouette

[Note added April 16: some minor improvements have been made to this article as my understanding has increased, specifically concerning the photon-sphere, which is the main region from which the radio waves are seen in the recently released image. See later blog posts for the image and its interpretation.]

About fifteen years ago, when I was a professor at the University of Washington, the particle physics theorists and the astronomer theorists occasionally would arrange to have lunch together, to facilitate an informal exchange of information about our adjacent fields. Among the many enjoyable discussions, one I was particularly excited about — as much as an amateur as a professional — was that in which I learned of the plan to make some sort of image of a black hole. I was told that this incredible feat would likely be achieved by 2020. The time, it seems, has arrived.

The goal of this post is to provide readers with what I hope will be a helpful guide through the foggy swamp that is likely to partly obscure this major scientific result. Over the last days I’ve been reading what both scientists and science journalists are writing in advance of the press conference Wednesday morning, and I’m finding many examples of jargon masquerading as English, terms poorly defined, and phrasing that seems likely to mislead. As I’m increasingly concerned that many non-experts will be unable to understand what is presented tomorrow, and what the pictures do and do not mean, I’m using this post to answer a few questions that many readers (and many of these writers) have perhaps not thought to ask.

A caution: I am not an expert on this subject. At the moment, I’m still learning about the more subtle issues. I’ll try to be clear about when I’m on the edge of my knowledge, and hopefully won’t make any blunders [but experts, please point them out if you see any!]

Which black holes are being imaged?

The initial plan behind the so-called “Event Horizon Telescope” (the name deserves some discussion; see below) has been to make images of two black holes at the centers of galaxies (the star-cities in which most stars are found.) These are not black holes formed by the collapse of individual stars, such as the ones whose collisions have been observed through their gravitational waves. Central galactic black holes are millions or even billions of times larger!  The ones being observed are

  1. the large and `close’ black hole at the center of the Milky Way (the giant star-city we call home), and
  2. the immense but much more distant black hole at the center of M87 (a spectacularly big star-megalopolis.)
[Image: MilkyWayAndM87.jpg]

Left: the Milky Way as seen in the night sky; we see our galaxy from within, and so cannot see its spiral shape directly.  The brightest region is toward the center of the galaxy, and deep within it is the black hole of interest, as big as a large star but incredibly small in this image.  Right: the enormous elliptically-shaped galaxy M87, which sports an enormous black hole (but again, incredibly small in this image) at its center.  The blue stripe is a jet of material hurled at near-light-speed from the region very close to the black hole.

Why go after both of these black holes at the same time? Just as the Sun and Moon appear the same size in our sky because of an accident — the Sun’s greater distance is almost precisely compensated by its greater size — these two black holes appear similarly sized, albeit very tiny, from our vantage point.

Our galaxy’s central black hole has a mass of about four million Suns, and it is about twenty times wider than our Sun (in contrast to the black holes whose gravitational waves were observed, which are the size of a human city.) But from the center of our galaxy, it takes light tens of thousands of years to reach Earth, and at such a great distance, this big black hole appears as small as would a tiny grain of sand from a plane at cruising altitude. Try to see the sand grains on a beach from a plane window!

Meanwhile, although M87 lies about two thousand times further away, its central black hole has a mass and radius about two thousand times larger than our galaxy’s black hole. Consequently it appears roughly the same size on the sky.
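
The coincidence is easy to check with rough numbers (a sketch with assumed masses and distances; for a non-rotating black hole the dark patch is about 5.2 Schwarzschild radii across):

    import numpy as np

    G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30
    ly = 9.461e15                                  # metres per light-year

    def shadow_microarcsec(mass_suns, dist_ly):
        shadow = 2 * np.sqrt(27) * G * mass_suns * M_sun / c**2   # ~5.2 r_s
        theta = shadow / (dist_ly * ly)            # small-angle approximation
        return np.degrees(theta) * 3600 * 1e6      # radians -> microarcseconds

    print(f"Milky Way: {shadow_microarcsec(4e6, 26_000):.0f} microarcsec")
    print(f"M87:       {shadow_microarcsec(6.5e9, 55e6):.0f} microarcsec")
    # both come out around 40-50 microarcseconds: the same apparent size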

We may get our first images of both black holes at Wednesday’s announcement, though it is possible that so far only one image is ready for public presentation.

How can one see something as black as a black hole?!?

First of all, aren’t black holes black? Doesn’t that mean they don’t emit (or reflect) any light? Yes, and yes. [With a caveat — Hawking radiation — but while that’s very important for extremely small black holes, it’s completely irrelevant for these ones.]

So how can we see something that’s black against a black night sky? Well, a black hole off by itself would indeed be almost invisible.  [Though detectable because its gravity bends the light of objects behind it, proving it was a black hole and not a clump of, say, dark matter would be tough.]

But the centers of galaxies have lots of `gas’ — not quite what we call `gas’ in school, and certainly not what we put in our cars. `Gas’ is the generalized term astronomers use for any collection of independently wandering atoms (or ions), atomic nuclei, and electrons; if it’s not just atoms it should really be called plasma, but let’s not worry about that here. Some of this `gas’ is inevitably orbiting the black hole, and some of it is falling in (and, because of complex interactions with magnetic fields, some is being blasted outward before it falls in.) That gas, unlike the black hole, will inevitably glow — it will emit lots of light. What the astronomers are observing isn’t the black hole; they’re observing light from the gas!

And by light, I don’t only mean what we can see with human eyes. The gas emits electromagnetic waves of all frequencies, including not only the visible frequencies but also much higher frequencies, such as those we call X-rays, and much lower frequencies, such as those we call radio waves. To detect these invisible forms of light, astronomers build all sorts of scientific instruments, which we call `telescopes’ even though they don’t involve looking into a tube as with traditional telescopes.

Is this really a “photograph” of [the gas in the neighborhood of] a black hole?

Yes and (mostly) no.  What you’ll be shown is not a picture you could take with a cell-phone camera if you were in a nearby spaceship.  It’s not visible light that’s being observed.  But it is invisible light — radio waves — and since all light, visible and not, is made from particles called `photons’, technically you could still say it is a “photo”-graph.

As I said, the telescope being used in this effort doesn’t have a set of mirrors in a tube like your friendly neighbor’s amateur telescope. Instead, it uses radio receivers to detect electromagnetic waves that have frequencies above what your traditional radio or cell phone can detect [in the hundred gigahertz range, over a thousand times above what your FM radio is sensitive to.]  Though some might call them microwaves, let’s just call them radio waves; it’s just a matter of definition.

So the images you will see are based on the observation of electromagnetic waves at these radio frequencies, but they are turned into something visible for us humans using a computer. That means the color of the image is inserted by the computer user and will be arbitrary, so pay it limited attention. It’s not the color you would see if you were nearby.  Scientists will choose a combination of the color and brightness of the image so as to indicate the brightness of the radio waves observed.

If you were nearby and looked toward the black hole, you’d see something else. The gas would probably appear colorless — white-hot — and the shape and brightness, though similar, wouldn’t be identical.

If I had radios for eyes, is this what I would see?

Suppose you had radio receivers for eyes instead of the ones you were born with; is this image what you would see if you looked at the black hole from a nearby planet?

Well, to some extent — but still, not exactly.  There’s another very important way that what you will see is not a photograph. It is so difficult to make this measurement that the image you will see is highly processed — that is to say, it will have been cleaned up and adjusted using special mathematics and powerful computers. Various assumptions are key ingredients in this image-processing. Thus, you will not be seeing the `true’ appearance of the [surroundings of the] black hole. You will be seeing the astronomers’ best guess as to this appearance based on partial information and some very intelligent guesswork. Such guesswork may be right, but as we may learn only over time, some of it may not be.

This guesswork is necessary.  To make a nice clear image of something so tiny and faint using radio waves, you'd naively need a radio receiver as large as the Earth. A trick astronomers use when looking at distant objects is to build gigantic arrays of large radio receivers, which can be combined together to make a `telescope' much larger and more powerful than any one receiver. The tricks for doing this efficiently are beyond what I can explain here, but involve the term `interferometry'. Examples of large radio telescope arrays include ALMA, the center part of which can be seen in this image, which was built high up on a plateau between two volcanoes in the Atacama desert.
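
Why Earth-sized? Because the angular resolution of a telescope or array is roughly the observing wavelength divided by its diameter (or, for an array, its longest baseline). A quick estimate with numbers of my own choosing, millimeter waves and an Earth-sized separation:

import math

wavelength = 1.3e-3      # meters, roughly the 230 GHz band (my assumption)
baseline = 1.27e7        # meters, about the diameter of the Earth

theta = wavelength / baseline                 # radians, ~ lambda / D
print(theta * 180 / math.pi * 3600 * 1e6)     # ~21 micro-arcseconds

That is comparable to the apparent sizes estimated above, which is exactly why baselines shorter than a planet will not do.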

But even ALMA isn’t up to the task. And we can’t make a version of ALMA that covers the Earth. So the next best thing is to use ALMA and all of its friends, which are scattered at different locations around the Earth — an incomplete array of single and multiple radio receivers, combined using all the tricks of interferometry. This is a bit like (and not like) using a telescope that is powerful but has large pieces of its lens missing. You can get an image, but it will be badly distorted.

To figure out what you're seeing, you must use your knowledge of your imperfect lens, and work backwards to figure out what your distorted image really would have looked like if you had a perfect lens.
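
If you would like to see the effect of the missing pieces for yourself, here is a tiny numerical toy (entirely my own illustration, not the collaboration's pipeline): make a fake sky, keep only a few of its Fourier components, the way a sparse array does, and invert.

import numpy as np

# Toy "sky": a bright ring, a stand-in for the glowing gas around a black hole.
n = 128
y, x = np.indices((n, n)) - n // 2
sky = np.exp(-((np.hypot(x, y) - 20.0) ** 2) / 8.0)

# A perfect Earth-sized telescope would measure all Fourier components of the sky.
vis = np.fft.fftshift(np.fft.fft2(sky))

# An incomplete array samples only a small fraction of them (its `uv-coverage').
rng = np.random.default_rng(0)
mask = rng.random((n, n)) < 0.05     # keep ~5% of the measurements

# Naive inversion of the sparse data gives a badly distorted `dirty' image.
dirty = np.fft.ifft2(np.fft.ifftshift(vis * mask)).real

The naive inversion is smeared and full of artifacts; recovering the ring from it requires exactly the kind of working-backwards just described.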

Even that’s not quite enough: to do this, you need to have a pretty good guess about what you were going to see. That is where you might go astray; if your assumptions are wrong, you might massage the image to look like what you expected instead of how it really ought to look.   [Is this a serious risk?   I’m not yet expert enough to know the details of how delicate the situation might be.]

It is possible that in coming days and weeks there will be controversies about whether the image-processing techniques used and the assumptions underlying them have created artifacts in the images that really shouldn’t be there, or removed features that should have been there. This could lead to significant disagreements as to how to interpret the images. [Consider these concerns a statement of general prudence based on past experience; I don’t have enough specific expertise here to give a more detailed opinion.]

So in summary, this is not a photograph you’d see with your eyes, and it’s not a complete unadulterated image — it’s a highly-processed image of what a creature with radio eyes might see.  The color is arbitrary; some combination of color and brightness will express the intensity of the radio waves, nothing more.  Treat the images with proper caution and respect.

Will the image show what [the immediate neighborhood of] a black hole looks like?

Oh, my friends and colleagues! Could we please try to be a little more careful using the phrase `looks like‘? The term has two very different meanings, and in contexts like this one, it really, really matters.

Let me put a spoon into a glass of water: on the left, I’ve drawn a diagram of what it looks like, and on the right, I’ve put a photograph of what it looks like.

GlassNSpoonBoth.png

On the left, a sketchy drawing of a spoon in water, showing what it “looks like” in truth; on the right, what it “looks like” to your eyes, distorted by the bending of the light’s path between the spoon and my camera.

You notice the difference, no doubt. The spoon on the left looks like a spoon; the spoon on the right looks like something invented by a surrealist painter.  But it’s just the effect of water and glass on light.

What’s on the left is a drawing of where the spoon is located inside the glass; it shows not what you would see with your eyes, but rather a depiction of the `true location’ of the spoon. On the right is what you will see with your eyes and brain, which is not showing you where the objects are in truth but rather is showing you where the light from those objects is coming from. The truth-depiction is drawn as though the light from the object goes straight from the object into your eyes. But when the light from an object does not take a straight path from the objects to you — as in this case, where the light’s path bends due to interaction of the light with the water and the glass — then the image created in your eyes or camera can significantly differ from a depiction of the truth.

The same issue arises, of course, with any sort of lens or mirror that isn’t flat. A room seen through a curved lens, or through an old, misshapen window pane, can look pretty strange.  And gravity? Strong gravity near a black hole drastically modifies the path of any light traveling close by!

In the figure below, the left panel shows a depiction of what we think the region around a black hole typically looks like in truth. There is a spinning disk of gas, called the accretion disk, a sort of holding station for the gas. At any moment, a small fraction of the gas, at the inner edge of the disk, has crept too close to the black hole and is starting to spiral in. There are also usually jets of material flying out, roughly aligned with the axis of the black hole’s rapid rotation and its magnetic field. As I mentioned above, that material is being flung out of the black hole’s neighborhood (not out of the black hole itself, which would be impossible.)

accretiondisk_real_apparent.jpg

Left: a depiction `in truth' of the neighborhood of a black hole, showing the disk of slowly in-spiraling gas and the jets of material funneled outward by the black hole's magnetic field.  The black hole is not directly shown, but is significantly smaller than the inner edge of the disk.  The color is not meaningful.  Right: a simulation by Hotaka Shiokawa of how such a black hole may appear to the Event Horizon Telescope [if its disk is tipped up a bit more than in the image at left.]  The color is arbitrary; mainly the brightness matters.  The left side of the disk appears brighter than the right side due to a `Doppler effect'; on the left the gas is moving toward us, increasing the intensity of the radio waves, while on the right side it is moving away.  The dark area at the center is the black hole's sort-of-silhouette; see below.
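
About that Doppler brightening: a minimal sketch of the effect (the orbital speed below is an illustrative assumption of mine, not a measured value) shows how dramatic it can be. The observed intensity scales roughly as the cube of the Doppler factor.

import math

beta = 0.5                          # assumed gas speed, as a fraction of c
gamma = 1 / math.sqrt(1 - beta**2)

def doppler(cos_angle):             # +1: moving toward us; -1: moving away
    return 1 / (gamma * (1 - beta * cos_angle))

print(doppler(+1.0)**3 / doppler(-1.0)**3)   # 27: the approaching side wins big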

The image you will be shown, however, will perhaps look like the one on the right. That is an image of the radio waves as observed here at Earth, after the waves’ paths have been wildly bent — warped by the intense gravity near the black hole. Just as with any lens or mirror or anything similar, what you will see does not directly reveal what is truly there. Instead, you must infer what is there from what you see.

Just as you infer, when you see the broken twisted spoon in the glass, that probably the spoon is not broken or twisted, and the water and glass have refracted the light in familiar ways, so too we must make assumptions to understand what we’re really looking at in truth after we see the images tomorrow.

How serious are these assumptions?  Certainly, at their first attempt, astronomers will assume Einstein’s theory of gravity, which predicts how the light is bent around a black hole, is correct. But the details of what we infer from what we’re shown might depend upon whether Einstein’s formulas are precisely right. It also may depend on the accuracy of our understanding of and assumptions about accretion disks. Further complicating the procedure is that the rate and axis of the black hole’s rotation affects the details of the bending of the light, and since we’re not sure of the rotation yet for these black holes, that adds to the assumptions that must gradually be disentangled.

Because of these assumptions, we will not have an unambiguous understanding of the true nature of what appears in these first images.

Are we seeing the `shadow’ of a black hole?

A shadow: that’s what astronomers call it, but as far as I can tell, this word is jargon masquerading as English… the most pernicious type of jargon.

What’s a shadow, in English? Your shadow is a dark area on the ground created by you blocking the light from the Sun, which emits light and illuminates the area around you. How would I see your shadow? I’d look at the ground — not at you.

This is not what we are doing in this case. The gas is glowing, illuminating the region. The black hole is `blocking’ [caution! see below] some of the light. We’re looking straight toward the black hole, and seeing dark areas where illumination doesn’t reach us. This is more like looking at someone’s silhouette, not someone’s shadow!

SilhouetteShadow2.png

With the Sun providing a source of illumination, a person standing between you and the Sun would appear as a silhouette that blocks part of the Sun, and would also create a shadow on the ground. [I’ve drawn the shadow slightly askew to avoid visual confusion.]  In the black hole images, illumination is provided by the glowing gas, and we’ll see a sort-of-silhouette [but see below!!!] of the black hole.  There’s nothing analogous to the ground, or to the person’s shadow, in the black hole images.

That being said, it’s much more clever than a simple silhouette, because of all that pesky bending of the light that the black hole is doing.  In an ordinary silhouette, the light from the illuminator travels in straight lines, and an object blocks part of the light.   But a black hole does not block your view of what’s behind it;  the light from the gas behind it gets bent around it, and thus can be seen after all!

Still, after you calculate all the bending, you find out that there’s a dark area from which no light emerges, which I’ll informally call a quasi-silhouette.  Just outside this is a `photon-sphere’, which creates a bright ring; I’ll explain this elsewhere.   That resembles what happens with certain lenses, in contrast to the person’s silhouette shown above, where the light travels in straight lines.  Imagine that a human body could bend light in such a way; a whimsical depiction of what that might look like is shown below:

QuasiSilhouette.png

If a human body could bend light the way a black hole does, it would distort the Sun’s appearance.  The light we’d expect to be blocked would instead be warped around the edges.  The dark area, no longer a simple outline of the body, would take on a complicated shape.

Note also that the black hole’s quasi-silhouette probably won’t be entirely dark. If material from the accretion disk (or a jet pointing toward us) lies between us and the black hole, it can emit light in our direction, partly filling in the dark region.

Thus the quasi-silhouette we'll see in the images is not the outline of the black hole's edge, but an effect of the light bending, and is in fact considerably larger than the black hole.  In truth it may be as much as 50% larger in radius than the event horizon, and the quasi-silhouette as seen in the image may appear 2.5 to 5 times larger than the true event horizon (depending on how fast the black hole rotates) — all due to the bent paths of the radio waves.
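
Where do those numbers come from? For a non-rotating (Schwarzschild) black hole the formulas are simple enough to check in a few lines; the rotating case needs the full Kerr geometry, so treat this as a sketch.

import math

# Work in units of GM/c^2, so the event-horizon radius of a
# non-rotating black hole is exactly 2.
r_horizon = 2.0
r_photon_sphere = 3.0               # 1.5x the horizon: the "50% larger" above
b_shadow = math.sqrt(27.0)          # critical impact parameter of light, ~5.2

print(r_photon_sphere / r_horizon)  # 1.5
print(b_shadow / r_horizon)         # ~2.6: apparent quasi-silhouette vs horizon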

Interestingly, it turns out that the details of how the black hole is rotating don’t much affect the size of the quasi-silhouette. The black hole in the Milky Way is already understood well enough that astronomers know how big its quasi-silhouette ought to appear, even though we don’t know its rotation speed and axis.  The quasi-silhouette’s size in the image will therefore be an important test of Einstein’s formulas for gravity, even on day one.  If the size disagrees with expectations, expect a lot of hullabaloo.

What is the event horizon, and is it present in the image?

The event horizon of a black hole, in Einstein’s theory of gravity, is not an object. It’s the edge of a region of no-return, as I’ve explained here. Anything that goes past that point can’t re-emerge, and nothing that happens inside can ever send any light or messages back out.

Despite what some writers are saying, we’re not expecting to see the event horizon in the images. As is hopefully clear by now, what astronomers are observing is (a) the radio waves from the gas near the black hole, after the waves have taken strongly curved paths, and (b) a quasi-silhouette of the black hole from which radio waves don’t emerge. But as I explained above, this quasi-silhouette is considerably larger than the event horizon, both in truth and in appearance. The event horizon does not emit any light, and it sits well inside the quasi-silhouette, not at its edge.

Still, what we’ll be seeing is closer to the event horizon than anything we’ve ever seen before, which is really exciting!   And if the silhouette has an unexpected appearance, we just might get our first hint of a breakdown of Einstein’s understanding of event horizons.  Don’t bet on it, but you can hope for it.

Can we hope to see the singularity of a black hole?

No, for two reasons.

First, there probably is no singularity in the first place. It drives me nuts when people say there’s a singularity inside a black hole; that’s just wrong. The correct statement is that in Einstein’s theory of gravity, singularities (i.e. unavoidable infinities) arise in the math — in the formulas for black holes (or more precisely, in the solutions to those formulas that describe black holes.) But a singularity in the math does not mean that there’s a singularity in nature!  It usually means that the math isn’t quite right — not that anyone made a mistake, but that the formulas that we’re using aren’t appropriate, and need to be modified in some way, because of some natural phenomenon that we don’t yet understand.

A singularity in a formula implies a mystery in nature — not a singularity in nature.

In fact, historically, in every previous case where singularities have appeared in formulas, it simply meant that the formula (or solution) was not accurate in that context, or wasn’t being used correctly. We already know that Einstein’s theory of gravity can’t be complete (it doesn’t accommodate quantum physics, which is also a part of the real world) and it would be no surprise if its incompleteness is responsible for these infinities. The math singularity merely signifies that the physics very deep inside a black hole isn’t understood yet.

[The same issue afflicts the statement that the Big Bang began with a singularity; the solution to the equations has a singularity, yes, but that’s very far from saying that nature’s Big Bang actually began with one.]

Ok, so how about revising the question: is there any hope of seeing the region deep inside the black hole where Einstein’s equations have a singularity? No. Remember what’s being observed is the stuff outside the black hole, and the black hole’s quasi-silhouette. What happens inside a black hole stays inside a black hole. Anything else is inference.

[More precisely, what happens inside a huge black hole stays inside a black hole for eons.  For tiny black holes it comes out sooner, but even so it is hopelessly scrambled in the form of `Hawking radiation.’]

Any other questions?

Maybe there are some questions that are bothering readers that I haven't addressed here?  I'm super-busy this afternoon and all day Wednesday with non-physics things, but maybe if you ask the question early enough I can address it here before the press conference (at 9 am Wednesday, New York City time).   Also, if you find my answers confusing, please comment; I can try to further clarify them for later readers.

It’s a historic moment, or at least the first stage in a historic process.  To me, as I hope to all of you, it’s all very exciting and astonishing how the surreal weirdness of Einstein’s understanding of gravity, and the creepy mysteries of black holes, have suddenly, in just a few quick years, become undeniably real!

by Matt Strassler at April 09, 2019 01:54 PM

Jon Butterworth - Life and Physics

Observation of a previously unseen behaviour of light
Originally posted on Life and Physics:
Beams of light do not, generally speaking, bounce off each other like snooker balls. But at the high energies in the Large Hadron Collider at CERN they have just been observed doing exactly that…

by Jon Butterworth at April 09, 2019 01:50 PM

Jon Butterworth - Life and Physics

Another pentaquark (now on arXiv)
Originally posted on Life and Physics:
I was earlier into work than usual this morning after talking about a new pentaquark discovered by LHCb and reported just now in the Moriond QCD meeting. Even so, when I got in, Ryan…

by Jon Butterworth at April 09, 2019 06:09 AM

Clifford V. Johnson - Asymptotia

Chutney Time!

It was Chutney time here! Well this was a few weekends back. I’m posting late. I used up some winter-toughened tomatoes from the garden that remained and slowly developed on last year’s vines. It was time to clear them for new planting, and so off all these tomatoes came, red, … Click to continue reading this post

The post Chutney Time! appeared first on Asymptotia.

by Clifford at April 09, 2019 04:25 AM

April 08, 2019

John Baez - Azimuth

Symposium on Compositional Structures 4

There’s yet another conference in this fast-paced series, and this time it’s in Southern California!

Symposium on Compositional Structures 4, 22–23 May, 2019, Chapman University, California.

The Symposium on Compositional Structures (SYCO) is an interdisciplinary series of meetings aiming to support the growing community of researchers interested in the phenomenon of compositionality, from both applied and abstract perspectives, and in particular where category theory serves as a unifying common language.
The first SYCO was in September 2018, at the University of Birmingham. The second SYCO was in December 2018, at the University of Strathclyde. The third SYCO was in March 2019, at the University of Oxford. Each meeting attracted about 70 participants.

We welcome submissions from researchers across computer science, mathematics, physics, philosophy, and beyond, with the aim of fostering friendly discussion, disseminating new ideas, and spreading knowledge between fields. Submission is encouraged for both mature research and work in progress, and by both established academics and junior researchers, including students.

Submission is easy, with no format requirements or page restrictions. The meeting does not have proceedings, so work can be submitted even if it has been submitted or published elsewhere. Think creatively—you could submit a recent paper, or notes on work in progress, or even a recent Masters or PhD thesis.

While no list of topics could be exhaustive, SYCO welcomes submissions with a compositional focus related to any of the following areas, in particular from the perspective of category theory:

• logical methods in computer science, including classical and quantum programming, type theory, concurrency, natural language processing and machine learning;

• graphical calculi, including string diagrams, Petri nets and reaction networks;

• languages and frameworks, including process algebras, proof nets, type theory and game semantics;

• abstract algebra and pure category theory, including monoidal category theory, higher category theory, operads, polygraphs, and relationships to homotopy theory;

• quantum algebra, including quantum computation and representation theory;

• tools and techniques, including rewriting, formal proofs and proof assistants, and game theory;

• industrial applications, including case studies and real-world problem descriptions.

This new series aims to bring together the communities behind many previous successful events which have taken place over the last decade, including “Categories, Logic and Physics”, “Categories, Logic and Physics (Scotland)”, “Higher-Dimensional Rewriting and Applications”, “String Diagrams in Computation, Logic and Physics”, “Applied Category Theory”, “Simons Workshop on Compositionality”, and the “Peripatetic Seminar in Sheaves and Logic”.

SYCO will be a regular fixture in the academic calendar, running regularly throughout the year, and becoming over time a recognized venue for presentation and discussion of results in an informal and friendly atmosphere. To help create this community, and to avoid the need to make difficult choices between strong submissions, in the event that more good-quality submissions are received than can be accommodated in the timetable, the programme committee may choose to defer some submissions to a future meeting, rather than reject them. This would be done based largely on submission order, giving an incentive for early submission, but would also take into account other requirements, such as ensuring a broad scientific programme. Deferred submissions can be re-submitted to any future SYCO meeting, where they would not need peer review, and where they would be prioritised for inclusion in the programme. This will allow us to ensure that speakers have enough time to present their ideas, without creating an unnecessarily competitive reviewing process. Meetings will be held sufficiently frequently to avoid a backlog of deferred papers.

Invited speakers

John Baez, University of California, Riverside: Props in network theory.

Tobias Fritz, Perimeter Institute for Theoretical Physics: Categorical probability: results and challenges.

Nina Otter, University of California, Los Angeles: A unified framework for equivalences in social networks.

Important dates

All times are anywhere-on-earth.

• Submission deadline: Wednesday 24 April 2019
• Author notification: Wednesday 1 May 2019
• Registration deadline: TBA
• Symposium dates: Wednesday 22 and Thursday 23 May 2019

Submission

Submission is by EasyChair, via the following link:

https://easychair.org/conferences/?conf=syco4

Submissions should present research results in sufficient detail to allow them to be properly considered by members of the programme committee, who will assess papers with regards to significance, clarity, correctness, and scope. We encourage the submission of work in progress, as well as mature results. There are no proceedings, so work can be submitted even if it has been previously published, or has been submitted for consideration elsewhere. There is no specific formatting requirement, and no page limit, although for long submissions authors should understand that reviewers may not be able to read the entire document in detail.

Programme Committee

• Miriam Backens, University of Oxford
• Ross Duncan, University of Strathclyde and Cambridge Quantum Computing
• Brendan Fong, Massachusetts Institute of Technology
• Stefano Gogioso, University of Oxford
• Amar Hadzihasanovic, Kyoto University
• Chris Heunen, University of Edinburgh
• Dominic Horsman, University of Grenoble
• Martti Karvonen, University of Edinburgh
• Kohei Kishida, Dalhousie University (chair)
• Andre Kornell, University of California, Davis
• Martha Lewis, University of Amsterdam
• Samuel Mimram, École Polytechnique
• Benjamin Musto, University of Oxford
• Nina Otter, University of California, Los Angeles
• Simona Paoli, University of Leicester
• Dorette Pronk, Dalhousie University
• Mehrnoosh Sadrzadeh, Queen Mary
• Pawel Sobocinski, University of Southampton
• Joshua Tan, University of Oxford
• Sean Tull, University of Oxford
• Dominic Verdon, University of Bristol
• Jamie Vicary, University of Birmingham and University of Oxford
• Maaike Zwart, University of Oxford

by John Baez at April 08, 2019 01:00 AM

April 07, 2019

Lubos Motl - string vacua and pheno

Physics knows a lot about the electron beyond the simple "Standard Model picture"
Ethan Siegel wrote a text about the electron, Ask Ethan: What Is An Electron?, which includes some fair yet simplified standard conceptual facts about the electron's being a particle and a wave, about its properties being statistically predictable, and about the sharp values of its "quantum numbers", some discrete "charges" that are either exactly or approximately conserved in various interactions.



While his statements look overwhelmingly right, there is a general theme that I expected to bother me and that bothers me: Siegel presents a frozen caricature of the particle physicists' knowledge that could be considered "a popularization of the snapshot from 1972 or so". There doesn't seem to be any added value of his text relatively to e.g. the Wikipedia article on the Standard Model. After all, the images such as the list of particles above were just taken from that article.



The final two sentences of Siegel's article suggest that he realizes the "work in progress" character of science – and particle physics is what matters here – and he supports the research:
...Why it works the way it does is still an open question that we have no satisfactory answer for. All we can do is continue to investigate, and work towards a more fundamental answer.
The only problem is that he talks the talk but doesn't walk the walk. This text he wrote – and basically everything he wrote about particle physics – indicates that he has displayed no interest whatsoever in learning anything new about the electron that would go e.g. beyond the elementary undergraduate freshman factoids from his article.

The same comment can be said about virtually all popular writers about particle physics these days. They either try to sell pseudoscientific theories that were created from scratch and have nothing to do with the accumulated knowledge of physics – theories that are really not capable of reproducing the Standard Model predictions and are usually very far from passing this test.

Or they avoid such theories but then they avoid all research altogether and spread the misconception that the frozen high school caricature of particle physics from 1972 is what will be enough as "summary of physics" forever.



But the scientific truth and the scientific approach is something completely different – in some sense, it is in between the extreme approaches by the popular writers. Scientific research does take all the established knowledge into account – as compressed e.g. into the Standard Model, an effective theory – but it just didn't freeze when the last terms of the Standard Model Lagrangian were written down.

I think that he was actually asked – pretty much explicitly – about some things that transcend the simplistic picture he has summarized, but his answer included nothing to address that question whatsoever. OK, first, Siegel hasn't really explained any conceptual ideas relevant for the answer, not even at the level of the Standard Model. Note that the original question read:
Please will you describe the electron... explaining what it is, and why it moves the way it does when it interacts with a positron. If you'd also like to explain why it moves the way that it does in an electric field, a magnetic field, and a gravitational field, that would be nice. An explanation of charge would be nice too, and an explanation of why the electron has mass.
OK, Siegel responded with the tables of elementary particles and lists of quantum numbers. But I think that those really don't explain any of these matters – why the electron moves the way it moves in the electromagnetic field (and analogously the gravitational field), why it may annihilate with the positron, what gives the electron its mass (the Higgs mechanism and some Yukawa couplings, which may themselves be approximations of something more fundamental), and more.

Even at the level of field theory, there is a lot of conceptual stuff to say. OK, all the electromagnetic interactions of the electron boil down to the term in the action\[

S_{\rm int} = \int d^4 x\, j^\mu A_\mu = e\int d^4 x\, \bar \Psi A_\mu\gamma^\mu \Psi.

\] Add your standard coefficients or signs or \(2\pi\) factors if you hate my schematic picture. OK, there's apparently some electromagnetic 4-potential \(A_\mu\) that interacts with the current – the density of the electric charge and the vector-valued density of the flux of that charge. And this term increases or decreases the particles' energy in the electromagnetic potential, bends the paths in magnetic fields, and more.

The existence of the \(A_\mu\) electromagnetic gauge field may be derived from the \(U(1)\) gauge symmetry, by requiring that the apparent freedom to change the phase of the field \(\Psi\) is extended to become the independent freedom at each spacetime point. Once you accept that the field \(A_\mu\) is needed for that and you have fields like \(\Psi\) for the electron and \(A_\mu\) for the electromagnetic field, the structure of the terms in the Lagrangian is mostly dictated by "nonzero physical content", "consistency", "Lorentz covariance", and "gauge symmetry".



Once \(\Psi,A_\mu\) are understood to be quantum fields i.e. operators, all the probabilities of all conceivable outcomes may be calculated perturbatively (at least if a good enough approximation suffices for you) from expressions such as the annihilation Feynman diagram above. There is quite an interesting question for the beginning: "Why can the point-like electron and the point-like positron exactly hit each other at all?"

If you imagine that they're like point-like planets, the probability that their initial velocities exactly lead them to a head-on collision is infinitesimal i.e. zero. And with a high enough energy, it should also be impossible for them to lose enough energy to spiral in and fall onto one another. In classical physics, the annihilation of point-like particles would be impossible.

But quantum mechanics – its subtype called quantum field theory in this case – saves the day. Quantum mechanics calculates probabilities. They're calculated as squared absolute values of probability amplitudes. And probability amplitudes of quantum field theories involve terms (quantities that are added to other terms with plus signs) that are integrals over histories. Alternatively, they may be rearranged as Feynman diagrams which are integrals over the spacetime points where the \(\bar\Psi \cdot A\cdot \Psi\) interaction took place.

Because quantum mechanics allows all intermediate histories, it also allows the intermediate history in which the electron and positron precisely collide – their position is the same at some point (which is reflected by continuous electron-positron line in the diagram). This single "infinitely unlikely point" of the history space (which is located at the cubic vertex or vertices of the Feynman diagram) still contributes a finite amount because some terms are weighted by a delta-function (whose integral over the space is one). So just because the precise encounter of the electron and the positron is a possibility, quantum mechanics guarantees that it changes the final probabilities of the elastic collisions, inelastic collisions, or annihilation by a finite amount!
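
Schematically (suppressing the spinor and polarization structure, so take this as a sketch rather than the full QED amplitude), it is the integration over the spacetime position of the vertex that produces this finite, delta-function-weighted contribution:\[

\mathcal{M} \sim e\int d^4 x\, e^{i(p_{e^-}+p_{e^+}-k_1-k_2)\cdot x} = e\,(2\pi)^4\, \delta^4(p_{e^-}+p_{e^+}-k_1-k_2).

\] The integral over all conceivable meeting points \(x\) turns the "infinitely unlikely" coincidence into a finite amplitude that enforces energy-momentum conservation.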

In classical physics, just the existence of an infinitely unlikely possible intermediate history couldn't change the predictions. Classical physicists (or critics of quantum mechanics) could passionately argue that it's impossible for the predictions to change due to infinitely fine-tuned possibilities. But quantum mechanics disagrees and says that the predictions of the probabilities of virtually any outcome are unavoidably affected if there exist some extra potential intermediate histories.

You know, I am sure that this is the kind of the conceptually important wisdom that should be explained by popular articles about similar questions – exactly because the uninformed beginner is likely to make wrong assumptions and incorrect guesses. And because these snippets are great examples showing how some general principles of quantum mechanics (e.g. summation over histories) qualitatively affect particular processes such as the annihilation (they make it possible). But it's never being done – perhaps because none of the popular writers actually understands any of these things.

Obviously, the answer to the question posed to Ethan may be a more or less detailed summary of a quantum field theory course. It makes no sense to try to compress a whole quantum field theory course into this blog post or any other single blog post. But I think that even the point "to understand these things properly, you need to learn quantum field theory well" is simply not being communicated. These laymen clearly maintain some kind of a belief that they may circumvent all the difficult stuff of QFT (and maybe even circumvent all of quantum mechanics) while properly understanding the answers to all questions about Quantum Electrodynamics. But it's simply not the case.

But the laymen – and maybe even young prospective physicists – are much more misdirected when it comes to any "thinking beyond the old-fashioned Standard Model machinery and pictures". They're deliberately indoctrinated by the completely wrong meme that nothing has changed about the physicists' knowledge or thinking about these matters since 1972, a year I semi-randomly picked. Near the beginning, Siegel wrote:
They [electrons] were the first fundamental particles discovered, and over 100 years later, we still know of no way to split electrons apart.
Viewed from the appropriate angle, this is a perfectly valid statement. And I have almost certainly made an almost identical statement many times, too. The electron still looks like an elementary particle, more than 100 years after its discovery. In this sense, the electron differs from molecules, atoms, nuclei, protons, and neutrons, which have been shown to have a particular internal architecture.

However, in between the lines and in the broader context, you may see that Siegel and others are conveying something stronger that actually isn't right. They want the reader to think that nothing has changed about our view that "the electron could be or should be a structureless point-like particle" from 1919 or from 1972. That is simply untrue.

After 1972, physics has understood lots of deep things that have changed the physicists' understanding of all these questions. It hasn't looked helpful to teach these things to the laymen or high school students, so the laymen and the high school students just remain largely ignorant about them and keep the naive Wikipedia-style or high-school-level pictures as frozen in 1972.

But that doesn't mean that the scientists' opinions and expectations haven't changed since 1972. They have and the popular writers who try to deny these transformations are simply creating an abyss between their readers and the actual scientific research.

If we focus on the "internal architecture of the electron", we may say that the main changes since 1972 took place in two realms:
  • renormalization group – and more generally, a deeper, non-perturbative etc. understanding of quantum field theories, what they are, what are their limits, where the parameters come from and which of them are natural etc.
  • string theory – and more generally our understanding of particular mechanisms beyond quantum field theory that clarify in what sense the Standard Model is generated or may be generated as an approximate description of a more fundamental theory
OK, Siegel – and most other popular writers – love to deny and hide both. Last week, I discussed some emotional ideas of Eric Weinstein. One of his proposals was that the task for theoretical physicists should be to popularize the renormalization group, a gem they are sitting on, and to spread it to other fields, turning it into general knowledge.

I have explained that my side is even losing the battle to preserve \(x,y\) at the Czech elementary schools, let alone seeds of the renormalization group – and I have argued that top physicists simply shouldn't be understood as teachers or communicators because they're ultimately doing a much more selective and special kind of work. But articles such as Siegel's Forbes column are the perfect examples of the venues in which the renormalization group thinking should be promoted. And the question about the internal architecture of the electron was a perfect opportunity to make a small step for a man but a larger step for the mankind in this direction. Whether because Siegel doesn't know the renormalization group or doesn't like it, he hasn't used the opportunity at all. It's not just him – it's most of the mass writers. The underlying reason might be that the writers simply do not have a deeper conceptual understanding of the issues than the readers – so the writers just don't have anything substantial to teach to the readers, aside from some boring tables.

So the readers are expected to keep their belief that nothing has changed since 1972 and the electron is just point-like. The end of the story.

In real physics, while we don't have any promising quantum field theories in which the electron is composite, we do understand that the electron could very well be composite – it's a possibility that we must simply always consider viable because we know that seemingly simple theories such as Quantum Electrodynamics with its simple point-like electrons generally arise as long-distance limits of different (sometimes simpler, sometimes more complex) theories with different degrees of freedom.

Even if "the electron" stayed point-like up to the Planck scale, the field that produces the electron is "renormalized" in between the high-energy regime which are close to the fundamental laws of Nature; and the low-energy regime that is enough for a good enough description of low-energy phenomena involving the electron.

In fact, the electroweak theory that has already been established forces you to isolate the electron's spinor fields from some doublets (in which the neutrino fields start as indistinguishable fields at high energies, before the Higgs has a vev) – and also diagonalize some mass matrix that involves three generations of charged (and neutral) leptons. We also know that the fine-structure constant \(\alpha\approx 1/137.036\) isn't really fundamental. First of all, it's a function obtained from two electroweak couplings that mix, and these electroweak fine-structure constants are more fundamental than the electromagnetic one; and all these constants "run" with the energy scale, while the high-energy values of the couplings differ from the notorious \(1/137\) and are yet more fundamental.
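
To make the "running" concrete, here is a one-loop toy computation (keeping only the electron loop, so the numbers are illustrative; the real Standard Model running includes all charged particles and the electroweak mixing):

import math

# One-loop QED with a single charged Dirac fermion:
# alpha(Q) = alpha(Q0) / (1 - (2*alpha(Q0)/(3*pi)) * ln(Q/Q0))
alpha_0 = 1 / 137.036          # low-energy fine-structure constant (Q0 ~ m_e)
m_e, m_Z = 0.000511, 91.19     # GeV

def alpha(Q, Q0=m_e, a0=alpha_0):
    return a0 / (1 - (2 * a0 / (3 * math.pi)) * math.log(Q / Q0))

print(1 / alpha(m_Z))          # ~134: the coupling strengthens with energy;
                               # with all charged particles included, one gets
                               # the familiar ~1/128 at the Z mass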

My point is that Siegel deliberately tries to reinforce the readers' belief that there is nothing conceptual about the electron that goes beyond Quantum Electrodynamics. Even the fermion mass matrices of the Standard Model and the doublets – some aspects of the electron beyond Quantum Electrodynamics – are being deliberately obfuscated while the beyond-the-Standard-Model thinking is being hidden altogether.

This brings me to the second class of "hidden secrets". Siegel doesn't want the readers to embrace any insights or arguments from the renormalization group era – which also began in the mid 1970s or so. But he also wants to hide all the actual known types of "more fundamental theories" that produce the Standard Model as an approximation.

They include grand unified theories, supersymmetric theories, and – most ambitiously and rigorously – compactifications of string theory. While no example has been picked as the single, provably right theory beyond the Standard Model, they have provided us with many working proofs of the concept that theories compatible with all the known observations exist where the Standard Model terms are not the "end of the explanatory story".

In particular, the electric charge of the electron – the lightest charged particle – may emerge as the quantized Kaluza-Klein momentum \(p_5=Q/R\), \(Q\in\ZZ\) of a particle moving in a higher-dimensional Universe with a circular dimension \(x^5\). The electric charge may also arise as the winding number of a string, or a wrapping number of a membrane, counting how many times the string or brane is wound around a non-contractible circle or a non-contractible higher-dimensional submanifold of the manifold of extra dimensions.
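
To sketch the Kaluza-Klein mechanism (a standard textbook argument, not anything specific to the paper discussed above): on a circle of radius \(R\), a wavefunction \(\psi\sim e^{ip_5 x^5}\) must be single-valued under \(x^5\to x^5+2\pi R\), which forces\[

p_5 = \frac{Q}{R}, \qquad Q\in\ZZ,

\] and from the four-dimensional viewpoint this quantized momentum couples to the gauge field hiding in the metric component \(g_{\mu 5}\) exactly as an electric charge would.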

Also, the electric charge may be reduced to a topological invariant in some field configurations – e.g. those involving tachyons within higher-dimensional annihilating D-branes, along the lines initiated by Ashoke Sen. We could find a few more interesting effects in which "the electric charge has some deeper explanation or geometrization" within string theory. We could divide the stringy compactifications – heterotic, type I, type IIA brane worlds, M-theory with boundaries, M-theory on \(G_2\) manifolds, and F-theory of several subfamilies – into groups. In each group, some particular "geometrization" of the electric charge would be more important than others.

And we could see that the electron itself is most likely to "be" either a vibrating closed string, or a vibrating open string of some kind, although it's really a shrunk wrapped membrane in some M-theory models etc. As a vibrating string, the electron is composite, after all, although the "string bits" – building blocks in a regularized picture – are just a string length away from each other, an inaccessibly tiny distance.

Again, none of them has been established as the final answer yet, which is why the research of them is ongoing. But the fact that none of them has been established doesn't mean that the research is meaningless. If one had adopted this utterly stupid anti-research "logic" in the past, then all scientific progress would have been impossible. Why? Simply because every insight, however rock-solid at the end, must first be investigated by someone who isn't immediately sure that the idea is correct.

If someone spits upon research just because the conclusions aren't rock-solid yet, then he or she is spitting on all research and on science as a whole. I find it amazing that so many people seem unaware of this elementary point.

So Siegel's reader was asking a question of the type "I want to understand electrons at a deeper level" and Siegel responded with "don't ask, nothing interesting to be seen relatively to the high school summaries from 1972". If Siegel and/or his readers were this disinterested in the actual answers and possible answers to these questions, as they are being refined by actual researchers, why do they ask the questions at all? And why do they pretend to answer them?

It makes no sense and this whole question-and-answer ritual seems to be a deception. You are either interested in the deeper origin of the electric charge and the electron – which means that you want to look at some of the best papers in which the best scientists are addressing this question – or you are uninterested. You shouldn't pretend that both answers are possible at the same moment. If you're not following any ideas since 1972 about the deeper explanations of the electron or the electric charge, then you are just a superficial layman uninterested in the research of particle physics. Period. It's just fraudulent for you to pretend something else.

Real theoretical high-energy physicists are very interested and they have made huge progress, even after 1972.

by Luboš Motl (noreply@blogger.com) at April 07, 2019 07:57 AM

April 06, 2019

Andrew Jaffe - Leaves on the Line

@TheMekons make the world alright, briefly, at the 100 Club, London.

by Andrew at April 06, 2019 10:17 AM

April 04, 2019

John Baez - Azimuth

Hidden Symmetries of the Hydrogen Atom

Here’s the math colloquium talk I gave at Georgia Tech this week:

Hidden symmetries of the hydrogen atom.

Abstract. A classical particle moving in an inverse square central force, like a planet in the gravitational field of the Sun, moves in orbits that do not precess. This lack of precession, special to the inverse square force, indicates the presence of extra conserved quantities beyond the obvious ones. Thanks to Noether’s theorem, these indicate the presence of extra symmetries. It turns out that not only rotations in 3 dimensions, but also in 4 dimensions, act as symmetries of this system. These extra symmetries are also present in the quantum version of the problem, where they explain some surprising features of the hydrogen atom. The quest to fully understand these symmetries leads to some fascinating mathematical adventures.
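
Here's a small numerical check of this lack of precession (a sketch of my own, not part of the talk): integrate the planar Kepler problem in units where the mass and force constant are 1, and watch the Runge–Lenz vector A = p × L − r/|r| stay put.

import numpy as np

def accel(r):                      # inverse-square force: a = -r/|r|^3
    return -r / np.linalg.norm(r)**3

def runge_lenz(r, p):
    # A = p x L - r/|r|, written out for planar motion (L points out of the plane)
    L = r[0]*p[1] - r[1]*p[0]
    return np.array([p[1]*L, -p[0]*L]) - r / np.linalg.norm(r)

r, p = np.array([1.0, 0.0]), np.array([0.0, 1.2])   # an eccentric bound orbit
dt = 1e-4
for step in range(200_000):        # leapfrog integration over several periods
    p += 0.5*dt*accel(r)
    r += dt*p
    p += 0.5*dt*accel(r)
    if step % 50_000 == 0:
        print(runge_lenz(r, p))    # stays near (0.44, 0) the whole time

Change the exponent 3 in the force law to anything else and the printed vector starts rotating: that rotation is the precession of the orbit.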

I left out a lot of calculations, but someday I want to write a paper where I put them all in. This material is all known, but I feel like explaining it my own way.

In the process of creating the slides and giving the talk, though, I realized there’s a lot I don’t understand yet. Some of it is embarrassingly basic! For example, I give Greg Egan’s nice intuitive argument for how you can get some ‘Runge–Lenz symmetries’ in the 2d Kepler problem. I might as well just quote his article:

• Greg Egan, The ellipse and the atom.

He says:

Now, one way to find orbits with the same energy is by applying a rotation that leaves the sun fixed but repositions the planet. Any ordinary three-dimensional rotation can be used in this way, yielding another orbit with exactly the same shape, but oriented differently.

But there is another transformation we can use to give us a new orbit without changing the total energy. If we grab hold of the planet at either of the points where it’s travelling parallel to the axis of the ellipse, and then swing it along a circular arc centred on the sun, we can reposition it without altering its distance from the sun. But rather than rotating its velocity in the same fashion (as we would do if we wanted to rotate the orbit as a whole) we leave its velocity vector unchanged: its direction, as well as its length, stays the same.

Since we haven’t changed the planet’s distance from the sun, its potential energy is unaltered, and since we haven’t changed its velocity, its kinetic energy is the same. What’s more, since the speed of a planet of a given mass when it’s moving parallel to the axis of its orbit depends only on its total energy, the planet will still be in that state with respect to its new orbit, and so the new orbit’s axis must be parallel to the axis of the original orbit.

Rotations together with these ‘Runge–Lenz transformations’ generate an SO(3) action on the space of elliptical orbits of any given energy. But what’s the most geometrically vivid description of this SO(3) action?

Someone at my talk noted that you could grab the planet at any point of its path, and move it anywhere the same distance from the Sun, while keeping its speed the same, and get a new orbit with the same energy. Are all the SO(3) transformations of this form?

I have a bunch more questions, but this one is the simplest!

by John Baez at April 04, 2019 04:00 AM

John Baez - Azimuth

The Pi Calculus: Towards Global Computing

 

Check out the video of Christian Williams's talk in the Applied Category Theory Seminar here at U. C. Riverside. It was nicely edited by Paola Fernandez and uploaded by Joe Moeller.

Abstract. Historically, code represents a sequence of instructions for a single machine. Each computer is its own world, and only interacts with others by sending and receiving data through external ports. As society becomes more interconnected, this paradigm becomes more inadequate – these virtually isolated nodes tend to form networks of great bottleneck and opacity. Communication is a fundamental and integral part of computing, and needs to be incorporated in the theory of computation.

To describe systems of interacting agents with dynamic interconnection, in 1980 Robin Milner invented the pi calculus: a formal language in which a term represents an open, evolving system of processes (or agents) which communicate over names (or channels). Because a computer is itself such a system, the pi calculus can be seen as a generalization of traditional computing languages; there is an embedding of lambda into pi – but there is an important change in focus: programming is less like controlling a machine and more like designing an ecosystem of autonomous organisms.

We review the basics of the pi calculus, and explore a variety of examples which demonstrate this new approach to programming. We will discuss some of the history of these ideas, called “process algebra”, and see exciting modern applications in blockchain and biology.
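
To give a concrete flavor of that mobility (a toy rendition of my own in Python, not code from the talk): the hallmark of the pi calculus is that channel names are themselves values that can be sent over channels.

import threading, queue

# A toy version of the pi-calculus composition  x(y).y<"hi">  |  x<z>.z(w):
# channels are first-class values that can themselves travel over channels.
x = queue.Queue()      # the public channel "x"
z = queue.Queue()      # a private channel "z"

def left():            # x(y).y<"hi"> : receive a channel on x, then send on it
    y = x.get()
    y.put("hi")

def right():           # x<z>.z(w) : send the channel z over x, then listen on z
    x.put(z)
    print("received:", z.get())

t1, t2 = threading.Thread(target=left), threading.Thread(target=right)
t1.start(); t2.start(); t1.join(); t2.join()

The private channel z changes hands at runtime; this dynamic reconfiguration of who-can-talk-to-whom is exactly what the lambda calculus has no primitive for.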

“… as we seriously address the problem of modelling mobile communicating systems we get a sense of completing a model which was previously incomplete; for we can now begin to describe what goes on outside a computer in the same terms as what goes on inside – i.e. in terms of interaction. Turning this observation inside-out, we may say that we inhabit a global computer, an informatic world which demands to be understood just as fundamentally as physicists understand the material world.” — Robin Milner

The talk's slides are here.

Reading material:

• Robin Milner, The polyadic pi calculus: a tutorial.

• Robin Milner, Communicating and Mobile Systems.

• Joachim Parrow, An introduction to the pi calculus.

by John Baez at April 04, 2019 01:00 AM

April 01, 2019

Lubos Motl - string vacua and pheno

Skepticism about Standard Models in F-theory makes no sense
Four weeks ago, I discussed a quadrillion Standard Model compactifications that were constructed within F-theory by Cvetič et al. For some happy reasons, Anil at Scientific American wrote his own version of that story four days ago:
Found: A Quadrillion Ways for String Theory to Make Our Universe
I think that Scientific American hasn't been publishing this kind of article about proper scientific research – and Anil hasn't been writing such articles – for years. Some adult who works behind the scenes must have ordered this one exception, I guess. So I am pretty sure that the readers of SciAm must have experienced a cultural shock because the article is about a very different "genre" than the kind of pseudoscientific stuff that has dominated SciAm for years.



Well, there are some differences between my and Anil's comments about that article. But there exists "a very different" way of talking about these matters – a rant titled This Week's Hype (this title has been recycled about thousands of times) written by Mr Peter Woit.



OK, so he seems dissatisfied that SciAm writes about this fancy, rigorous research at all. If some people still read those vacuous anti-science tirades, Mr Peter Woit serves them the usual emotional gibberish. First, the paper and the SciAm summary are said to be "hype". Well, there is absolutely no hype in those articles. It's just very technical research of 8-dimensional geometries that are relevant for particle physics thanks to the F-theory constructions; and a semi-popular summary of that research.

The first full sentence with a complaint says:
As usual in these things, the only physicists quoted are the authors of the article, as well as some others (Cumrun Vafa and Washington Taylor) who are enthusiastic about the prospects for getting the Standard Model out of “F-theory”.
People who have been doing research on F-theory – and especially phenomenology of F-theory – were chosen to provide their opinion on the paper by Cvetič et al. for a simple reason: They are the experts and the "only" experts. The opinions of non-string theorists would clearly be nothing else than random incoherent noise that would brutally lower the quality of the story in SciAm.

Even most string theorists could be expected to say misleading things about F-theory and its realizations of the Standard Model. It's just wrong to fill articles about technical research where many details matter a great deal with some non-experts' emotions. These emotions wouldn't add any positive value. And you know, there is a very good reason why Cumrun Vafa was asked about his opinion. He is the father of F-theory. A detail, right? Maybe some people think that shouting "F-theory is evil" amounts to the same expertise as being the father of F-theory, but I don't.

Wati Taylor is also highly qualified to comment; among other things, he co-authored the construction of a truly gigantic class of flux vacua (not resembling the Standard Model) in F-theory in 2015.

If someone doesn't know e.g. how to describe a torus by a twelfth-degree complex polynomial equation in \(x,y,z\), then he or she has almost certainly nothing useful to say about F-theory, period. And be sure that Peter Woit as well as over 99.99999% of the mankind belongs to this "not really promising" set. Science builds on evidence and calculations, not on "opinions" of the people who don't understand anything about the issue. People who can't reasonably say things like "oops, Mirjam, you forgot a term contributing to the first Chern class from a brane" should exploit the opportunity to shut their mouth because they have clearly nothing to contribute and it's terrible if some mass culture is trying to pretend otherwise. The scientific value of some knee-jerk "critics of F-theory" is exactly the same as the musical value of a drunk guy who barges into a concert hall and throws up on the orchestra. They should be a task for bodyguards, not researchers or musicians.
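
For readers wondering what kind of equation is meant: the relevant tori are elliptic fibers, and a standard way to present them (a textbook fact, not something specific to the paper under discussion) is the Weierstrass form\[

y^2 = x^3 + f\,x + g,

\] where \(f\) and \(g\) vary over the base of the fibration; the fiber degenerates precisely where the discriminant \(\Delta = 4f^3 + 27g^2\) vanishes, and those degeneration loci are where the 7-branes of F-theory sit.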

But Woit's whining gets more intense:
No one skeptical of the idea of F-theory compactifications of string theory...
A person who is "skeptical of the idea of F-theory compactifications of string theory" is exactly analogous to a person who is skeptical about the other planets in the Solar System or skeptical about the primes greater than 100. He or she is clearly a person who doesn't have the slightest idea about the topic that was discussed in the SciAm article.

The existence of F-theory compactifications of string theory is pretty much a rigorous mathematical fact. There is no "hype" or "commercial" or "exaggeration" hiding in this statement. It is really literally true. Even some 30 years after the First Superstring Revolution, the only known consistent theories of quantum gravity coupled to particle physics are the constructions linked to string theory. And they may be divided into five or so classes of compactifications – here I clump all of F-theory as one class. That's why actual physicists working on top-down particle physics take string compactifications – and even F-theory compactifications, let's say a 20% market share of the stringy model building industry – very, very seriously.

One simply can't be both intelligent and "skeptical about them".
If such a person had been consulted, he or she might have pointed out: Models like this have been around for over two decades, see for instance this from 23 years ago.
It's nice that someone is capable of noticing that F-theory has been investigated since February 1996 (OK, is there something shocking about that timing information?) but this knowledge of a historical factoid is extremely far from turning someone into an F-theory expert who could be reasonably "consulted" in articles about F-theory.
They have always come with claims that some sort of connection to experiment was right around the corner.
There is no comment about any "experiments around the corner" in the paper by Cvetič et al. and there is absolutely no reason why such remarks should be "mandatory" in papers that map the landscape of possibilities to get a realistic theory of particle physics from a consistent framework of quantum gravity.
This new work doesn’t even bother trying to make “predictions”. It just works backwards, trying to match the crudest aspects of Standard Model, ones determined by a small set of small integers.
There is absolutely nothing wrong about "working backwards". Indeed, the search for the right theoretical explanation of Nature is an inverse problem of a sort. To one extent or another, everyone who has ever searched for a better theory of Nature was "working backwards": Kepler, Newton, Maxwell, Einstein, Feynman, Glashow, and everyone else. It's just extremely embarrassing if someone misunderstands even such totally rudimentary facts about science.

The one quadrillion Standard Models in the paper refer to one quadrillion 8-dimensional topologies that, if used as the hidden dimensions of F-theory, produce a particle physics spectrum whose low-energy part agrees with the Minimal Supersymmetric Standard Model. It is nontrivial that all the quantum numbers work – and the authors were capable of translating these conditions into geometric constraints on the 8-dimensional topology. It is an impressive piece of work whether or not an anti-physics heckler prefers to spit on it.

Also, it's laughable to describe the reproduction of the Standard Model spectrum just as "crudest aspects" of the model. All physical predictions are totally determined by the theory given the knowledge of the spectrum and the values of some continuous parameters. In this sense, the correct quantum numbers describing the spectrum are about "one-half" of all of physics (and virtually all of the "qualitative aspects" of physics), not just the "crudest aspects".
Given the huge complexity and number of choices of these F-theory constructions, that some number of them would match this set of small integers is not even slightly surprising.
One quadrillion is much more than any "package of explicitly constructed Standard Models" ever found before. So by sheer size, it is surprising. Cvetič et al. deliberately tried to look for such a class and they found a quadrillion solutions. Some people could have expected more, some could have expected less. Surprises are a subjective matter. It is meaningless to talk about "surprises" in an objective way.

What is surprising to me is the very concise way in which the geometric conditions equivalent to the "Standard Model spectrum" may be written down. I find the topological condition with some "three terms" to be much more economical than the usual QFT ways to describe the Standard Model spectrum.
The authors seem to argue that it’s a wonderful thing that they have found quadrillions of complicated constructions with this kind of crude match to the SM. The problem is that you don’t want quadrillions of these things: the more you find, the less predictive the setup becomes.
These assertions are absolutely irrational. Every consistent theory of quantum gravity that also includes the Standard Model spectrum and perhaps a few more things that are needed is a viable candidate to describe Nature in detail. So until they're ruled out by a wrong detailed prediction, these quadrillion F-theory vacua are viable candidates, too. In science, one simply can't refute or eliminate possible theories by incoherent emotional rants. Only the falsification by conflicting empirical evidence may eliminate models – that's true for every element of this "set of one quadrillion vacua", too!

At this level of fineness, it is simply another mathematical fact that the number of viable candidates is at least one quadrillion. Someone could find a number smaller than one quadrillion "more philosophically pleasing" but in that case, he would simply be discarding most of the real possibilities and therefore heavily reducing the probability of finding the right theory.

Realistic models of quantum gravity coupled to the Standard Model are rather rare (string theory is the unique solution, it seems), but because of the multiplicity of the stringy vacua, they may also be considered numerous. Is the number of possibilities large or small? It depends on what you mean by "large" or "small" e.g. what you expected. At any rate, this is the relevant class one has to work with and to say otherwise means to be detached from the basic facts.

So of course it is a wonderful thing that these one quadrillion F-theory Standard Models were found and explicitly constructed.
What’s being promoted here is a calculation that not only predicts nothing, but provides evidence that this kind of thing can’t ever predict anything. A peculiar sort of progress…
These compactifications are (supersymmetric) Standard Models. So they make the same qualitative predictions – of the spectrum and the particles' interactions etc. – as the Standard Model as a QFT, or any other realization of the Standard Model within a complete theory. F-theory isn't just some numerology producing quantum numbers; it actually does include all of the QFT dynamics as a limit. So to say that the F-theory vacua make "no predictions" is as silly as saying that the Standard Model makes no predictions. But an intelligent person understands it. He doesn't have the need to talk about "predictions" all the time.

Woit's pathologically obsessive usage of the word "predict" is a sign for every intelligent reader indicating that he's just producing propaganda for the least demanding readers, not anything that is related to science. While his short emotional rant uses the word "predict" a whopping six times, this verb doesn't appear on the 6 dense pages of the Cvetič et al. preprint at all.

Science isn't being done and cannot be done by obsessively screaming buzzwords. It is obvious to everybody with a brain why the mapping of Standard-Model-like compactifications in F-theory, 1/5 of string theory, is important enough research. SciAm hasn't consulted "F-theory skeptics" because it realized that consulting people who are totally and completely unfamiliar with the topic would be heavily counterproductive for the quality of the resulting article.

And that's the memo.

P.S.: Some commenters realize that Woit's negative remarks are just "mean" and uninformed. But there's one other commenter who cannot understand anything about the derivation but who still feels entitled to demand a "worsening" of the title. "A quadrillion standard models in F-theory" is no good for that commenter, you know, because they're supersymmetric models and they may have various proton decay operators. So the commenter hopes that the title will be bastardized by a reviewer.

Holy cow. Every reader who has at least a 1% chance of getting anything useful out of the paper knows that all realistic, detailed models of particle physics incorporated in a theory of quantum gravity must be supersymmetric models. In fact, all the promising potential readers almost certainly know that all models ever described by Cvetič were always supersymmetric. So all these people simply know that the title refers to a verbally nice shortcut for the "MSSM". The MSSM really is the "standard" model in the string model building community.

Also, it's terminologically correct to use the term "Standard Model" even if e.g. the proton is much less stable there. The "Standard Model" is a standard phrase defining models that have the same qualitative low-energy spectrum as the theory we need to explain the LHC data. I sincerely hope it's still impossible for these individuals from comment sections on the Internet to corrupt the peer review process but sometimes I am no longer sure.

by Luboš Motl (noreply@blogger.com) at April 01, 2019 02:14 PM

March 31, 2019

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

My favourite conference: the Institute of Physics Spring Weekend

This weekend I attended the annual meeting of the Institute of Physics in Ireland. I always enjoy these meetings – more relaxing than a technical conference and a great way of keeping in touch with physicists from all over the country. As ever, there were a number of interesting presentations, plenty of discussions of science and philosophy over breakfast, lunch and dinner, all topped off by the annual awarding of the Rosse Medal, a highly competitive contest for physics postgraduates across the nation.


The theme of this year’s meeting was ‘A Climate of Change’ and thus the programme included several talks on the highly topical subject of anthropogenic climate change. First up was ‘The science of climate change’, a cracking talk on the basic physics of climate change by Professor Joanna Haigh of Imperial College London. This was followed by ‘Climate change: where we are post the IPCC report and COP24’, an excellent presentation by Professor John Sweeney of Maynooth University on the latest results from the IPCC. Then it was my turn. In ‘Climate science in the media – a war on information?’, I compared the coverage of climate change in the media with that of other scientific topics such as medical science and big bang cosmology. My conclusion was that climate change is a difficult subject to convey to the public, and matters are not helped by actors who deliberately attempt to muddle the science and downplay the threat. You can find details of the full conference programme here and the slides for my own talk are here.

 

Images of my talk from IoP Ireland 

There followed a panel discussion in which Professor Haigh, Professor Sweeney and I answered questions from the floor on climate science. I don’t always enjoy panel discussions, but I think this one was useful thanks to some excellent chairing by Paul Hardaker of the Institute of Physics.


Panel discussion of the threat of anthropogenic climate change

After lunch, we were treated to a truly fascinating seminar: ‘Tropical storms, hurricanes, or just a very windy day?: Making environmental science accessible through Irish Sign Language’, by Dr Elizabeth Mathews of Dublin City University, on the challenge of making media descriptions of threats such as storms, hurricanes, and climate change accessible to deaf people. This was followed by a most informative talk by Dr Bajram Zeqiri of the National Physical Laboratory on the recent redefinition of the kilogram, ‘The measure of all things: redefinition of the kilogram, the kelvin, the ampere and the mole’.

Finally, we had the hardest part of the day, the business of trying to select the best postgraduate posters and choosing a winner from the shortlist. As usual, I was blown away by the standard, far ahead of anything I or my colleagues ever produced. In the end, the Rosse Medal was awarded to Sarah Markham of the University of Limerick for a truly impressive poster and presentation.


Viewing posters at the IoP 2019 meeting; image courtesy of IoP Ireland

All in all, another super IoP Spring weekend. Now it’s back to earth and back to teaching…

by cormac at March 31, 2019 08:51 PM

March 29, 2019

John Baez - Azimuth

Social Contagion Modeled on Random Networks

 

Check out the video of Daniel Cicala’s talk, the fourth in the Applied Category Theory Seminar here at U. C. Riverside. It was nicely edited by Paola Fernandez and uploaded by Joe Moeller.

Abstract. A social contagion may manifest as a cultural trend, a spreading opinion or idea or belief. In this talk, we explore a simple model of social contagion on a random network. We also look at the effect that network connectivity, edge distribution, and heterogeneity have on the diffusion of a contagion.
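If you want to play with the idea before watching, here is a minimal sketch (my own, not code from the talk) of a Watts-style threshold cascade on an Erdős–Rényi random network; it assumes the networkx package, and all parameter values are illustrative guesses:

import random
import networkx as nx

def cascade(n=10000, avg_degree=4.0, threshold=0.18, seeds=10):
    # Erdos-Renyi random graph with the chosen mean degree
    G = nx.erdos_renyi_graph(n, avg_degree / n)
    active = set(random.sample(range(n), seeds))  # initial adopters
    changed = True
    while changed:
        changed = False
        for v in G.nodes():
            if v in active:
                continue
            nbrs = list(G.neighbors(v))
            # a node adopts once the active fraction of its neighbours
            # reaches its threshold
            if nbrs and sum(u in active for u in nbrs) / len(nbrs) >= threshold:
                active.add(v)
                changed = True
    return len(active) / n  # final adopter fraction

print(cascade())

Sweeping avg_degree and threshold should let you see the qualitative lesson emphasized in the Watts paper listed below: global cascades only occur in a window of network connectivity.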

The talk slides are here.

Reading material:

• Mason A. Porter and James P. Gleeson, Dynamical systems on networks: a tutorial.

• Duncan J. Watts, A simple model of global cascades on random networks.

by John Baez at March 29, 2019 07:27 PM

John Baez - Azimuth

Problems in Symplectic Linear Algebra

Here is Jonathan Lorand‘s talk in our applied category theory seminar at U.C. Riverside:

Abstract. In this talk we will look at various examples of classification problems in symplectic linear algebra: conjugacy classes in the symplectic group and its Lie algebra, linear lagrangian relations up to conjugation, tuples of (co)isotropic subspaces. I will explain how many such problems can be encoded using the theory of symplectic poset representations, and will discuss some general results of this theory. Finally, I will recast this discussion from a broader category-theoretic perspective.
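For readers meeting these objects for the first time, here is a minimal reminder (in one common sign convention; conventions vary): a symplectic vector space is a pair $(V,\omega)$ with

$$ \omega: V\times V \to \mathbb{R} \ \text{antisymmetric and nondegenerate}, \qquad L \subseteq V \ \text{lagrangian} \iff L = L^{\omega}, $$

where $L^{\omega}$ denotes the $\omega$-orthogonal complement of $L$; a linear canonical relation from $(V_1,\omega_1)$ to $(V_2,\omega_2)$ is then a lagrangian subspace of $V_1\oplus V_2$ equipped with the form $(-\omega_1)\oplus\omega_2$.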

Talk slides: Problems in symplectic linear algebra.

Reading material:

• Jonathan Lorand, Classifying linear canonical relations.

• Jonathan Lorand and Alan Weinstein, Decomposition of (co)isotropic relations.

by John Baez at March 29, 2019 07:09 PM

Lubos Motl - string vacua and pheno

It's irrational to both worship and completely distrust a thinker
People like Weinstein hide their fanatical desire to silence thinkers behind some "flattering" mumbo-jumbo

Peter Thiel has hired Eric Weinstein as a part-time economist, part-time talking head about science – someone who produces far-reaching and emotionally loaded statements about the value of science, its future, the relationship between scientists and the establishment and, as we will see... the need for the majority society or the rich to conquer the scientists' brains and turn the scientists into obedient slaves.

Last week, Weinstein gave an 80-minute-long very unfocused interview about music, humor, labor... (I don't have patience for all this cheesy and distracting stuff and sorry to say, it is very clear that I don't belong to the target audience – it's just talk addressed to the mass culture) and after 50:00 or so, he talks about his "love-hate relationship" with theoretical physicists.



On one hand, Weinstein sometimes seems to understand mathematical logic and the problems with logical contradictions. For example, hours ago, he tweeted


And I "liked" the tweet because indeed, there is a general or potential contradiction between transparency and privacy. Some people advocate both principles and they do so to the extent that they're running into rather sharp contradictions. I still believe that many of us have a reasonable sense of where the boundary should lie and where transparency should replace privacy (the most important principle is that the more personal, individual, and non-essential for other people's lives some information is, the more privacy should be respected).

But some people don't seem to realize that they're sometimes religiously defending words that contradict each other in the zeroth approximation.



Great. So Weinstein sometimes realizes that logical contradictions are a problem. But in the interview, he says things like
There is nothing that could intellectually match or beat the theoretical physics community. They do amazing things and may produce things like molecular biology as a small side effect of their research.

Theoretical physicists have been on the wrong track for half a century or so and they need people like me to fix it and end the epoch of failures.
Can't you see the obvious contradiction, Eric? According to proper logic, you either believe that it's constructive to allow some bright people like XY (or a vaguely defined group of folks similar to XY) to think for themselves and reach their own opinions about what is true and what is worth thinking about (related to physics); or you don't. Assuming logical consistency, your answer just cannot be both Yes and No!

But your answer is both Yes and No, Eric. You believe that there's some "pile of mental gold" but you seem to believe that its powers may only be exploited if someone like you who isn't a part of that pile of gold makes all the important decisions – and probably determines the rough conclusions that the scientists in the pile should reach. Sorry, science just cannot work like that. It makes absolutely no sense because the gold is composed of all the mental steps and decisions that you apparently want to take from them.

In Germany of the 1930s, they enjoyed the Aryan Physics. The politicians also saw some potential in the body of physicists but to optimize the usage of that potential, the physicists had to be constrained. For example, they had to be shielded from the evil Jewish and theoretical physics – starting with Einstein's relativity. Folks like Werner Heisenberg found themselves between a rock and a hard place, seeing their patriotism collide with their scientific knowledge, passion, and integrity (Heisenberg obviously knew that relativity was right and he could mostly keep that view, which indicates some tolerance of the system of that time, perhaps higher than what we are seeing today). Too bad, the "mainstream" thinking about these matters that you represent has returned to this discourse of the 1930s.

Theoretical physicists may be average or worse in tons of ordinary things. But what defines them is that they're better – or reasonably expected to be better – exactly in the kind of abstract, demanding, extraordinary things that the ordinary people are naive about. This is exactly where their freedom to think is absolutely essential for the exploitation of their intellectual potential. To suggest that they should be "led" by some outsiders or ordinary people when it comes to the big questions directly linked to the topic of the research means to say that they're really useless.

Weinstein dramatically discusses that in the mid-1980s, he could have joined theoretical physics but he didn't because he disliked string theory. Why do you discuss it so dramatically, Eric? You just never became a theoretical physicist. Many other people considered becoming astronauts but they didn't become astronauts, e.g. because NASA decided that their amputated leg was a problem, after all. What's the difference? Or consider a lad in the mid-19th century: "Dear professor, I want to become a famous physicist but I don't like thermodynamics and electrodynamics, those are useless failures. Kepler's laws were nice." What can the professor do with him? You were born in 1965 and around 1985, you were approximately 20 years old, a not very mature man; you weren't "getting" string theory and similar things that defined the field at that time, and you made a bet that string theory would be a fad that would go away. And maybe, you thought, you could return to theoretical physics then.

Sadly, people born around 1965 are already leaving us. A crazy and eccentric yet privately introverted blonde Czech singer who loved tropics, delights, and plush toys, Daniel Infinite (Daniel Nekonečný), who was born in 1966 as Daniel Finite (Daniel Konečný, no kidding) and was a key person in the bands "Laura And Her Tigers" and the "Roaring of the Swist", suddenly died of a heart attack today (well, a few days ago, but was found today). See e.g. his I Am the Boss/Barefoot [And You Are Bosa Nova/Barefoot, a pun] or more.

Well, 34 years later, this bet still seems to be completely wrong and in these subsequent 34 years, you haven't done any – stringy or non-stringy – stuff that would be considered a valuable contribution to physics by anyone similar to Mr XY mentioned above. But you're trying to paint yourself as a hero because you never became a physicist. What sort of a hero status is it? You can see that you're just pandering to the egos of the most ordinary people who just want to hear that physicists are bad in some way, right?

You didn't become a physicist because you weren't capable of doing any research that would be considered interesting at that time. So you weren't hired by the people who really understand stuff. You could have hypothetically had some non-stringy interesting stuff but you didn't have it. In the subsequent 34-year-long era, the outcome would have probably been the same. You may be hired as a theoretical physicist by Peter Thiel who knows virtually nothing about theoretical physics. Great. Why do you think it is a reason to brag? It's not.

And now, in Weinstein's comments, there are some real gems such as:
The youngest person who has contributed to the Standard Model is Frank Wilczek now.
Right. The youngest person who has co-built the Standard Model is a rather old man now – simply because the Standard Model is a rather old theory, too. And, if you have missed it, Albert Einstein and Isaac Newton have already died. Rest in peace, Isaac and Albert. What's the big issue here? The Standard Model was really completed in the early-to-mid 1970s. In other words, the Standard Model hasn't been the cutting edge of theoretical physics that could attract the brightest minds for some 45 years.

Other topics have become the hot topics since the 1970s. Many questions have been basically settled while in others, theoretical physicists have found many possibilities and we don't know which of them, if any, is right. You don't appreciate these advances, which is too bad. But the reason why you don't appreciate them is that you're just another ordinary layman. Your lack of appreciation isn't any different from the lack of appreciation for cutting-edge science that most laymen have displayed in any other previous epoch of physics – or science.

It is frustrating that the broader society doesn't appreciate amazing things that were settled by theoretical physicists since that time – the fact that the spacetime we inhabit has some 6-8 extra dimensions, elementary building blocks are extended and may melt into each other, dualities imply that seemingly very different pictures of the Universe are actually equivalent, black holes evaporate yet preserve the information, there is AdS/CFT and its pp-wave limit and F-theory compactifications with fluxes that are at least cousins of our Universe, and so on.

And indeed, it has a personal dimension. Honestly, I also think it is utterly terrible that the broader public doesn't understand or appreciate e.g. matrix string theory and its founder! You could do better than the average member of the general public but you don't. It is very clear from the "worshiping" part of your monologue that you treat membership in the "theoretical physics community" as a matter of one's big ego and (like in the case of a couple of other people) this ego was simply hurt when you didn't become a real theoretical physicist. So you're simply trying to take revenge for that hurt ego.

There are tons of other weird statements made in the monologue. For example:
The theoretical physics also sits on some golden knowledge such as the renormalization group techniques which could be used everywhere. And theoretical physicists fail to communicate it...
Renormalization group is indeed considered a great conceptual discovery by the "likes of XY" above. But what you don't seem to understand is that a physics researcher is something else (and, within the intellectual hierarchy, much more) than a communicator or a teacher or a journalist. People do various things. Particle physicists use the renormalization group in the particle physics research. Condensed matter physicists use it in condensed matter research. And the renormalization group philosophy and techniques may also gradually penetrate to "less hard" disciplines of the human activity because it can be useful there, too. But it is probably not quite as useful and it is also harder for the people in those "softer" disciplines because the renormalization group may be too hard.

Maybe the renormalization group techniques could advance many other fields. I find it totally plausible. Maybe it's a great project for very smart people similar to the physicist XY above. But "spreading the gospel of the renormalization group" simply isn't the job for the top minds in research. They have more important things to do. To "spread the gospel", it is enough to have "less special" people to do it. If the gospel isn't being spread, it's primarily the fault of the communicators, not the researchers. You don't really seem to understand the differences between the different occupations, Eric. The top researchers with extraordinary brains should have the room to do research – and especially the room to investigate possible ideas with far-reaching, surprising, and counterintuitive conclusions because that's where their comparative advantage lies. No one should intimidate them and force them to accept a layman's opinion about the number of dimensions in the Universe, if I pick a very simple yet important example. If you hire ordinary people to decide about the big enough questions – like whether string theory is correct, you know – while you violently downgrade top researchers to some journalists who promote 40-year-old physics discoveries such as the renormalization group, you will basically destroy science as a part of our civilization. This is no detail, it's no laughing matter.

Incidentally, I would be thrilled to join as a champion of the "renormalization group for other fields". But back in the real world, I have to be one of the warriors who want to preserve variables such as \(x\) and \(y\) in the elementary schools, among similar things, and we're still apparently losing even this battle! How do you want to increase the knowledge of the renormalization group by folks in other fields in a society that is increasingly hostile towards science (and mathematics)? And you, Eric, are contributing to this anti-science hostility – much more than you have ever contributed to science itself.

There is nothing generally wrong about theoretical physics since the 1970s and all the propaganda claiming otherwise is just the postmodern version of the Aryan Physics or Aryan Physics v2.0. Everyone who participates in it should be deeply ashamed. What is actually wrong are the external political pressures acting against the scientific environment and the scholars' very freedom of thought.

And don't make a mistake about it: Superstring/M-theory is the language in which God wrote the ten-dimensional world.

Amen to that – no one else says this for me, either. ;-)



A rant against the relevance of quantum computers
A bonus example showing how journalists are serving anti-science sentiments everywhere

Reader P.F. has enjoyed an article, Quantum No Threat to Supercomputing As We Know it. It's quite annoying because that text is almost certainly the worst demagogic text I've read about quantum computers in years.

The girl who wrote it created a heroic story from the proposition "Cray, a supercomputer company, hasn't joined efforts to build a quantum computer". Great, it hasn't, but what's so wonderful about that? Just a small number of companies did – and those are more interesting here, aren't they? Most companies in the world didn't. Exxon, Tesla, Nestle and other not-really-high-tech companies don't work on their own quantum computer. If you really appreciate that passivity, Peter, let me say that I don't own any experimental lab for quantum computing, either! ;-)

And indeed, quantum computers aren't "direct competitors" to supercomputers. Like Philip Morris International, Cray will probably do fine commercially without any quantum computing platform. But the research of quantum computers isn't just some business as usual. It's a disruptive activity – to exploit a buzzword that the journalists who love to spread hype choose in tons of wrong contexts but don't pick e.g. here where it's appropriate – meant to create a whole new industry. Some companies that have worked in adjacent industries are working on it. The current stage is a gradual transition from applied physics to commercially feasible products. It's not yet an established industry, which is why it's wrong to look at this activity from a "business as usual" perspective. The companies that invest in it had better not overpay. But that doesn't mean that there aren't wonderful reasons to join these efforts.

Supercomputers and quantum computers are like the companies producing high-caffeinated sodas (Kofola) and alcoholic beverages (Stock Spirits) – choose which of them is the high-energy novel field LOL. They just don't "directly" compete because the beverages are qualitatively different and have different audiences and contexts when they're consumed. Or a more technical analogy: producers of tanks vs anti-tank missiles. They don't directly compete with each other except that the products sometimes do fight against each other. An anti-tank missile may do a very special task – like a quantum computer – a task that rips a tank apart.

Are anti-tank missiles good or bad for the producers of tanks? They're probably good. When tanks are being eliminated by missiles, a straightforward solution is to replenish the stock of tanks. So tank companies produce more and have higher profits. It's analogous with supercomputers whose applications may be "ripped apart" by some quantum algorithms. When some codes are broken by quantum computers, the first defense strategy will probably be to make the codes harder by using more of the ordinary supercomputer tricks, won't it?

So the claim that supercomputers and quantum computers don't directly compete is probably true – but it's not new at all. And everything else that is being served to the reader is just some kind of misleading delusion. Between the lines, the reader is being told that quantum computers can't have far-reaching implications. They may very well have very far-reaching (and perhaps dangerous) implications for our IT world that depends on cryptography. The reader is told that quantum computers aren't a big deal, or that they're not very new and fancy applied science. They are a very big deal, surely from a scientific viewpoint. Unlike all the stuff that average journalists love to hype – climate change or electric cars, for example (which aren't new or scientifically interesting at all) – quantum computers are both novel and deep.

And the readers are being persuaded that it's heroic for Cray to ignore this potentially emerging field, quantum computing. There is nothing heroic about it at all. And there's nothing intelligent about praising similar would-be high-brow articles written by girls who don't have a clue about the things that actually matter. By the way, this article is what the "politically correct" articles about quantum computing will look like – hostile rants by authors who don't have a clue how quantum mechanics works but who present themselves as important pundits by spitting on its tangible applications.

by Luboš Motl (noreply@blogger.com) at March 29, 2019 04:09 PM

Robert Helling - atdotde

Proving the Periodic Table
The year 2019 is the International Year of the Periodic Table celebrating the 150th anniversary of Mendeleev's discovery. This prompts me to report on something that I learned in recent years when co-teaching "Mathematical Quantum Mechanics" with mathematicians, in particular with Heinz Siedentop: We know less about the mathematics of the periodic table than I thought.



In high school chemistry you learned that the periodic table comes about because of the orbitals in atoms. There is Hund's rule that tells you the order in which you have to fill the shells and, within them, the orbitals (s, p, d, f, ...). Then, in your second semester in university, you learn to derive those using Schr\"odinger's equation: You diagonalise the Hamiltonian of the hydrogen atom and find the shells in terms of the main quantum number $n$ and the orbitals in terms of the angular momentum quantum number $L$, where $L=0$ corresponds to s, $L=1$ to p and so on. And you fill the orbitals thanks to the Pauli exclusion principle. So, this proves the story of the chemists.
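As a toy illustration of that filling story (a sketch of my own, not anything from the lecture), one can generate the chemists' electron configurations by filling subshells in the usual Madelung order – increasing $n+L$, then increasing $n$:

# Toy sketch: fill subshells in Madelung (aufbau) order and print
# the configuration of a neutral atom with Z electrons.
def configuration(Z):
    # subshells (n, l), sorted by n+l, ties broken by n
    orbitals = sorted(((n, l) for n in range(1, 8) for l in range(n)),
                      key=lambda nl: (nl[0] + nl[1], nl[0]))
    letters = "spdfghi"
    parts, left = [], Z
    for n, l in orbitals:
        if left <= 0:
            break
        fill = min(left, 2 * (2 * l + 1))  # Pauli: at most 2(2l+1) electrons per subshell
        parts.append(f"{n}{letters[l]}^{fill}")
        left -= fill
    return " ".join(parts)

print(configuration(26))  # iron: 1s^2 2s^2 2p^6 3s^2 3p^6 4s^2 3d^6

This little recipe is what the full $N$-electron Schr\"odinger equation would have to justify.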

Except that it doesn't: This is only true for the hydrogen atom. But the Hamiltonian for an atom with nuclear charge $Z$ and $N$ electrons (so we allow for ions) is (in convenient units)

$$ H = -\sum_{i=1}^N \Delta_i -\sum_{i=1}^N \frac{Z}{|x_i|} + \sum_{i\lt j}^N\frac{1}{|x_i-x_j|}.$$

The story of the previous paragraph would be true if the last term, the Coulomb interaction between the electrons, were not there. In that case, there would be no interaction between the electrons and we could solve a hydrogen-type problem for each electron separately and then anti-symmetrise the wave functions in the end in a Slater determinant to take into account their fermionic nature. But of course, in the real world, the Coulomb interaction is there and it contributes like $N^2$ to the energy, so it is of the same order (for almost neutral atoms) as the $ZN$ of the electron-nucleus potential.
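To spell out what survives without that term (in the same convenient units as above), the Hamiltonian would reduce to

$$ H_0 = \sum_{i=1}^N h_i, \qquad h = -\Delta - \frac{Z}{|x|}, $$

and $h$ has the hydrogen-like eigenvalues $-Z^2/(4n^2)$, each level accommodating $2n^2$ electrons once spin is included. The non-interacting ground state is then the Slater determinant filling the lowest levels – this is the precise sense in which the naive shell picture works.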

The approximation of dropping the electron-electron Coulomb interaction is well known in condensed matter systems, where the resulting theory is known as a "Fermi gas". There it gives you band structure (which is then used to explain how a transistor works).


Band structure in an NPN transistor
Also in that case, you pretend there is only one electron in the world that feels the periodic electric potential created by the nuclei and all the other electrons, which don't show up in the wave function anymore but only as a charge density.

For atoms you could try to tell a similar story by taking the inner electrons into account through the claim that the most important effect of the ee-Coulomb interaction is to shield the potential of the nucleus, thereby making the effective $Z$ for the outer electrons smaller. This picture would of course be true if there were no correlations between the electrons and if all the inner electrons were spherically symmetric in their distribution around the nucleus and much closer to the nucleus than the outer ones. But this sounds more like a daydream than a controlled approximation.

In the condensed matter situation, the standing of the Fermi gas is much better, as there you can invoke renormalisation group arguments: the conductivities you are interested in are long-wavelength compared to the lattice structure, so we are in the infrared limit, and the Coulomb interaction is indeed an irrelevant term in more than one euclidean dimension (and yes, in 1D, the Fermi gas is not the whole story; there is the Luttinger liquid as well).

But for atoms, I don't see how you would invoke such RG arguments.

So what can you do (with regards to actually proving the periodic table)? In our class, we teach how Lieb and Simon showed that in the $N=Z\to \infty$ limit (which in some sense can also be viewed as the semi-classical limit when you bring in $\hbar$ again) the ground state energy $E^Q$ of the Hamiltonian above is in fact approximated by the ground state energy $E^{TF}$ of the Thomas-Fermi model (the simplest of all density functional theories, where instead of the multi-particle wave function you only use the one-particle electronic density $\rho(x)$ and approximate the kinetic energy by a term like $\int \rho^{5/3}$, which is exact for the free Fermi gas in empty space):

$$E^Q(Z) = E^{TF}(Z) + O(Z^2)$$

where by a simple scaling argument $E^{TF}(Z) \sim Z^{7/3}$. More recently, people have computed more terms in this asymptotic expansion, which goes in powers of $Z^{-1/3}$: the second term ($O(Z^{6/3})=O(Z^2)$) is known, and people have put a lot of effort into $O(Z^{5/3})$, but it should be clear that this technology is still very, very far from proving anything "periodic", which would be $O(Z^0)$. So don't hold your breath hoping to find the periodic table from this approach.
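For completeness, the scaling argument is short. The Thomas-Fermi functional (with the kinetic term mentioned above, $c_{TF}$ a universal constant) is

$$ E^{TF}[\rho] = c_{TF}\int \rho^{5/3}\,dx - Z\int \frac{\rho(x)}{|x|}\,dx + \frac{1}{2}\iint \frac{\rho(x)\rho(y)}{|x-y|}\,dx\,dy, $$

and the substitution $\rho(x) = Z^2\tilde\rho(Z^{1/3}x)$ (which maps the constraint $\int\rho = Z$ to $\int\tilde\rho = 1$) pulls the same factor $Z^{7/3}$ out of all three terms, so minimising over $\tilde\rho$ gives $E^{TF}(Z)\sim Z^{7/3}$ with a $Z$-independent constant in front.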

On the other hand, chemistry of the periodic table (where the column is supposed to predict chemical properties of the atom expressed in terms of the orbitals of the "valence electrons") works best for small atoms. So, another sensible limit appears to be to keep $N$ small and fixed and only send $Z\to\infty$. Of course this is not really describing atoms but rather highly charged ions.

The advantage of this approach is that in the above Hamiltonian, you can absorb the $Z$ of the electron-nucleus interaction into a rescaling of $x$, which then lets $Z$ reappear in front of the electron-electron term as $1/Z$. Then, in this limit, one can try to treat the ugly unwanted ee-term perturbatively.
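Spelled out: writing $x_i = y_i/Z$, the Hamiltonian above becomes

$$ H = Z^2\left[ -\sum_{i=1}^N \Delta_i - \sum_{i=1}^N \frac{1}{|y_i|} + \frac{1}{Z}\sum_{i\lt j}^N \frac{1}{|y_i-y_j|} \right], $$

so for fixed $N$ and large $Z$ the electron-electron repulsion comes with an explicit factor $1/Z$, and perturbation theory around $N$ independent hydrogen-like problems at least has a small parameter.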

Friesecke (from TUM) and collaborators have made impressive progress in this direction and in this limit they could confirm that for $N < 10$ the chemists' picture is actually correct (with some small corrections). There are very nice slides of a seminar talk by Friesecke on these results.

Of course, as a practitioner, this will not surprise you (after all, chemistry works) but it is nice to know that mathematicians can actually prove things in this direction. But there is still some way to go, even 150 years after Mendeleev.

by Unknown (noreply@blogger.com) at March 29, 2019 11:02 AM

March 28, 2019

Lubos Motl - string vacua and pheno

Some reasons why the West won't stop building colliders
Many reasons why it's right to keep on building larger, more powerful colliders are often described in rather mainstream articles. But I happen to think that some of them, while fundamentally true, sound like clichés, politically correct astroturf theses. Like the correct statement that scientific research is a universal value that unites nations – and people from different nations peacefully cooperate on something that boils down to the same humanity inside both. Just to be sure, I totally believe it and it's important, too!

As you know, my emphasis is a bit different... and I want to start with the reasons that are related to the "competition between civilizations". The first assumption of mine that you need to share is that the decisions in the West and the decisions in Asia are done very independently and they may have very different motivations. In particular, the anti-collider activists in the West influence the thinking of the VIPs in China about as much as the P*ssy Riot group does. They're just another strange aspect of the Western mass culture.

China – as the place of the CEPC, a planned future collider – has its own discussions about the colliders but only big shots seem to matter in those. Chen-Ning Yang, a physics titan (Lee-Yang and Yang-Mills), turned out to be the most prominent antagonist. Yang's reasons are really social. He thinks China is too poor and should pay for the people's bread instead. Well, ironically enough, this social thinking won't necessarily be decisive for the leaders in the communist party. Shing-Tung Yau – a top mathematician who comes from a pretty poor family – is among the numerous champions of the Chinese collider.



OK, the pride of the West can't be hurt for too long: astronauts' precedent

Approximately like the space research, high energy physics is one of the major litmus tests that determine which country or continent or civilization is technologically ahead – ahead when it comes to the high-brow newest types of scientific and technological power and/or brute force.

There's one recent precedent that can already teach us something. In the 1960s, America was catching up with the Soviet Union in outer space. Sputnik, Lajka, and Gagarin had really humiliated America and a response was needed. After a decade of huge investments (at the peak, NASA consumed roughly 4-5% of the federal budget), America arguably became #1 in most "disciplines" and the 12 men on the Moon were among the most visible cherries on the pie that supports my assertion. But just to be sure, the manned spaceflights are the "mass culture-oriented" part of the space program.

Numerous scientific experiments deployed by NASA are probably more important, at least from scientists' viewpoint, and the U.S. is well ahead of Russia there, I think.



However, a decade ago or so, already during the reign of George Bush Jr, the U.S. was drifting towards a new long-term strategy that ultimately resulted in America's inability to send people into space. Lots of science experiments were still administered by NASA but America had to beg its friend, Russia, to send the people into outer space.

This situation continued for a couple of years – I am sure that many of you know the dates in detail, I would have to search and remind myself – but it was a very humiliating situation, indeed, especially with the growing American-Russian political tensions in the background. So the situation – which was nothing else than the "1960s V2.0 Lite" just wasn't sustainable.

SpaceX, Elon Musk's cosmic company, became famous – and the only "miracle" that the company did was to partially revive some activity that the U.S. government previously deliberately shut down. SpaceX basically argued "but we still want to send the people in space, don't we?" and did it privately, while recycling some 20-year-old tricks with recycled rockets. And regardless of the government decision to shut down the U.S. manned spaceflights, there was still a clear demand and SpaceX earned lots of money from the government contracts, mainly because of its being politically close to the source of money. Once other companies were allowed to compete with SpaceX, the importance of that company began to drop.

Trump is already talking about returning people to the Moon before 2025 and other things. It makes sense: If you want to make America great again, you had better make sure that it's capable of sending people to the Moon again, too. The message of this story is that even if a fad may lead to the abolition of such a high-tech activity in a country like America, it can't last in the long term. And the main reason for the ongoing or looming revival of U.S. manned spaceflights may be called pride.

Now, no one cares much about the small "competition" between America and Europe because these two worlds are almost "united", they are living together – especially their physicists do. So the Tevatron was shut down and the LHC took over. There are American physicists working there, and even those who aren't feel that it is "our" experiment. But the competition between the West and Asia or the East is a different level.

Brain drift: from the West to Asia

The reasons for the U.S. or Europe not to remain inferior for a long time are not only about pride. There are more practical considerations involved. One of the most important ones is the brain drift. A century ago, the U.S. was just a "potential" superpower. The British Empire and Germany were "stronger" in most respects.

As you know, that dramatically changed from 1945. America became the world's main – or only – superpower. It wasn't damaged by the Second World War. In many respects, the U.S. has benefited from the war. And the brains were among the important things that the U.S. could acquire at that time. Ironically enough, America's science and technology was strengthened both by the Jews and by the ex-Nazi scientists and engineers. Wernher von Braun is an example of the second group.

You might dispute the importance of the transferred scientific and technological elite for the American leadership. I think it's important. The post-war brain drift was important – and the brains drifting to the American universities, research centers, and corporations are important for the current American technological edge, too. America has a healthier "job market" in these high-tech matters than any other country. It can pay the promising people well – so they often go to America.

According to some, China may become the leader in an analogous way in the future. I think that these rosy predictions for China are a bit exaggerated but it's possible. That country's ability to take over the scientific and technological talent is a likely part of the potentially successful Chinese efforts to become a real superpower.

Now, particle and fundamental physicists represent a certain fraction, perhaps 5% or 10%, of the world's 1 million smartest folks. The reason is understandable: a significant percentage of the smart folks simply are curious about important enough questions – and particle and fundamental physics are top examples.

From some broader perspective, these people are analogous to other smart folks who do other things than particle and fundamental physics. But my explanation assumes that we define particle and fundamental physicists as those smart enough folks (here I am assuming that a meritocratic process actually chooses them as smart, not that they and their allies just describe themselves as smart!) who can be lured to physics research because they naturally consider it important.

At any rate, by becoming the new headquarters of particle physics, China (or even someone else?) can attract a very large portion of the particle physicists – i.e. not really a negligible percentage of the world's top talent. This could have implications for China because, as Eric Weinstein noted, these people may also invent things such as the World Wide Web, molecular biology, and transistors in their leisure time.

America has usually respected the researchers' freedom; it was very professional. In late 2017, I discussed the likely hypothesis that China would try to shape these people's behavior a little bit more assertively, for them to serve China's interests. I don't want to be very specific about the possible exploitation of these people – but I am sure most of you will agree and have good enough fantasy to find examples.

Some apparent fads in the mass culture – e.g. the ordinary people's hobby to say "we hate fundamental physics now" – aren't just inconsequential fads. They may have far-reaching consequences for the global balance of power. Some people have had jobs in science they didn't deserve which is why they consider scientists' jobs to be on par with welfare – it is a correct description in their own case. But if some people got a job easily and without meritocratic reasons, it doesn't mean that all scientists did. Real scientists who deserve the job aren't on welfare (they are often picked from 50+ candidates) and they're likely enough to influence the world in tangible ways, whether people on welfare want to deny this fact or not.

Most people in the West actually understand the debt to the physicists

The U.S. physicists built the first nuclear weapons, which helped persuade an extremely stubborn enemy, Japan, to surrender. The bombardment of Hiroshima and Nagasaki was tragic and scary and some physicists didn't feel well about it. On the other hand, it's very likely that it shortened the war by a year or years – and therefore saved millions of lives. Equally importantly, the bombs made sure that America wouldn't be a loser in that war.

"The shorter war" and "the victorious war" were gifts worth at least a trillion dollars if not ten trillion. The success of this nuclear research ignited the massive funding of high energy physics in the U.S. Many young researchers were surely surprised why they are paid from grants of the "Department of Energy". Why is that? Yes, "energy" sounds a bit physical but the details don't add up, do they? Well, they do if you think about the whole nuclear prehistory of that research.

There are several ways to look at the funding for the fundamental, pure physics research from the financial sources that care about very practical matters. One of them is that e.g. the theoretical physicists and purely curiosity-driven experimental particle physicists are helping to establish fields that are adjacent to the practical nuclear research – and even for practical reasons, it's good for the foundations to be firm.

The previous paragraph elaborates on the perspective that the government represents the people's interests, it is in charge of things, it knows what it's doing, and it is rationally funding similar things that were helpful in the past. The previous paragraph pretends that the government "creates" the physicists by hiring them.

Well, it's not really the most civilized or most ethical perspective because the scientists' passions aren't – or at least shouldn't be – dictated by the government. These passions exist independently of the government. A different perspective is that the results of the Manhattan Project (aside from other practical things) were "gifts" that a subset of the population defined by some traits has given to the overall U.S. population. The traits that define this subset of the people are their 1) intelligence and 2) curiosity to do the fundamental research of the Universe.

None of the people from the Manhattan Project are very active in the physics research today. So if you formulated that "debt" too personally, you would end up with nothing. But you know, even nations and companies sometimes owe money and things to each other for a very long time – even when the original borrowers are already dead. There may be a "debt" in between nations or their parts as "abstract sets of people" whose identity lasts longer than the human lifetimes.

I think that most of the civilized people actually understand this kind of a "debt" – which is enough to build dozens or hundreds of LHC-like colliders. Japan wasn't defeated by "all people in the U.S. equally". The kind of people who have contributed more deserve some lasting compensation. At some level, this compensation isn't too different from the lasting salaries for the U.S. troops – and especially the compensation for the veterans (and sometimes their widows). I focused on the nuclear bombs but similar comments apply to transistors, molecular biology, the World Wide Web, and more.

Reasonable people know that concentrated, ambitious projects are needed to avoid the universal waste of money

The world's annual GDP approaches $100 trillion – which is equal to the price of 5,000 FCC colliders. The people earn an amount comparable to $100 trillion a year – and they spend it. For a very long time, people were able to produce more than they needed to survive. They did various things with the surplus. They invested it. They built cathedrals. Or colliders. But they could also buy a greater amount of more expensive cigarettes.

Again, I am not saying that "cathedrals" are really the same kind of expenses as "colliders". The differences between science and religion are profound – but so are some basic similarities. Instead, my main point is that there exists a basic difference between "diluted spending" and "concentrated spending". And it's generally true that whenever the "diluted spending" completely dominates, the society stagnates because the money is being spent on consumption, not investment – e.g. on cigarettes.

Unlike cigarettes, the construction of a new collider requires some real work that won't be done automatically, overcoming of some hurdles that can't be overcome without some focused work. Even the very fact that the constructors of the collider – and those who maintain it and use it to do experiments – need to do some mental exercise is nontrivial. Without concentrated, ambitious projects, the skills of these groups of people may fade away.

If things get really bad, the society should at least preserve the patent offices that could be stimulating for a new Einstein. If you only allow truly low-brow occupations, you may really lose the potential for discoveries. People can invent or discover amazing things – but these discoveries and inventions become very unlikely if the people spend all their time with much dumber activities.

Higgs and top can't become abstract myths from old, dusty books

Let's optimistically assume that in 2050, the texts (from 2023 or older) revealing the existence of the Higgs boson or the top quark won't be burned and banned yet. And imagine that between 2023 and 2050, there won't be any high-energy collision that would produce these particles with masses above \(100\GeV\). What will the scientifically literate people think about these texts?

They will be surrounded by some nice technology that mostly depends on Quantum Electrodynamics only – and many condensed-matter-physics applications of it. Will the smart young physicists actually believe that the Higgs boson and the top quark exist? Will they trust the texts – that will look historical because they will be over 25 years old?

Some people will surely believe it. Even though the LHC tunnel will be used to grow mushrooms, they will rightfully think it's a conspiracy theory to suggest that all the texts up to 2023 talking about the production of the top quark and the Higgs boson at some colliders were just "old myths". On the other hand, there will also be a real widespread doubt about the very existence of particles that won't have been produced for more than 25 years.

Unless all the people in 2050 lose their interest in the laws of the Universe altogether, there will be a big enough reason to build some new collider, anyway. At least another repetition of the LHC – which will be much better than nothing.

High-energy physics wasn't meant to be a temporary stunt. It's a long-term discipline of physics in which the people's understanding of the basic laws of the Universe is known to get deeper as the center-of-mass collision energy gets higher – and this basic relationship works all the way to the Planck energy (where the growing black holes change the rules of the game). Our colliders are nowhere close to the Planck energy (and they arguably never will be) which is the simplest reason why there's no reason to stop pushing the energy frontier.

Intimidation always ends at some point and people realize that new physics may be found

It's conceivable that new colliders may only produce the particles we already know – and study their interactions at higher energies, perhaps with a better precision. Well, it may happen. It wouldn't be the first time. One could argue that even the tallest cathedrals have failed to persuade God to climb down from Heaven to Earth and personally visit the believers.

But new physics – beyond the latest, 2012 discovery of the Higgs boson – may also materialize at higher energies. It's possible that a discovery is waiting in the LHC data that have already been collected (in the run that ended in late 2018) but haven't been analyzed yet. If there's no discovery in that dataset, a new possibility exists when the LHC probes a higher amount of data (integrated luminosity). Or – especially – when new colliders with a higher collision energy are launched.

These days, it's fashionable to say that there won't be a new discovery. And even to scream at the people who would dare to suggest that a \(2\TeV\) gluino is totally possible within the LHC 2018 data, among other things. Some people allow themselves to be intimidated in this way. But you know, this intimidation cannot last indefinitely because it fundamentally makes no sense.

If and when colliders and detectors perform and analyze collisions at energies that are higher than ever before, it's always possible for them to discover a new particle or effect that was previously unknown to the experimenters. Science clearly doesn't know any valid argument that would exclude such new discoveries – or even arguments that would make this scenario very unlikely. The bullies may scare the physicists for a while but they ultimately run out of energy because what they say isn't backed by anything that makes any sense.

The discovery of new physics at higher, previously untested energies is always possible and it is always an important natural reason for people to want the new gadget. We don't know of really solid derivations that would tell us what these discoveries are going to be but that's just another reason why the experiments are desirable.

High energy collisions are the most agnostic way to look for new phenomena

One may think about many smaller experiments and phrase them as "competitors" of the next colliders. Well, first, this suggestion that "you may only have this or that" is just wrong. People have built Superkamiokande and the LHC in a similar epoch. They have done various types of research simultaneously and the overall cost was still small relative to the countries' GDP – one or two percent of the GDP goes to research.

Second, even if there were some "real competition", it's very likely that the colliders would win the meritocratic contest because the energy is the most useful variable to parameterize physical effects. People have known the concept of energy for centuries and since the early 20th century, it's been really helpful in particle physics.

But the importance of energy for the classification of knowledge has increased further, in the 1970s, with the birth of the renormalization group thinking – by Ken Wilson and others. Since that time, physicists are well aware that much of their knowledge about the world around us is phrased in terms of "effective theories" that are optimized for phenomena with a particular value of energy – an order-of-magnitude estimate of the energies in the process.

You may attach various particles and phenomena to the energy axis. In this sense, walking along the energy axis produces qualitatively new particles and physical phenomena. So looking at "many energy scales" gives you a bigger picture than just trying to measure one particular parameter of Nature, like the proton lifetime, more accurately than before. If you walk from one energy scale to another, all the possible phenomena and their parameters emerge and disappear. So the increase of the energies – which allows you to study matter at a more fundamental level – is a "more general way of looking" for new possibilities.

On the other hand, the somewhat cheaper experiments are more specialized. They will only discover something new if they're really lucky – if the only new hypothesized physical phenomenon that they are testing happens to be realized in Nature, with the values of parameters that are accessible by the experiment. That chance should be expected to be smaller than the chance of finding any new physics in some new range of energies.

Most people will realize that the anti-collider folks are Luddites dragging us back to the Middle Ages real fast

Professional physicists solve lots of technical questions. For example, they decide whether one version of a cutting-edge theory – one that has only been partially proven, like inflationary cosmology – is more viable than another. The physicists and sponsors who plan the new experiments decide which of two experiments – or two possible designs of the same kind of experiment – is more economically feasible or scientifically useful.

Such research – and the disagreements it involves – is subtle, and only experts really understand most of it.

However, it seems very clear to me that the contemporary anti-collider and anti-theoretical-physics fad has almost nothing to do with these nuances and careful research. It is a movement represented by people who have no respect for science and research in general: no respect for the science that has already been found or that is being found or proposed, no respect for the proposed theories, no respect for the experiments that play the role of judges helping one theory over another, and no respect for the curiosity, patience, intelligence, and other character traits that describe great scientists.

Up to some moment, such a populist movement may grow. But then it reaches a point where the growth stops. New people will stop joining the anti-scientific movement simply because they will realize that they are better than that. At some moment, the qualitative difference between the "two camps" will become obvious. They will ask: Do I really want to be similar to these Luddites? To people who just sling mud at everything fancier than superficial laymen's sentiments? Am I not closer to the fancier pro-physics camp instead? And most of these people will answer: I am way better than these folks (even if in some cases it isn't true – but it will be a better choice for their image). People will suddenly say: I actually do have some respect for the knowledge accumulated by mankind, for science and the process of accumulating new knowledge, for impartiality and integrity (similar to the scientific kind), and for plans, dreams, and expenses that transcend everyday life.

Once this "peak of the anti-science movement" is reached, and it may be very soon, the trend will reverse and people will start to enjoy talking openly about the sexiness of science as well as the vices of the anti-science activists who prefer the dark ages.

by Luboš Motl (noreply@blogger.com) at March 28, 2019 10:53 AM

March 27, 2019

ZapperZ - Physics and Physicists

How Do You Make a Neutrino Beam?
This new video by Don Lincoln is related to the one he did previously on the PIP-II upgrade at Fermilab. This time, he tells you how they make neutrino beams at Fermilab.



Zz.

by ZapperZ (noreply@blogger.com) at March 27, 2019 12:20 PM

March 25, 2019

ZapperZ - Physics and Physicists

CP Violation in D Meson Decay
LHCb is reporting the first evidence of CP violation in the decay of the D meson.

The D0 meson is made of a charm quark and an up antiquark. So far, CP violation has only been observed in particles containing a strange or a bottom quark. These observations have confirmed the pattern of CP violation described in the Standard Model by the so-called Cabibbo-Kobayashi-Maskawa (CKM) mixing matrix, which characterises how quarks of different types transform into each other via weak interactions. The deep origin of the CKM matrix, and the quest for additional sources and manifestations of CP violation, are among the big open questions of particle physics. The discovery of CP violation in the D0 meson is the first evidence of this asymmetry for the charm quark, adding new elements to the exploration of these questions.

If confirmed, this will be another meson that exhibits such CP violation, adding to the argument that this symmetry violation could be the source of the matter-antimatter asymmetry in our universe.

CP violation is an essential feature of our universe, necessary to induce the processes that, following the Big Bang, established the abundance of matter over antimatter that we observe in the present-day universe. The size of CP violation observed so far in Standard Model interactions, however, is too small to account for the present-day matter–antimatter imbalance, suggesting the existence of additional as-yet-unknown sources of CP violation.

Zz.

by ZapperZ (noreply@blogger.com) at March 25, 2019 09:02 PM

Clifford V. Johnson - Asymptotia

Revocation

The petition to revoke article 50 (stopping the UK jumping off a Brexit cliff) passed 5 million signatures sometime on Sunday! My colleague Nick Warner provided the screen shot of the counter just after it passed 5M (2019-03-24 at 3.18.25 PM, Parisian time). Thanks Nick! (Click for larger view.) I … Click to continue reading this post

The post Revocation appeared first on Asymptotia.

by Clifford at March 25, 2019 07:00 AM

March 21, 2019

Alexey Petrov - Symmetry factor

CP-violation in charm observed at CERN

 

There is big news from CERN today. It was announced at a conference called Rencontres de Moriond, one of the major yearly conferences in the field of particle physics. One of CERN's experiments, LHCb, reported an observation — yes, an observation, not evidence for, but an observation — of CP violation in the charm system. Why is it big news and why should you care?

You should care about this announcement because it has something to do with what our Universe looks like. As you look around, you might notice an interesting fact: everything is made of matter. So what about it? Well, one thing is missing from our everyday life: antimatter.

As it turns out, physicists believe that the amounts of matter and antimatter were the same right after the Universe was created. So, the $1,110,000 question is: what happened to the antimatter? According to Sakharov's criteria for baryogenesis (a process of creating more baryons, like protons and neutrons, than anti-baryons), one of the conditions for our Universe to be the way it is would be to have matter particles interact slightly differently from the corresponding antimatter particles. In particle physics this condition is called CP violation. It has been observed for beauty and strange quarks, but never for charm quarks. As charm quarks are fundamentally different from both beauty and strange quarks (in electric charge, mass, the ways they interact, etc.), physicists hoped that New Physics – something that we have not yet seen or predicted – might be lurking nearby and could be revealed in charm decays. That is why so much attention has been paid to searches for CP violation in charm.

Now there are indications that the search is finally over: LHCb announced that they have observed CP violation in charm. Here is their announcement (look for the news item from 21 March 2019). A technical paper can be found here, discussing how LHCb extracted CP-violating observables from a time-dependent analysis of D -> KK and D -> pipi decays.

The result is generally consistent with the Standard Model expectations. However, there are theory papers (like this one) that predict the Standard Model result to be about seven times smaller with rather small uncertainty.  There are three possible interesting outcomes:

  1. The experimental result is correct but the theoretical prediction mentioned above is not. Theoretical calculations in charm physics are hard and often unreliable, so that theory paper may have underestimated the result and its uncertainties.
  2. The experimental result is incorrect but the theoretical prediction mentioned above is correct. Maybe LHCb underestimated their uncertainties?
  3. The experimental result is correct AND the theoretical prediction mentioned above is correct. This is the most interesting outcome: it implies that we are seeing effects of New Physics.

What will it be? Time will show.

A more technical note on why it is hard to see CP violation in charm.

One reason that CP-violating observables are hard to see in charm is that they are quite small, at least in the Standard Model. All final/initial state quarks in the D -> KK or D -> pi pi transitions belong to the first two generations. The CP-violating asymmetry that arises when we compare the time-dependent decay rates of the D0 into a pair of kaons or pions with the corresponding decays of the anti-D0 can only appear if one picks up the weak phase associated with the third generation of quarks (b and t), which is possible via a penguin amplitude. The problem is that the penguin amplitude is small, as the Glashow-Iliopoulos-Maiani (GIM) mechanism makes it proportional to m_b^2 times tiny CKM factors. The strong phases needed for this asymmetry come from the tree-level decays and are (presumably) largely non-perturbative.
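Schematically (this is the standard two-amplitude formula, not spelled out in the post itself: write the amplitude as a tree T plus a penguin P, with r = |P/T|, strong-phase difference δ and weak-phase difference φ), the direct CP asymmetry behaves as

$$a_{CP} \approx 2\, r \sin\delta\, \sin\varphi,$$

so the GIM-suppressed r in charm keeps the asymmetry tiny no matter how favourable the phases are.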

Notice that in B physics the situation is exactly the opposite: you get the weak phase from the tree-level amplitude, and the penguin one is proportional to m_top^2, so the CP-violating interference is large.

Ask me if you want to know more!

by apetrov at March 21, 2019 06:45 PM

Matt Strassler - Of Particular Significance

LHCb experiment finds another case of CP violation in nature

The LHCb experiment at the Large Hadron Collider is dedicated mainly to the study of mesons [objects made from a quark of one type, an anti-quark of another type, plus many other particles] that contain bottom quarks (hence the `b’ in the name).  But it can also be used to study many other things, including mesons containing charm quarks.

By examining large numbers of mesons that contain a charm quark and an up anti-quark (or a charm anti-quark and an up quark) and studying carefully how they decay, the LHCb experimenters have discovered a new example of violations of the transformations known as CP (C: exchange of particle with anti-particle; P: reflection of the world in a mirror), of the sort that have been previously seen in mesons containing strange quarks and mesons containing bottom quarks.  Here’s the press release.

Congratulations to LHCb!  This important addition to our basic knowledge is consistent with expectations; CP violation of roughly this size is predicted by the formulas that make up the Standard Model of Particle Physics.  However, our predictions are very rough in this context; it is sometimes difficult to make accurate calculations when the strong nuclear force, which holds mesons (as well as protons and neutrons) together, is involved.  So this is a real coup for LHCb, but not a game-changer for particle physics.  Perhaps, sometime in the future, theorists will learn how to make predictions as precise as LHCb’s measurement!

by Matt Strassler at March 21, 2019 11:52 AM

March 16, 2019

Robert Helling - atdotde

Smokescreen: the CDU proposal on "no upload filters"
Sorry, this is one of the occasional posts about German politics. It is my posting to a German-speaking mailing list discussing the upcoming EU copyright directive (which must be stopped in its current form!!! March 23rd is the international protest day); the CDU party has now proposed how to implement it in German law, though so unspecifically that all the problematic details are left out. Here is the post.

Maybe I'm too dense, but I don't see where the concrete progress over what is being discussed at the EU level is supposed to be – except that the CDU proposal is so vague that all its internal contradictions disappear into the fog. At the EU level, too, the proponents say that one should much rather acquire licences than filter. That, in itself, is not new.

What is new, at least in this Handelsblatt article (I haven't found it anywhere else), is the mention of hash sums ("digital fingerprint") – or is that supposed to be something more like a digital watermark? That would be a real novelty, but it would immediately strangle the whole scheme at birth: only the original file would be protected (which would be trivial to detect anyway), while every form of derived work would fall completely through the cracks, and works could be "liberated" by a trivial modification. Otherwise we are back to the dubious filters based on AI technology that does not yet exist.

The other thing is the blanket licence. I would then no longer have to conclude contracts with every author, only with a "VG Internet" collecting society. But the big prize question remains: who is it supposed to apply to? The intended targets are, of course, YouTube, Google and FB again. But how do you phrase that? This is also the central stumbling block of the EU directive: everyone needs a blanket licence, unless they are non-commercial (who is, really?), or (younger than three years, with few users and little turnover), or they are Wikipedia, or they are GitHub? That would again be the "the internet is like television – with a few big broadcasters and so on – only somehow different" point of view, so happily propagated by people who look at the internet from a distance. Because it practically flattens everything else. What about forums or photo hosters? Would they all have to acquire a blanket licence (which would have to be high enough to cover all film and music rights of the whole world)? What prevents this from ending up as a "whoever operates a service on the internet must buy a paid internet licence before going online" law, which at any non-trivial licence fee would be the end of all grass-roots innovation?

It would of course also be interesting to see how the revenues of the VG Internet get distributed. One would be a rogue to suspect that a large share would end up with, say, the press publishers. That would finally be the "take the money away from those who earn it on the internet and give it to those who no longer earn as much" law. In that case the licence fee had best be a percentage of turnover – in other words, an internet tax.

And don't get me started on where this leads if every European country cooks up its own implementation soup to this extent.

All in all, a rather successful coup by the CDU, one that may manage to take the wind out of the sails of the critics of Article 13 in public opinion by wrapping everything in a vague cloud of fog, while all the problematic rules are likely to lie in the details.

by Unknown (noreply@blogger.com) at March 16, 2019 09:43 AM

March 13, 2019

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

RTE’s Brainstorm; a unique forum for public intellectuals

I have an article today on RTE’s ‘Brainstorm’ webpage, my tribute to Stephen Hawking one year after his death.

"Hawking devoted a great deal of time to science outreach, unusual for a scientist at this level"

I wasn’t aware of the RTE Brainstorm initiative until recently, but I must say it is a very interesting and useful resource. According to the mission statement on the website, “RTÉ Brainstorm is where the academic and research community will contribute to public debate, reflect on what’s happening in the world around us and communicate fresh thinking on a broad range of issues”. A partnership between RTE, University College Cork, NUI Galway, University of Limerick, Dublin City University, Ulster University, Maynooth University and the Technological University of Dublin, the idea is to provide an online platform for academics and other specialists to engage in public discussions of interesting ideas and perspectives in user-friendly language. You can find a very nice description of the initiative in The Irish Times here.

I thoroughly approve of this initiative. Many academics love to complain about the portrayal of their subject (and a lot of other subjects) in the media; this provides a simple and painless method for such people to reach a wide audience. Indeed, I’ve always liked the idea of the public intellectual. Anyone can become a specialist in a given topic; it’s a lot harder to make a meaningful contribution to public debate. Some would say this is precisely the difference between the academic and the public intellectual. Certainly, I enjoy engaging in public discussions of matters close to my area of expertise, and I usually learn something new. That said, a certain humility is an absolute must – it’s easy to forget that detailed knowledge of a subject does not automatically bestow the wisdom of Solomon. Indeed, there is nothing worse than listening to a specialist use their expertise to bully others into submission – it’s all about getting the balance right, listening as well as informing.

by cormac at March 13, 2019 07:28 PM

March 12, 2019

ZapperZ - Physics and Physicists

PIP-II Upgrade At Fermilab
Don Lincoln explains why the PIP-II upgrade at Fermilab will take the accelerator facility to the next level.



The video actually explains a bit about how a particle accelerator works, and the type of improvements that are being planned.

Zz.

by ZapperZ (noreply@blogger.com) at March 12, 2019 01:04 PM

March 06, 2019

Robert Helling - atdotde

Challenge: How to talk to a flat earther?
Further down the rabbit hole: over lunch I finished watching "Behind the Curve", a Netflix documentary about people who believe the earth is a flat disk. According to them, the north pole is at the center, while Antarctica is an ice wall at the boundary. The sun and moon are much closer and fly above this disk, while the stars sit on some huge dome, like in a planetarium. NASA is a fake agency promoting the doctrine, and the airlines must be part of the conspiracy, as they know that you cannot fly directly between continents in the southern hemisphere (really?).

These people happily use GPS for navigation but have a general mistrust of the science (and the teachers) of at least two centuries.

Besides the obvious "I don't see the curvature of the horizon", they are even conducting experiments to prove their point (struggling with laser beams that are not as parallel over miles of distance as they had hoped). So at least some of them might be open to empirical disproof.

So here is my challenge: which experiment would you conduct with them to convince them? Warning: everything involving stuff disappearing at the horizon (ships sailing away, being able to see further from a tower) is complicated by non-trivial refraction in the atmosphere, which would very likely render such an observation inconclusive. The sun standing at a different elevation (height) at different places might also be explained by it being much closer, and a Foucault pendulum might be too indirect to really convince them (plus it requires some non-elementary math to analyse).

My personal solution is to point to the observation that the altitude of Polaris (around which, I hope, they can agree the night sky rotates) is given by the geographical latitude: at the north pole it is right above you, but it has to sit lower the further south you go. I cannot see how this could be reconciled with a dome projection.
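A minimal numerical sketch of that argument (the dome height below is a free parameter of the flat model, i.e. an assumption; the point is that no single choice reproduces "altitude equals latitude" everywhere):

    import numpy as np

    R_EARTH_KM = 6371.0   # mean Earth radius
    H_DOME_KM = 5000.0    # assumed dome height above the pole (free parameter)

    for lat in (80, 60, 40, 20):
        observed = lat                                  # sphere: Polaris altitude = latitude
        d_km = R_EARTH_KM * np.radians(90 - lat)        # ground distance from the north pole
        flat = np.degrees(np.arctan2(H_DOME_KM, d_km))  # flat disk: altitude = atan(h/d)
        print(f"latitude {lat:2d} deg: observed {observed:5.1f} deg, flat-dome model {flat:5.1f} deg")

Whatever dome height you plug in, the flat-model curve bends away from the observed straight line at some latitude.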

How would you approach this? The rules are that it must only involve observations available to everyone: no spaceflight, no extra-high-altitude planes. You are allowed to make use of phones and cameras, and you can travel (say by car or commercial flight, but you cannot influence the flight route). It must not involve lots of money or higher math.


by Unknown (noreply@blogger.com) at March 06, 2019 02:24 PM

February 24, 2019

Michael Schmitt - Collider Blog

Miracles when you use the right metric

I recommend reading, carefully and thoughtfully, the preprint “The Metric Space of Collider Events” by Patrick Komiske, Eric Metodiev, and Jesse Thaler (arXiv:1902.02346). There is a lot here, perhaps somewhat cryptically presented, but much of it is exciting.

First, you have to understand what the Earth Mover’s Distance (EMD) is. It is easier to understand than the Wasserstein metric, of which it is a special case. The EMD is a measure of how different two pdfs (probability density functions) are, and it is rather different from the usual chi-squared or mean integrated squared error because it emphasizes separation rather than overlap. The idea is to look at how much work you have to do to reconstruct one pdf from another, where “reconstruct” means transporting portions of the first pdf a given distance. You keep track of the “work” you do, meaning the amount of area (i.e., ”energy” or “mass”) you transport and how far you transport it. The Wikipedia article aptly makes an analogy with suppliers delivering piles of stones to customers. The EMD is the smallest effort required.
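In one dimension the EMD is cheap to compute exactly, and scipy ships it as wasserstein_distance; here is a tiny sketch of the stone-pile picture:

    from scipy.stats import wasserstein_distance

    # Two toy piles of stones on a line: equal masses at {0, 1} versus {1, 2}.
    # The cheapest plan shifts each half-pile one unit, so work = 0.5 + 0.5 = 1.
    print(wasserstein_distance([0.0, 1.0], [1.0, 2.0]))  # prints 1.0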

The EMD is a rich concept because it allows you to carefully define what “distance” means. In the context of delivering stones, transporting them across a plain and up a mountain are not the same. In this sense, rotating a collision event about the beam axis should “cost” nothing – i.e., be irrelevant — while increasing the energy or transverse momentum should, because it is phenomenologically interesting.

The authors want to define a metric for LHC collision events such that events that come from different processes are well separated. This requires a definition of “distance” – hence the word “metric” in the title. You have to imagine taking one collision event, consisting of individual particles or perhaps a set of hadronic jets, and transporting pieces of it in order to match some other event. If you have to transport the pieces a great distance, then the events are very different. The authors’ ansatz is a straightforward one, depending essentially on the angular distance θij/R plus a term that takes into account the difference in total energies of the two events. Note: the subscripts i and j refer to two elements from the two different events. The paper gives a very nice illustration for two top quark events (red and blue):

Transformation of one top quark event into another
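As a rough code sketch of such an event-level distance (a simplification of the paper's exact definition, not a verbatim implementation: both events are normalized, the transport cost is rescaled by the smaller total energy, and the optimal transport itself is delegated to the POT library):

    import numpy as np
    import ot  # POT, the Python Optimal Transport library (pip install pot)

    def event_emd(ev1, ev2, R=1.0):
        # events are arrays of rows (energy, rapidity, phi)
        E1, E2 = ev1[:, 0], ev2[:, 0]
        tot1, tot2 = E1.sum(), E2.sum()
        dy = ev1[:, 1, None] - ev2[None, :, 1]                             # rapidity gaps
        dphi = np.angle(np.exp(1j * (ev1[:, 2, None] - ev2[None, :, 2])))  # wrapped azimuthal gaps
        M = np.sqrt(dy**2 + dphi**2) / R                                   # pairwise ground costs
        cost = ot.emd2(E1 / tot1, E2 / tot2, M) * min(tot1, tot2)          # optimal transport
        return cost + abs(tot1 - tot2)                                     # energy-difference term

    evA = np.array([[100.0, 0.0, 0.0], [80.0, 1.0, 2.0]])  # toy two-particle "events"
    evB = np.array([[90.0, 0.1, 0.1], [85.0, 1.1, 2.1]])
    print(event_emd(evA, evB))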

The first thing that came to mind once I had grasped, with some effort, the suggested metric was that this could be a great classification tool. And indeed it is. The authors show that a k-nearest-neighbors algorithm (KNN), straight out of the box, equipped with their notion of distance, works nearly as well as very fancy machine-learning techniques! It is crucial to note that there is no training here, no search for a global minimum of some very complicated objective function. You only have to evaluate the EMD, and in their case this is not so hard. (Sometimes it is.) Here are the ROC curves:

ROC curves. The red curve is the KNN with this metric, and the other curves close by are fancy ML algorithms. The light blue curve is a simple cut on N-subjettiness observables, itself an important theoretical tool
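To give a feel for how little machinery the classification step needs, here is a hedged sketch (the events and the distance below are toy stand-ins; in the paper's setup the distance would be the event EMD above) of KNN with a precomputed distance matrix, which scikit-learn supports out of the box:

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)
    events = rng.random((60, 8))          # toy stand-ins for events
    labels = rng.integers(0, 2, size=60)  # toy labels (QCD vs top, say)

    def toy_dist(a, b):
        # placeholder distance; substitute the event EMD here
        return float(np.abs(np.sort(a) - np.sort(b)).sum())

    # Precompute all pairwise distances once, then let the neighbors vote.
    D = np.array([[toy_dist(a, b) for b in events] for a in events])
    knn = KNeighborsClassifier(n_neighbors=5, metric="precomputed")
    knn.fit(D, labels)
    print(knn.predict(D[:3]))  # each row: distances from a query event to the fit set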


I imagine that some optimization could be done to close the small gap with respect to the best-performing algorithms, for example by improving on the KNN.

The next intriguing idea presented in this paper is the fractal dimension, or correlation dimension, dim(Q), associated with their metric. The interesting bit is how dim(Q) depends on the mass/energy scale Q, which can plausibly vary from a few GeV (the regime of hadronization) up to the mass of the top quark (173 GeV). The authors compare three different sets of jets – from ordinary QCD production, from W bosons decaying hadronically, and from top quarks – because one expects the detailed structure to be distinctly different, at least if viewed with the right metric. And indeed, the variation of dim(Q) with Q is quite different:

dim(Q) as a function of Q for three sources of jets


(Note these jets all have essentially the same energy.) There are at least three take-away points. First, dim(Q) is much higher for top jets than for W and QCD jets, and W is higher than QCD. This hierarchy reflects the relative complexity of the events and hints at new discriminating possibilities. Second, the curves are more similar at low scales, where the structure involves hadronization, and more different at high scales, which should be dominated by the decay structure. This is borne out by the decay-products-only curves. Finally, there is little difference between the curves based on particles and those based on partons, meaning that the result is somehow fundamental and not an artifact of hadronization itself. I find this very exciting.
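For readers who want to play with the idea, here is a sketch of one common correlation-dimension estimator – the logarithmic slope of the pair-counting function; the paper's precise definition may differ in detail:

    import numpy as np

    def correlation_dimension(pair_dists, scales):
        # C(Q) = fraction of pairs closer than Q; dim(Q) = d ln C / d ln Q
        C = np.array([np.mean(pair_dists < Q) for Q in scales])
        return np.gradient(np.log(C), np.log(scales))

    # Toy check: points filling a 2-D square should give dim(Q) near 2
    rng = np.random.default_rng(1)
    pts = rng.random((400, 2))
    diffs = pts[:, None, :] - pts[None, :, :]
    d = np.sqrt((diffs**2).sum(-1))[np.triu_indices(400, k=1)]
    print(np.round(correlation_dimension(d, np.linspace(0.05, 0.4, 8)), 2))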

The authors develop the correlation dimension dim(Q) further. It is a fact that a pair of jets from W decays boosted to the same degree can be described by a single variable: the ratio of their energies. This can be mapped onto an annulus in an abstract two-dimensional space (see the paper for slightly more detail). The interesting step is to look at how the complexity of individual events, reflected in dim(Q), varies around the annulus:

Embedding of W jets and how dim(Q) varies around the annulus and inside it


The blue events to the lower left are simple, with just a single round dot (jet) in the center, while the red events in the upper right have two dots of nearly equal size. The events in the center are very messy, with many dots of several sizes. So morphology maps onto location in this kinematic plane.

A second illustration is provided, this time based on QCD jets of essentially the same energy. The jet masses will span a range determined by gluon radiation and the hadronization process. Jets at lower mass should be clean and simple while jets at high mass should show signs of structure. This is indeed the case, as nicely illustrated in this picture:

How complex jet substructure correlates with jet mass


This picture is so clear it is almost like a textbook illustration.

That’s it. (There is one additional topic involving infrared divergences, but since I do not understand it I won’t try to describe it here.) The paper is short, with some startling results. I look forward to the authors developing these studies further, and to other researchers thinking about them and applying them to real examples.

by Michael Schmitt at February 24, 2019 05:16 PM

ZapperZ - Physics and Physicists

Brian Greene on Science, Religion, Hawking, and Trump
I only found this video recently, even though it is almost a year old, but it is still interesting, and funny. And strangely enough, he shares my view on religion, especially the fact that people seem to ignore that there are so many religions, each claiming to be the "truth". They can't all be, and thus the biggest threat and challenge to any religion is the existence of another religion.



Zz.

by ZapperZ (noreply@blogger.com) at February 24, 2019 03:03 PM

February 22, 2019

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

The joys of mid term

Thank God for mid-term, or ‘reading week’ as it is known in some colleges. Time was I would have spent the week on the ski slopes, but these days I see the mid-term break as a precious opportunity to catch up – a nice relaxed week in which I can concentrate on correcting assessments, preparing teaching notes and setting end-of-semester exams. There is a lot of satisfaction in getting on top of things, if only temporarily!

Then there’s the research. To top the week off nicely, I heard this morning that my proposal to give a talk at the forthcoming Arthur Eddington conference in Paris has been accepted; this is great news, as the conference will mark the centenary of Eddington’s measurement of the bending of starlight by the sun, an experiment that provided key evidence in support of Einstein’s general theory of relativity. To this day, some historians question the accuracy of Eddington’s result, while most physicists believe his findings were justified, so it should make for an interesting conference.

Eddington

 

by cormac at February 22, 2019 04:45 PM

February 12, 2019

Robert Helling - atdotde

Bohmian Rhapsody

Visits to a Bohmian village


Over all of my physics life, I have been under the local influence of some Gaulish villages with ideas about physics that are not 100% aligned with the mainstream views: when I was a student in Hamburg, I was good friends with people working on algebraic quantum field theory. Of course there were opinions that they were the only people seriously working on QFT, as they were proving theorems while others dealt only with perturbative series that are known to diverge and are thus obviously worthless. Funnily enough, they were literally sitting above the HERA tunnel, where electron-proton collisions took place that were very well described by exactly those divergent series. Still, I learned a lot from these people and would say there are few who have thought more deeply about the structural properties of quantum physics. These days, I use more and more of these things in my own teaching (in particular in our Mathematical Quantum Mechanics and Mathematical Statistical Physics classes, as well as when thinking about foundations, see below), and even some other physicists are starting to use their language.

Later, as a PhD student at the Albert Einstein Institute in Potsdam, I was at an accumulation point of people from the Loop Quantum Gravity community, with Thomas Thiemann and Renate Loll holding long-term positions and many others frequently visiting. As you probably know, a bit later I decided (together with Giuseppe Policastro) to look into this more deeply, resulting in a series of papers that were well received, at least amongst our peers, and of which I am still a bit proud.

Now, I have been in Munich for over ten years. Here at the LMU math department there is a group calling themselves the Workgroup Mathematical Foundations of Physics. And let's be honest, I call them the Bohmians (and sometimes the Bohemians). Once more, most people believe that the Bohmian interpretation of quantum mechanics is just a fringe approach that is not worth wasting any time on. You will have already guessed it: I did so nonetheless. So here is a condensed report of what I learned and what I think should be the official opinion on this approach. This is an informal write-up of a notes paper that I put on the arXiv today.

What Bohmians don't like about the usual approach to quantum mechanics (termed "Copenhagen", for lack of a better word) is that you are not allowed to talk about so many things, and that the observer plays such a prominent role by determining via a measurement which aspect is real and which is not. They think this is far too subjective. So instead, they want quantum mechanics to be about particles, which are then allowed to follow trajectories.

"But we know this is impossible!" I hear you cry. So, let's see how this works. The key observation is that the Schrödinger equation for a Hamilton operator of the form kinetic term (possibly with magnetic field) plus potential term, has  a conserved current

$$j = \bar\psi\nabla\psi - (\nabla\bar\psi)\psi.$$

So, as your probability density is $\rho=\bar\psi\psi$, you can think of it as being made up of particles moving with the velocity field

$$v = j/\rho = 2\Im(\nabla \psi/\psi).$$

What this buys you is that if you have a bunch of particles that are initially distributed like the probability density and follow the flow of the velocity field, they will also later be distributed like $|\psi|^2$.
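To make the flow concrete, here is a minimal numerical sketch (assuming $\hbar = 1$ and keeping the factor-2 convention of the velocity formula above, so the box levels $\sin(nx)$ on $[0,\pi]$ pick up phases $e^{-in^2 t}$):

    import numpy as np

    def psi(x, t):
        # superposition of the two lowest particle-in-a-box levels on [0, pi]
        phi1 = np.sqrt(2 / np.pi) * np.sin(x)       # ground state, E1 = 1
        phi2 = np.sqrt(2 / np.pi) * np.sin(2 * x)   # first excited state, E2 = 4
        return (phi1 * np.exp(-1j * t) + phi2 * np.exp(-4j * t)) / np.sqrt(2)

    def velocity(x, t, h=1e-5):
        dpsi = (psi(x + h, t) - psi(x - h, t)) / (2 * h)  # numerical derivative
        return 2.0 * np.imag(dpsi / psi(x, t))            # v = 2 Im(psi'/psi)

    # Euler-integrate one Bohmian trajectory; it sloshes back and forth in the box.
    x, dt = 1.0, 1e-3
    for n in range(3000):
        x += velocity(x, n * dt) * dt
    print(f"Bohmian position at t = 3: x = {x:.3f}")

For a single eigenstate, by contrast, the velocity field vanishes and nothing moves – exactly the situation in the stationary two-particle state discussed below.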

What is important is that they keep the Schrödinger equation intact. So everything that you can do with the original Schrödinger equation (i.e. everything) can be done in the Bohmian approach as well. If you set up your Hamiltonian to describe a double-slit experiment, the Bohmian particles will flow nicely to the screen and arrange themselves in interference fringes (as the probability density does). So you will never encounter a situation where any experimental outcome differs from what the Copenhagen prescription predicts.

The price you have to pay, however, is that you end up with a very non-local theory: the velocity field lives in configuration space, so the velocity of every particle depends on the positions of all the other particles in the universe. I would say this is already a show-stopper (given what we know about quantum field theory, whose raison d'être is locality), but let's ignore this aesthetic concern.

What got me into this business was the attempt to understand how set-ups like Bell's inequality, GHZ, and the like – which are supposed to show that quantum mechanics cannot be classical (technically, that the state space cannot be described by local probability densities) – work out. The problem with those is that they are often phrased in terms of spin degrees of freedom, which have Hamiltonians that are not directly of the form above. You can use a Stern-Gerlach-type apparatus to translate the spin degree of freedom into a positional one, but at the price of a Hamiltonian that is not explicitly known, let alone one for which you can analytically solve the Schrödinger equation. So you don't see much.

But from Reinhard Werner and collaborators I learned how to set up qubit-like algebras from positional observables of free particles (at different times, so as to get something non-commuting, which you need in order to use entanglement as a specifically quantum resource). So here is my favourite example:

You start with two particles, each following a free time evolution but confined to an interval. You set them up in a particular entangled state (stationary, as it is an eigenstate of the Hamiltonian) built from the two lowest levels of the particle in the box. And then you observe, for each particle, whether it is in the left or the right half of the interval.

From symmetry considerations (details in my paper) you can see that each particle is found on the left or the right with equal probability. The outcomes are anti-correlated when measured at the same time. But when measured at different times, the correlation oscillates like the cosine of the time difference.

From the Bohmian perspective, for the static initial state the velocity field vanishes everywhere; nothing moves. But in order to capture the time-dependent correlations, as soon as one particle has been measured, the position of the second particle has to oscillate in the box. (How the measurement works in detail is not specified in the Bohmian approach, since it involves other degrees of freedom – and remember, everything depends on everything – but somehow it has to work, since you want to reproduce the correlations predicted by the Copenhagen approach.)

The trajectory of the second particle depending on its initial position


This is somehow the Bohmian version of the collapse of the wave function but they would never phrase it that way.

And here is where it becomes problematic: if you could see the Bohmian particle moving, you could decide whether the other particle has been measured (it would oscillate) or not (it would stand still) – no matter where the other particle is located. With this observation you could build a telephone that transmits information instantaneously, something that should not exist. So you have to conclude that you must not be able to look at the second particle and see whether it oscillates or not.

Bohmians tell you that you cannot, because all you are supposed to observe about the particles are their positions (and not their velocities). And if you try to measure the velocity by measuring the position at two instants in time, you fail, because the first observation disturbs the particle so much that it invalidates the original state.

As it turns out, you are not allowed to observe anything else about the particles than that they are distributed like $|\psi|^2$, because if you could, you could build a similar telephone (at least statistically), as I explain in the paper (this fact is known in the Bohmian literature, but I have found it nowhere so clearly demonstrated as in this two-particle system).

My conclusion is that the Bohmian approach adds something (the particle positions) to the wave function, but then in the end tells you that you are not allowed to observe it, or to have any knowledge of it beyond what is already encoded in the wave function. It's like making up an invisible friend.

PS: If you haven't seen "Bohemian Rhapsody", yet, you should, even if there are good reasons to criticise the dramatisation of real events.

by Unknown (noreply@blogger.com) at February 12, 2019 07:20 AM

February 07, 2019

Axel Maas - Looking Inside the Standard Model

Why there won't be warp travel in times of global crises
One of the questions I get most often at outreach events is: "What about warp travel?", or some other wording for faster-than-light travel – something which would make interstellar travel possible, or at least viable.

Well, the first thing I can say is that there is nothing which excludes it. Of course, within our well-established theories of the world it is not possible. Neither the standard model of particle physics nor general relativity, when constrained to the matter we know of, allows it. Thus, whatever describes warp travel needs to be a theory which encompasses and enlarges what we know. Can a quantized combination of general relativity and particle physics do this? Perhaps, perhaps not. Many people are thinking about it really hard. Mostly, we run afoul of causality when trying.

But these are theoretical ideas. And even if some clever team comes up with a theory which allows warp travel, this does not mean that this theory is actually realized in nature. Just because we can make something mathematically consistent does not guarantee that it is realized. In fact, we have many, many more mathematically consistent theories than are realized in nature. Thus, it is not enough to just construct a theory of warp travel – which, as noted, we have so far failed to do anyway.

No, what we need is to figure out whether it really happens in nature. So far, this has not happened. Neither have we observed it in any human-made experiment, nor have we had any observation in nature which unambiguously points to it. And this is what makes it really hard.

You see, the universe is a tremendous place, unbelievably large, and essentially three times as old as the planet earth – not to mention humanity. Extremely powerful events happen out there, from quasars – effectively a whole galactic core on fire – to black-hole collisions and supernovas. These events put out an enormous amount of energy, much, much more than even our sun generates. Hence, anything short of a big bang is happening all the time in the universe. And we see the results. The earth is constantly hit by particles with much, much higher energies than we can produce in any experiment, and has been since the earth came into being. Incidentally, this also tells us that nothing we can do at a particle accelerator can really be dangerous: whatever we do there has happened so often in our Earth's atmosphere that it would have killed this planet long before humanity entered the scene. The only bad thing about it is that we never know when and where such an event happens. And the rate is not that high either; it is just that the earth has existed for so very long. And is big. Hence, we cannot use this to make controlled observations.

Thus, whatever could happen, happens out there, in the universe. We see some things out there which we cannot yet explain, e.g. dark matter. But by and large, a lot works as expected. In particular, we do not see anything which requires warp travel to explain it – or anything else remotely suggesting something happening faster than the speed of light. Hence, if something like faster-than-light travel is possible, it is neither common nor easily achieved.

As noted, this does not mean it is impossible – only that if it is possible, it is very, very hard. In particular, this means it will be very, very hard to design an experiment to demonstrate the phenomenon, much less to turn it into a technology rather than a curiosity. Thus, a lot of effort will be necessary to get to see it, if it is possible at all.

What is a lot? Well, CERN is a bit. But human, or even robotic, space exploration is an entirely different category – some one to two orders of magnitude more. Probably, we would need to combine such space exploration with particle physics to really get at it. Possibly the best example of such an endeavor is the future LISA project to measure gravitational waves in space. It is perhaps even our current best bet for observing any hints of faster-than-light phenomena, aside from bigger particle-physics experiments on earth.

Do we have the technology for such a project? Yes, we do. We have had it for roughly a decade. But it will likely take at least one more decade to get LISA flying. Why not now? Resources. Or, as it is often put equivalently, costs.

And here comes the catch. I said it is our best chance. But this does not mean it is a good chance. In fact, even if faster-than-light travel is possible, I would be very surprised if we saw it with this mission. There are probably a few more generations of technology, and another order of magnitude of resources, needed before we could see something, given what I know about how well everything currently fits. Of course, there can always be surprises with every little step further. I am sure we will discover something interesting, possibly spectacular, with LISA. But I would not bet anything valuable that it will have to do with warp travel.

So, you see, we have to scale up if we want to go to the stars. This means investing resources – a lot of them. But resources are also needed to fix things on earth. And the more we damage, the more we need to fix, and the less we have left to get to the stars. Right now, humanity is moving into a state of perpetual crisis. The damage wrought by the climate crisis will require enormous efforts to mitigate, much more to stop the downhill trajectory. As a consequence of the climate crisis, as well as of social inequality, more and more conflicts will create further damage. Furthermore, isolationism, both national and social, driven by fear of the oncoming crises, will also soak up tremendous amounts of resources. And, finally, an environment hostile to diversity, one that puts individual gains above common gains, creates a climate which is hostile to anything new and different in general, and to science in particular. Hence, we will not be able to use our resources, or the ingenuity of the human species as a whole, to get to the stars.

Thus, I am not hopeful that I will see faster-than-light travel in my lifetime, or that of the next generation. Such a challenge, if it is possible at all, will require a common effort of our species. That would truly be a worthy endeavour to set our minds to. But right now, as a scientist, I am much more occupied with protecting a world in which science is possible, both metaphorically and literally.

But there is always hope. If we rise up and decide to change fundamentally, putting the well-being of us all first, then I would be optimistic that we can get out there – well, at least as fast as nature permits, however fast that will ever be.

by Unknown (noreply@blogger.com) at February 07, 2019 09:17 AM

January 18, 2019

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

Back to school

It was back to college this week, a welcome change after some intense research over the hols. I like the start of the second semester, there’s always a great atmosphere around the college with the students back and the restaurants, shops and canteens back open. The students seem in good form too, no doubt enjoying a fresh start with a new set of modules (also, they haven’t yet received their exam results!).

This semester, I will teach my usual introductory module on the atomic hypothesis and early particle physics to second-years. As always, I’m fascinated by the way the concept of the atom emerged from different roots and different branches of science: from philosophical considerations in ancient Greece to considerations of chemistry in the 18th century, from the study of chemical reactions in the 19th century to considerations of statistical mechanics around the turn of the century. Not to mention a brilliant young patent clerk who became obsessed with the idea of showing that atoms really exist, culminating in his famous paper on Brownian motion. But did you know that Einstein suggested at least three different ways of measuring Avogadro’s constant? And each method contributed significantly to establishing the reality of atoms.


 In 1908, the French physicist Jean Perrin demonstrated that the motion of particles suspended in a liquid behaved as predicted by Einstein’s formula, derived from considerations of statistical mechanics, giving strong support for the atomic hypothesis.  

One change this semester is that I will also be involved in delivering a new module,  Introduction to Modern Physics, to first-years. The first quantum revolution, the second quantum revolution, some relativity, some cosmology and all that.  Yet more prep of course, but ideal for anyone with an interest in the history of 20th century science. How many academics get to teach interesting courses like this? At conferences, I often tell colleagues that my historical research comes from my teaching, but few believe me!

Update

Then of course, there’s also the module Revolutions in Science, a course I teach on Mondays at University College Dublin; it’s all go this semester!

by cormac at January 18, 2019 04:15 PM

January 17, 2019

Robert Helling - atdotde

Has your password been leaked?
Today there was news that a huge database containing 773 million email address / password pairs has become public. On Have I Been Pwned you can check whether any of your email addresses is in this database (or any similar one). I bet it is (mine are).

These lists are very probably the source of the spam emails that have been around for a number of months, in which the spammer claims they broke into your account and tries to prove it by telling you your password. Hopefully, this is only a years-old LinkedIn password that you changed aeons ago.

To make sure, you actually want to search not for your email but for your password. But of course, you don't want to tell anybody your password. To this end, I have written a small perl script that checks for your password without telling anybody, by doing the calculation locally on your computer. You can find it on GitHub.
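The author's script checks locally against the downloaded list. As an illustration of the same privacy idea, here is a minimal Python sketch using the Pwned Passwords k-anonymity range API, in which only the first five hex characters of the password's SHA-1 hash ever leave your machine:

    import hashlib
    import urllib.request

    def pwned_count(password: str) -> int:
        digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
        prefix, suffix = digest[:5], digest[5:]
        req = urllib.request.Request(
            f"https://api.pwnedpasswords.com/range/{prefix}",
            headers={"User-Agent": "password-check-sketch"},
        )
        with urllib.request.urlopen(req) as resp:
            body = resp.read().decode("utf-8")
        # each response line is "SUFFIX:COUNT"; the server never learns the suffix
        for line in body.splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
        return 0

    print(pwned_count("correct horse battery staple"))  # a positive count means: change it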

by Unknown (noreply@blogger.com) at January 17, 2019 07:43 PM

January 12, 2019

Sean Carroll - Preposterous Universe

True Facts About Cosmology (or, Misconceptions Skewered)

I talked a bit on Twitter last night about the Past Hypothesis and the low entropy of the early universe. Responses reminded me that there are still some significant misconceptions about the universe (and the state of our knowledge thereof) lurking out there. So I’ve decided to quickly list, in Tweet-length form, some true facts about cosmology that might serve as a useful corrective. I’m also putting the list on Twitter itself, and you can see comments there as well.

  1. The Big Bang model is simply the idea that our universe expanded and cooled from a hot, dense, earlier state. We have overwhelming evidence that it is true.
  2. The Big Bang event is not a point in space, but a moment in time: a singularity of infinite density and curvature. It is completely hypothetical, and probably not even strictly true. (It’s a classical prediction, ignoring quantum mechanics.)
  3. People sometimes also use “the Big Bang” as shorthand for “the hot, dense state approximately 14 billion years ago.” I do that all the time. That’s fine, as long as it’s clear what you’re referring to.
  4. The Big Bang might have been the beginning of the universe. Or it might not have been; there could have been space and time before the Big Bang. We don’t really know.
  5. Even if the BB was the beginning, the universe didn’t “pop into existence.” You can’t “pop” before time itself exists. It’s better to simply say “the Big Bang was the first moment of time.” (If it was, which we don’t know for sure.)
  6. The Borde-Guth-Vilenkin theorem says that, under some assumptions, spacetime had a singularity in the past. But it only refers to classical spacetime, so says nothing definitive about the real world.
  7. The universe did not come into existence “because the quantum vacuum is unstable.” It’s not clear that this particular “Why?” question has any answer, but that’s not it.
  8. If the universe did have an earliest moment, it doesn’t violate conservation of energy. When you take gravity into account, the total energy of any closed universe is exactly zero.
  9. The energy of non-gravitational “stuff” (particles, fields, etc.) is not conserved as the universe expands. You can try to balance the books by including gravity, but it’s not straightforward.
  10. The universe isn’t expanding “into” anything, as far as we know. General relativity describes the intrinsic geometry of spacetime, which can get bigger without anything outside.
  11. Inflation, the idea that the universe underwent super-accelerated expansion at early times, may or may not be correct; we don’t know. I’d give it a 50% chance, lower than many cosmologists but higher than some.
  12. The early universe had a low entropy. It looks like a thermal gas, but that’s only high-entropy if we ignore gravity. A truly high-entropy Big Bang would have been extremely lumpy, not smooth.
  13. Dark matter exists. Anisotropies in the cosmic microwave background establish beyond reasonable doubt the existence of a gravitational pull in a direction other than where ordinary matter is located.
  14. We haven’t directly detected dark matter yet, but most of our efforts have been focused on Weakly Interacting Massive Particles. There are many other candidates we don’t yet have the technology to look for. Patience.
  15. Dark energy may not exist; it’s conceivable that the acceleration of the universe is caused by modified gravity instead. But the dark-energy idea is simpler and a more natural fit to the data.
  16. Dark energy is not a new force; it’s a new substance. The force causing the universe to accelerate is gravity.
  17. We have a perfectly good, and likely correct, idea of what dark energy might be: vacuum energy, a.k.a. the cosmological constant. An energy inherent in space itself. But we’re not sure.
  18. We don’t know why the vacuum energy is much smaller than naive estimates would predict. That’s a real puzzle.
  19. Neither dark matter nor dark energy are anything like the nineteenth-century idea of the aether.

Feel free to leave suggestions for more misconceptions. If they’re ones that I think many people actually have, I might add them to the list.

by Sean Carroll at January 12, 2019 08:31 PM

January 09, 2019

Dmitry Podolsky - NEQNET: Non-equilibrium Phenomena

Physical Methods of Hazardous Wastewater Treatment

Hazardous waste comprises all types of waste with the potential to cause harmful effects on the environment and on animal and human health. It is generated from multiple sources, including industries, commercial properties and households, and comes in solid, liquid and gaseous forms.

Different localities have different local and state laws regarding the management of hazardous waste. Irrespective of your jurisdiction, management starts with proper hazardous waste collection from your Utah property through to its eventual disposal.

There are many methods of treating waste after its collection, using the appropriate structures recommended by environmental protection authorities. One of the most common and least expensive is physical treatment. The following are the physical treatment options for hazardous wastewater.

Sedimentation

In this treatment technique, the waste is separated into a liquid and a solid. The solid waste particles in the liquid are left to settle to the bottom of a container under gravity. Sedimentation is done as a continuous or a batch process.

Continuous sedimentation is the standard option and is generally used for treating large quantities of liquid waste. It is often used for the separation of heavy metals in the steel, copper and iron industries, and of fluoride in the aluminum industry.
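For a rough feel for the settling rate this relies on, the classical Stokes terminal velocity of a small spherical particle (a textbook formula, not taken from the original post) is

$$v_s = \frac{g\, d^2 (\rho_p - \rho_f)}{18\, \mu},$$

where d is the particle diameter, ρp and ρf the particle and fluid densities, μ the dynamic viscosity of the fluid, and g the gravitational acceleration – so halving the particle size slows settling by a factor of four, which is why fine suspensions settle so slowly.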

Electro-Dialysis

This treatment method involves separating wastewater into a depleted stream and an aqueous stream. The wastewater passes through alternating cation- and anion-permeable membranes in a compartment.

A direct current is then applied, driving the cations and anions in opposite directions. This results in solutions with elevated concentrations of positive and negative ions and another with a low ion concentration.

Electro-dialysis is used to enrich or deplete chemical solutions in manufacturing, desalting whey in the food sector and generating potable water from saline water.

Reverse Osmosis


This uses a semi-permeable membrane for the separation of dissolved organic and inorganic elements in wastewater. The wastewater is forced through the semi-permeable membrane by pressure, and larger molecules are filtered out by the small membrane pores.

Polyamide membranes have largely replaced polysulphone ones for wastewater treatment nowadays, owing to their ability to withstand liquids with high pH. Reverse osmosis is usually used in the desalination of brackish water and in treating electroplating rinse waters.

Solvent Extraction

This involves separating the components of a liquid through contact with an immiscible liquid. The most common solvent used in this treatment technique is a supercritical fluid (SCF), mainly CO2.

These fluids exist above the critical temperature, where condensation no longer occurs, and have a low density and fast mass transfer when mixed with other liquids. Solvent extraction is used for extracting oil from the emulsions used in steel and aluminum processing, and organohalide pesticides from treated soil.

Supercritical ethane as a solvent is also useful for the purification of waste oils contaminated with water, metals, and PCBs.

Some companies and households have tried handling their hazardous wastewater themselves to minimize costs. This, in most cases, puts their employees at risk, since the “treated” water is still often dangerous to human health, the environment and their machines.

The physical processes above, sometimes used together with chemical treatment techniques, are the guaranteed options for truly safe wastewater.

The post Physical Methods of Hazardous Wastewater Treatment appeared first on None Equilibrium.

by Nonequilibrium at January 09, 2019 11:35 PM

January 08, 2019

Axel Maas - Looking Inside the Standard Model

Taking your theory seriously
This blog entry is somewhat different from usual. Rather than writing about a particular research project, I will write about a general vibe directing my research.

As usual, research starts with a 'why?'. Why does something happen, and why does it happen in this way? Being the theoretician that I am, this question often equates to wanting a mathematical description of both the question and the answer.

Already very early in my studies I ran into peculiar problems with this desire. It usually left me staring at the words '...and then nature made a choice', asking myself: how could it? A simple example of the problem is a magnet. You all know that a magnet has a north pole and a south pole, and that these two are different. So, how is it decided which end of the magnet becomes the north pole and which the south pole? At the beginning you are always told that this is a random choice, and it just happens that one particular choice is made. But this is not really the answer. If you dig deeper, you find that the metal of any magnet was originally very hot, likely liquid. In this situation, a magnet is not really magnetic. It becomes magnetic when it is cooled down and becomes solid: at some temperature (the so-called Curie temperature), it becomes magnetic, and the poles emerge. And here this apparent miracle of a 'choice by nature' happens. Except that it does not. The magnet does not cool down all by itself; it has a surrounding. And the surrounding can have magnetic fields as well, e.g. the earth's magnetic field. And the decision of what becomes south and what becomes north is made by how the magnet forms relative to this field. And thus, there is a reason. We do not see it directly, because magnets have usually been moved since then, and thus this correlation is no longer obvious. But if we heated the magnet and let it cool down again, we could observe this.

But this immediately leaves you with the question of where the Earth's magnetic field comes from, and how it got its direction. Well, it comes from the liquid metallic core of the Earth, and it aligns, more or less, along or opposite to the rotation axis of the Earth. Thus, the question is how the rotation axis of the Earth came about, and why the Earth has a liquid core. Both questions are well understood, and the answers arise from how the Earth formed billions of years ago, due to the mechanics of the rotating disk of dust and gas which formed around our fledgling sun – which in turn comes from the dynamics on even larger scales. And so on.

As you see, whenever we had the feeling of a random choice, it was actually something outside of what we had looked at so far which made the decision. So, such questions always lead us to include more into what we try to understand.

'Hey', I can now literally hear people say who are a bit more acquainted with physics, 'doesn't quantum mechanics make truly random choices?'. The answer to this is yes and no in equal measure. This is probably one of the more fundamental problems of modern physics. Yes, our description of quantum mechanics, as we teach it in courses, has intrinsic randomness. But when does it occur? Yes, exactly: whenever we jump outside of the box we describe in our theory. Real, random choice is encountered in quantum physics only whenever we transcend the system we are considering, e.g. by an external measurement. This is one of the reasons why this is known as the 'measurement problem'. If we stay inside the system, this does not happen. But this comes at the expense of losing contact with things, like an ordinary magnet, which we are used to. The objects we are describing become obscure, and we talk about wave functions and stuff like this. Whenever we try to extend our description to also include the measurement apparatus, on the other hand, we again get something which is strange, but not as random as it originally looked. Although talking about it beyond the mathematical description becomes almost impossible. And it is not really clear what random even means anymore in this context. This problem is one of the big conceptual ones in physics. While it is related to what I am talking about here, it can still be kept separate.

And in fact, it is not this divide that I want to talk about, at least not today. I just wanted to get this type of 'quantum choice' out of the way. Rather, I want to get to something else.

If we stay inside the system we describe, then everything becomes calculable. Our mathematical description is closed in the sense that, after fixing a theory, we can calculate everything. Well, at least in principle; in practice our technical capabilities may limit this. But this is of no importance for the conceptual point. Once we have fixed the theory, there is no choice anymore. There is no outside. And thus, everything needs to come from inside the theory. Thus, a magnet in isolation will never magnetize, because there is nothing which can make a decision about how. The different possibilities are caught in an eternal balanced struggle, and none can win.

Which makes a lot of sense, if you take physical theories really seriously. After all, one of the basic tenets is that there is no privileged frame of reference: 'Everything is relative'. If there is nothing else, nothing can happen which creates an absolute frame of reference, without violating the very principles on which we found physics. If we take our own theories seriously, and push them to the bitter end, this is what needs to come about.

And here I come back to my own research. One of its driving principles has been to really push this seriousness, and to ask what it implies if one really, really takes the theory seriously. Of course, this is based on the assumption that the theory is (sufficiently) adequate, but that is everyday uncertainty for a physicist anyhow. This requires me to very, very carefully separate what is really inside, and what is outside. And this leads to quite surprising results. Most of my research on Brout-Englert-Higgs physics, as described in previous entries, comes about because of this approach. And it leads partly to results quite at odds with common lore, which often means a lot of work to convince people. Even if the mathematics is valid and correct, interpretation issues are much more open to debate when it comes to implications.

Is this point of view adequate? After all, we know for sure that we are not yet finished: our theories do not contain all there is, and there is an 'outside', however it may look. And I agree. But I think it is very important that we very clearly distinguish what is an outside influence, and what is not. And as a first step to establishing what is outside, and thus, in a sense, is 'new physics', we need to understand what our theories say when they are taken in isolation.

by Unknown (noreply@blogger.com) at January 08, 2019 10:15 AM

January 06, 2019

Jacques Distler - Musings

TLS 1.0 Deprecation

You have landed on this page because your HTTP client used TLSv1.0 to connect to this server. TLSv1.0 is deprecated and support for it is being dropped from both servers and browsers.

We are planning to drop support for TLSv1.0 from this server in the near future. Other sites you visit have probably already done so, or will do so soon. Accordingly, please upgrade your client to one that supports at least TLSv1.2. Since TLSv1.2 has been around for more than a decade, this should not be hard.
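
If you want to check what your own client can negotiate, a quick test along these lines should work in Python 3.7 or later (the hostname below is a placeholder; substitute the server you care about):

    import socket
    import ssl

    host = "example.org"  # placeholder hostname, not this server's address

    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse TLSv1.0 and TLSv1.1

    with socket.create_connection((host, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            print(tls.version())  # e.g. 'TLSv1.2' or 'TLSv1.3'

If this prints TLSv1.2 or newer, your client is fine; if the handshake fails, either the server or your client tops out below TLSv1.2.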

by Jacques Distler at January 06, 2019 06:12 AM


January 05, 2019

The n-Category Cafe

Applied Category Theory 2019 School

Dear scientists, mathematicians, linguists, philosophers, and hackers:

We are writing to let you know about a fantastic opportunity to learn about the emerging interdisciplinary field of applied category theory from some of its leading researchers at the ACT2019 School. It will begin February 18, 2019 and culminate in a meeting in Oxford, July 22–26. Applications are due January 30th; see below for details.

Applied category theory is a topic of interest for a growing community of researchers, interested in studying systems of all sorts using category-theoretic tools. These systems are found in the natural sciences and social sciences, as well as in computer science, linguistics, and engineering. The background and experience of our community’s members is as varied as the systems being studied.

The goal of the ACT2019 School is to help grow this community by pairing ambitious young researchers together with established researchers in order to work on questions, problems, and conjectures in applied category theory.

Who should apply

Anyone from anywhere who is interested in applying category-theoretic methods to problems outside of pure mathematics. This is emphatically not restricted to math students, but one should be comfortable working with mathematics. Knowledge of basic category-theoretic language—the definition of monoidal category for example—is encouraged.

We will consider advanced undergraduates, PhD students, and post-docs. We ask that you commit to the full program as laid out below.

Instructions for how to apply can be found below the research topic descriptions.

Senior research mentors and their topics

Below is a list of the senior researchers, each of whom describes a research project that their team will pursue, as well as the background reading that will be studied between now and July 2019.

Miriam Backens

Title: Simplifying quantum circuits using the ZX-calculus

Description: The ZX-calculus is a graphical calculus based on the category-theoretical formulation of quantum mechanics. A complete set of graphical rewrite rules is known for the ZX-calculus, but not for quantum circuits over any universal gate set. In this project, we aim to develop new strategies for using the ZX-calculus to simplify quantum circuits.

Background reading:

  1. Matthew Amy, Jianxin Chen, Neil Ross. A finite presentation of CNOT-Dihedral operators.
  2. Miriam Backens. The ZX-calculus is complete for stabiliser quantum mechanics.

Tobias Fritz

Title: Partial evaluations, the bar construction, and second-order stochastic dominance

Description: We all know that 2+2+1+1 evaluates to 6. A less familiar notion is that it can partially evaluate to 5+1. In this project, we aim to study the compositional structure of partial evaluation in terms of monads and the bar construction and see what this has to do with financial risk via second-order stochastic dominance.
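
As a toy illustration (mine, not the project's), here is a sketch that collapses one group of summands at a time; the full notion allows collapsing along any partition of the summands at once:

    from itertools import combinations

    def partial_evaluations(terms):
        """Yield the formal sums reachable by summing one sub-multiset of terms."""
        n = len(terms)
        seen = set()
        for size in range(2, n + 1):
            for idx in combinations(range(n), size):
                rest = [t for k, t in enumerate(terms) if k not in idx]
                result = tuple(sorted(rest + [sum(terms[k] for k in idx)], reverse=True))
                if result not in seen:
                    seen.add(result)
                    yield result

    # 2+2+1+1 partially evaluates to 5+1 (collapsing 2+2+1), and fully to 6.
    for pe in partial_evaluations([2, 2, 1, 1]):
        print(pe)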

Background reading:

  1. Tobias Fritz and Paolo Perrone. Monads, partial evaluations, and rewriting.
  2. Maria Manuel Clementino, Dirk Hofmann, George Janelidze. The monads of classical algebra are seldom weakly cartesian.
  3. Todd Trimble. On the bar construction.

Pieter Hofstra

Title: Complexity classes, computation, and Turing categories

Description: Turing categories form a categorical setting for studying computability without bias towards any particular model of computation. It is not currently clear, however, whether Turing categories are useful for studying practical aspects of computation such as complexity. This project revolves around the systematic study of step-based computation in the form of stack-machines, the resulting Turing categories, and complexity classes. This will involve a study of the interplay between traced monoidal structure and computation. We will explore the idea of stack machines qua programming languages, investigate their expressive power, and tie this to complexity theory. We will also consider questions such as the following: can we characterize the Turing categories arising from stack machines? Is there an initial such category? How does this structure relate to other categorical structures associated with computability?

Background reading:

  1. J.R.B. Cockett and P.J.W. Hofstra. Introduction to Turing categories. APAL, Vol 156, pp. 183-209, 2008.
  2. J.R.B. Cockett, P.J.W. Hofstra and P. Hrubes. Total maps of Turing categories. ENTCS (Proc. of MFPS XXX), pp. 129-146, 2014.
  3. A. Joyal, R. Street and D. Verity. Traced monoidal categories. Math. Proc. Cam. Phil. Soc. 119(3), pp. 447-468, 1996.

Bartosz Milewski

Title: Traversal optics and profunctors

Description: In functional programming, optics are ways to zoom into a specific part of a given data type and mutate it. Optics come in many flavors such as lenses and prisms and there is a well-studied categorical viewpoint, known as profunctor optics. Of all the optic types, only the traversal has resisted a derivation from first principles into a profunctor description. This project aims to do just this.
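
For readers new to optics, here is a minimal sketch of a concrete lens in the simple view/update encoding (my illustration; the profunctor encoding this project targets is its more abstract cousin):

    from dataclasses import dataclass, replace
    from typing import Callable, Generic, TypeVar

    S = TypeVar("S")
    A = TypeVar("A")

    @dataclass(frozen=True)
    class Lens(Generic[S, A]):
        """A lens zooms into one part A of a whole S: view it, or update it."""
        view: Callable[[S], A]
        update: Callable[[S, A], S]

        def over(self, s: S, f: Callable[[A], A]) -> S:
            """Modify the focused part through the lens."""
            return self.update(s, f(self.view(s)))

    @dataclass(frozen=True)
    class Point:
        x: float
        y: float

    # Zoom into the x coordinate of a Point and mutate it.
    x_lens = Lens(view=lambda p: p.x, update=lambda p, v: replace(p, x=v))
    print(x_lens.over(Point(1.0, 2.0), lambda v: v + 10))  # Point(x=11.0, y=2.0)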

Background reading:

  1. Bartosz Milewski. Profunctor optics, categorical view.
  2. Craig Pastro, Ross Street. Doubles for monoidal categories.

Mehrnoosh Sadrzadeh

Title: Formal and experimental methods to reason about dialogue and discourse using categorical models of vector spaces

Description: Distributional semantics argues that the meanings of words can be represented by the frequencies of their co-occurrences in context. A model extending distributional semantics from words to sentences has a categorical interpretation via Lambek’s syntactic calculus or pregroups. In this project, we intend to further extend this model to reason about dialogue and discourse utterances, where people interrupt each other and where there are references that need to be resolved, disfluencies, pauses, and corrections. Additionally, we would like to design experiments and run toy models to verify the predictions of the developed models.

Background reading:

  1. Gerhard Jäger (1998): A multi-modal analysis of anaphora and ellipsis. University of Pennsylvania Working Papers in Linguistics 5(2), p. 2.
  2. Matthew Purver, Ronnie Cann, and Ruth Kempson. Grammars as parsers: meeting the dialogue challenge. Research on Language and Computation, 4(2-3):289–326, 2006.

David Spivak

Title: Toward a mathematical foundation for autopoiesis

Description: An autopoietic organization—anything from a living animal to a political party to a football team—is a system that is responsible for adapting and changing itself, so as to persist as events unfold. We want to develop mathematical abstractions that are suitable to found a scientific study of autopoietic organizations. To do this, we’ll begin by using behavioral mereology and graphical logic to frame a discussion of autopoiesis, most of all what it is and how it can best be conceived. We do not expect to complete this ambitious objective; we hope only to make progress toward it.

Background reading:

  1. Brendan Fong, David Jaz Myers, David Spivak. Behavioral mereology.
  2. Brendan Fong, David Spivak. Graphical regular logic.
  3. Luhmann. Organization and Decision, CUP. (Preface)

School structure

All of the participants will be divided up into groups corresponding to the projects. A group will consist of several students, a senior researcher, and a TA. Between January and June, we will have a reading course devoted to building the background necessary to meaningfully participate in the projects. Specifically, two weeks are devoted to each paper from the reading list. During this two week period, everybody will read the paper and contribute to discussion in a private online chat forum. There will be a TA serving as a domain expert and moderating this discussion. In the middle of the two week period, the group corresponding to the paper will give a presentation via video conference. At the end of the two week period, this group will compose a blog entry on this background reading that will be posted to the n-category cafe.

After all of the papers have been presented, there will be a two-week visit to Oxford University, 15–26 July 2019. The second week is solely for participants of the ACT2019 School. Groups will work together on research projects, led by the senior researchers.

The first week of this visit is the ACT2019 Conference, where the wider applied category theory community will arrive to share new ideas and results. It is not part of the school, but there is a great deal of overlap and participation is very much encouraged. The school should prepare students to be able to follow the conference presentations to a reasonable degree.

To apply

To apply please send the following to act2019school@gmail.com by January 30th, 2019:

  • Your CV
  • A document with:
    • An explanation of any relevant background you have in category theory or any of the specific projects areas
    • The date you completed or expect to complete your Ph.D. and a one-sentence summary of its subject matter.
  • Order of project preference
  • To what extent can you commit to coming to Oxford (availability of funding is uncertain at this time)
  • A brief statement (~300 words) on why you are interested in the ACT2019 School. Some prompts:
    • how can this school contribute to your research goals?
    • how can this school help in your career?

Also arrange for a brief letter of recommendation to be sent on your behalf to act2019school@gmail.com, confirming any of the following:

  • your background
  • ACT2019 School’s relevance to your research/career
  • your research experience

Questions?

For more information, contact either

  • Daniel Cicala. cicala (at) math (dot) ucr (dot) edu

  • Jules Hedges. julian (dot) hedges (at) cs (dot) ox (dot) ac (dot) uk

by john (baez@math.ucr.edu) at January 05, 2019 10:54 PM

January 04, 2019

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

A Christmas break in academia

There was a time when you wouldn’t catch sight of this academic in Ireland over Christmas – I used to head straight for the ski slopes as soon as term ended. But family commitments and research workloads have put paid to that, at least for a while, and I’m not sure it’s such a bad thing. Like many academics, I dislike being away from the books for too long and there is great satisfaction to be had in catching up on all the ‘deep roller’ stuff one never gets to during the teaching semester.


The professor in disguise in former times 

The first task was to get the exam corrections out of the way. Unlike most of my peers, I quite enjoy this job. I’m always interested to see how the students got on, and it’s the only task in academia that usually takes slightly less time than expected. Then it was on to some rather more difficult corrections – putting together revisions to my latest research paper, as suggested by the referee. This is never a quick job, especially as the points raised are all very good and some quite profound. It helps that the paper has been accepted to appear in Volume 8 of the prestigious Einstein Studies series, but this is a task that is taking some time.

Other grown-up stuff includes planning for upcoming research conferences – two abstracts now in the post, let’s see if they’re accepted. I also spent a great deal of the holidays helping to organize an international conference on the history of physics that will be hosted in Ireland in 2020. I have very little experience in such things, so it’s extremely interesting, if time consuming.

So there is a lot to be said for spending Christmas at home, with copious amounts of study time uninterrupted by students or colleagues. An interesting bonus is that a simple walk in the park or by the sea seems a million times more enjoyable after a good morning’s swot.  I’ve never really holidayed well and I think this might be why.


A walk on Dun Laoghaire pier yesterday afternoon

As for New Year’s resolutions, I’ve taken up Ciara Kelly’s challenge of a brisk 30-minute walk every day. I also took up tennis in a big way a few months ago – now there’s a sport that is a million times more practical in this part of the world than skiing.


by cormac at January 04, 2019 08:56 PM

January 03, 2019

Dmitry Podolsky - NEQNET: Non-equilibrium Phenomena

Getting the Most Out of Your Solar Hot Water System

Solar panels and hot water systems are great ways to save some serious cash on your energy bills. You make good use of the sun, maximizing a natural, renewable resource instead of relying on artificial energy sources that can harm the Earth in the long run.

However, solar panel systems must be properly used and maintained to make sure that you are getting the most out of them. Many users and owners do not know how to use their systems properly, which leads to a huge waste of energy and money.

Here, we will talk about the things you can do to make sure you get the most out of your solar panels after you are done with your solar PV installation.

Make Use of Boiler Timers and Solar Controllers

Ask your solar panel supplier whether they can provide boiler timers and solar controllers. These make sure that the backup heating source only heats the water after the sun has already heated it to the maximum extent. That point usually comes once the panels are no longer directly exposed to the sun, typically late in the afternoon or whenever the sun changes its position.
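
As a toy sketch of that control logic (the hours and temperatures below are my own illustrative assumptions, not figures from any supplier):

    SOLAR_WINDOW_END_HOUR = 17  # assume useful solar gain ends around 5 pm
    TARGET_TEMP_C = 60.0        # assumed target tank temperature

    def backup_heater_should_run(hour, tank_temp_c):
        """Fire the backup heater only once solar heating is done for the day
        and the tank is still below the target temperature."""
        return hour >= SOLAR_WINDOW_END_HOUR and tank_temp_c < TARGET_TEMP_C

    # Midday: let the sun work. Evening: top up only if still below target.
    for hour, temp in [(12, 40.0), (16, 55.0), (18, 55.0), (19, 62.0)]:
        print(hour, backup_heater_should_run(hour, temp))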

You should also see to it that the cylinder has enough cold water for the sun to heat up after you have used up all of the hot water. This is to ensure that you will have hot water to use for the next day, which is especially important if you use hot water in the morning.

Check the Cylinder and Pipes Insulation

After having the solar panels and hot water system installed on your home, you should see to it that the cylinder and pipes are properly insulated. Failure to do so will result in inadequate hot water, making the system inefficient.

Solar panel systems that do not have insulated cylinders will not heat up your water enough, so make sure to ask the supplier and the people handling the installation about this to make the most out of your system.

Do Not Overfill the Storage


Avoid filling the hot water vessel to the brim, as doing so can make the system inefficient. Aside from not getting the water as hot as you want it, you also risk having the system break down sooner than you expect.

Ask the supplier or the people installing the system to install a twin coil cylinder. This lets the solar collector or thermal store heat its own section of the cylinder through a dedicated coil, separate from the backup heating.

In cases where there is no dedicated solar volume, the timing of the backup heating will have a huge impact on the solar hot water system’s performance. This usually applies to systems where the existing cylinder is kept rather than changed.

Knowing how to properly use and maintain your solar hot water system is a huge time and money saver. It definitely would not hurt to ask your solar panel supplier and installer questions, so make sure to raise whatever is on your mind. Enjoy your hot water and make sure to have your system checked every once in a while!

The post Getting the Most Out of Your Solar Hot Water System appeared first on None Equilibrium.

by Bertram Mortensen at January 03, 2019 06:05 PM