Particle Physics Planet

February 15, 2019

Emily Lakdawalla - The Planetary Society Blog

NASA just got its best budget in decades
After months of unrelated political turmoil, multiple stop-gap spending bills, and an unprecedented government shutdown, NASA's 2019 budget was finally signed into law.

February 15, 2019 10:00 PM

Peter Coles - In the Dark

Subaru and Cosmic Shear

Up with the lark this morning I suddenly remembered I was going to do a post about a paper which actually appeared on the arXiv some time ago. Apart from the fact that it’s a very nice piece of work, the first author is Chiaki Hikage who worked with me as a postdoc about a decade ago. This paper is extremely careful and thorough, which is typical of Chiaki’s work. Its abstract is here:

The work described uses the Hyper-Suprime-Cam Subaru Telescope to probe how the large-scale structure of the Universe has evolved by looking at the statistical effect of gravitational lensing – specifically cosmic shear – as a function of redshift (which relates to look-back time). The use of redshift binning as demonstrated in this paper is often called tomography. Gravitational lensing is sensitive to all the gravitating material along the line of sight to the observer so probes dark, as well as luminous, matter.
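To illustrate the tomography idea in the simplest possible terms (this is my own toy sketch, not anything from the HSC pipeline; the bin edges are just indicative of the rough redshift range used in the first-year analysis), one assigns each galaxy to a redshift bin and then treats each bin as a separate lensing sample:

```python
import numpy as np

# Toy sketch of tomographic redshift binning (illustrative only):
# galaxies are split into redshift bins, and cosmic-shear statistics
# are then computed per bin (and per pair of bins).
rng = np.random.default_rng(42)
z = rng.uniform(0.3, 1.5, size=1000)           # hypothetical photometric redshifts
edges = np.array([0.3, 0.6, 0.9, 1.2, 1.5])    # four illustrative tomographic bins

bin_index = np.digitize(z, edges[1:-1])        # 0..3 for each galaxy
counts = np.bincount(bin_index, minlength=4)   # galaxies per tomographic bin
print(counts, counts.sum())
```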

Here’s a related graphic:

The article that reminded me of this paper is entitled New Map of Dark Matter Spanning 10 Million Galaxies Hints at a Flaw in Our Physics. Well, no it doesn’t really. Read the abstract, where you will find a clear statement that these results `do not show significant evidence for discordance’. Just a glance at the figures in the paper will convince you that is the case. Of course, that’s not to say that the full survey (which will be very much bigger; the current paper is based on just 11% of the full data set) may not reveal such discrepancies, just that this analysis does not. Sadly this is yet another example of misleadingly exaggerated science reporting. There’s a lot of it about.

Incidentally, the parameter S8 is a (slightly) rescaled version of the more familiar parameter σ8 – which quantifies the matter-density fluctuations on a scale of 8 h⁻¹ Mpc – as defined in the abstract; cosmic shear is particularly sensitive to this parameter.
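If memory serves, the rescaling used in this (and most weak-lensing) papers is S8 ≡ σ8 (Ωm/0.3)^0.5, so the two parameters coincide when Ωm = 0.3; a quick numerical check of that convention:

```python
# Common weak-lensing convention (worth checking against the paper's
# abstract, since the exponent alpha can vary between analyses):
# S8 = sigma8 * (Omega_m / 0.3)**alpha with alpha = 0.5.
def S8(sigma8, omega_m, alpha=0.5):
    return sigma8 * (omega_m / 0.3) ** alpha

print(S8(0.8, 0.3))   # -> 0.8: at Omega_m = 0.3 the rescaling is trivial
```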

Anyway, if this is what can be done with just 11%, the full survey should be a doozy!

by telescoper at February 15, 2019 07:19 AM

February 14, 2019

Christian P. Robert - xi'an's og

undecidable learnability

“There is an unknown probability distribution P over some finite subset of the interval [0,1]. We get to see m i.i.d. samples from P for m of our choice. We then need to find a finite subset of [0,1] whose P-measure is at least 2/3. The theorem says that the standard axioms of mathematics cannot be used to prove that we can solve this problem, nor can they be used to prove that we cannot solve this problem.”

In the first issue of the (controversial) Nature Machine Intelligence journal, Ben-David et al. wrote a paper they present as the machine learning equivalent to Gödel’s incompleteness theorem. The result is somewhat surprising from my layman perspective and it seems to only relate to a formal representation of statistical problems. Formal as in Vapnik–Chervonenkis (PAC) learning theory. It sounds like, given a finite learning dataset, there are always features that cannot be learned if the size of the population grows to infinity, but this is hardly exciting…
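The estimation problem in the quote can at least be simulated, even if its learnability in general is independent of the standard axioms; here is a minimal sketch (my own toy, with a made-up distribution) of the natural "return what you saw" strategy:

```python
import numpy as np

# Toy version of the problem quoted above: an unknown P on a finite
# subset of [0,1]; we see m i.i.d. samples and return the set of points
# actually observed, hoping its P-measure reaches 2/3. (The theorem
# concerns whether a strategy provably works for all P, not this one.)
support = np.array([0.1, 0.25, 0.6, 0.9])   # hypothetical finite support
probs = np.array([0.4, 0.3, 0.2, 0.1])

rng = np.random.default_rng(0)
samples = rng.choice(support, size=20, p=probs)
seen = np.unique(samples)

measure = probs[np.isin(support, seen)].sum()  # P-measure of the returned set
print(seen, measure)
```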

The above quote actually makes me think of the Robbins-Wasserman counter-example for censored data and Bayesian tail prediction, but I am unsure the connection is anything more than sheer fantasy..!

by xi'an at February 14, 2019 11:19 PM

Emily Lakdawalla - The Planetary Society Blog

The Mars Exploration Rovers Update Special Report: NASA Declares Opportunity and MER Mission “Complete”
At around 8 pm February 12, 2019, Pacific Standard Time (PST), the final commands were transmitted to Opportunity, the Mars Exploration Rover that defied all odds.

February 14, 2019 04:28 PM

Peter Coles - In the Dark

Copenhagen Yet Again

Once again I find myself in the wonderful city of Copenhagen. As far as I’m concerned, at least, my wavefunction has collapsed (along with the rest of me) into a definite location: Ibsen’s Hotel, in fact. Henrik Ibsen isn’t here: he checked out many years ago.

The hotel management, being Danes, are refreshingly honest in their description of my room:

Usually hotel rooms this size are described as `standard’…

After a very enjoyable but rather tiring day yesterday I was up early this morning to get from Loughborough to Luton Airport. What I thought would be the reasonable way of making the trip – train from Loughborough to Luton Airport Parkway and shuttle bus from there – turned out to be inconvenient in terms of timing and cost, so the kind people of Loughborough University just booked me a cab all the way there. I had to leave at 7am, though, so missed the hotel breakfast but I got to the airport in good time to have something there.

My second flight with Ryanair this week was also on time and Copenhagen’s excellent public transport system got me to this hotel very quickly. It’s a good few degrees colder here than in England.

When I checked in the receptionist asked me if I had stayed here before. I said yes, but couldn’t remember when. She said it was 2012, as I was still on their system. I did actually post about it then. The hotel hasn’t changed at all from what I remember last time. I must remember to get to breakfast in good time.

The flight from Luton Airport carried a large contingent of Chelsea supporters. Their team is playing Malmö this evening in the UEFA Europa League. Malmö is easily reachable from Copenhagen by train over the Øresund Bridge. Fortunately I was heading into Copenhagen on the Metro so parted company with the supporters as soon as I left the airport.

Anyway, I’m in Copenhagen again as one of the External Examiners for a thesis defence at the Niels Bohr Institute tomorrow morning and then I’ll be returning directly to Dublin on Saturday afternoon. I’m missing today’s Computational Physics lecture and laboratory in Maynooth, but the students are being well looked after in my absence by John and Aaron who have all the notes and lab scripts.


by telescoper at February 14, 2019 02:37 PM

Christian P. Robert - xi'an's og

O’Bayes 19: registration and travel support

An update about the O’Bayes 19 conference next June-July:  the early registration period has now opened. And there should be funds for supporting early-career researchers, thanks to Google and CNRS sponsorships, as detailed below:

Early-career researchers, less than four years from PhD, are invited to apply for early-career scholarships. If you are a graduate student, postdoctoral researcher or early-career academic and you are giving a poster, you are eligible to apply. Female researchers and underrepresented minorities are especially encouraged to apply. Selected applicants will receive up to £450, which can be used for any combination of fees, travel and accommodation costs, subject to receipts.

The deadline for applying is the 15th of March (which is also the deadline to submit the abstract for the poster) and it has to be done at the registration phase via the dedicated page. Those who have submitted an abstract before this information on scholarships was made available (11 Feb.) and who are applying for travel support should contact the organisers.

by xi'an at February 14, 2019 01:18 PM

February 13, 2019

John Baez - Azimuth

Exploring New Technologies

I’ve got some good news! I’ve been hired by Bryan Johnson to help evaluate and explain the potential of various technologies to address the problem of climate change.

Johnson is an entrepreneur who sold his company Braintree for $800M and started the OS Fund in 2014, seeding it with $100M to invest in the hard sciences so that we can move closer towards becoming proficient system administrators of our planet: engineering atoms, molecules, organisms and complex systems. The fund has invested in many companies working on synthetic biology, genetics, new materials, and so on. Here are some writeups he’s done on these companies.

As part of my research I’ll be blogging about some new technologies, asking questions and hoping experts can help me out. Stay tuned!

by John Baez at February 13, 2019 11:36 PM

Christian P. Robert - xi'an's og

I’m getting the point

A long-winded X validated discussion on the [textbook] mean-variance conjugate posterior for the Normal model left me [mildly] depressed at the point and use of answering questions on this forum. Especially as it came at the same time as a catastrophic outcome for my mathematical statistics exam. Possibly an incentive to quit X validated as one quits smoking, although this is not the first attempt.

by xi'an at February 13, 2019 11:19 PM

Peter Coles - In the Dark

R.I.P. Gordon Banks (1937-2019)

It’s been a hectic couple of days during which I somehow missed the very sad news of the passing of legendary goalkeeper Gordon Banks, who died yesterday (12th February 2019) at the age of 81.

Gordon Banks made 628 appearances during a 15-year career in the Football League, and won 73 caps for England, highlighted by starting every game of the 1966 World Cup campaign. He will however be best remembered for one amazing save in the 1970 World Cup, so by way of a short tribute here is a rehash of a post I wrote some years ago about that.


I’ve posted a few times about science and sport, but this bit of action seems to defy the laws of physics. I remember watching this match, a group game at Guadalajara (Mexico) between England and Brazil from the 1970 World Cup, live on TV when I was seven years old. The Brazil team of 1970 was arguably the finest collection of players ever to grace a football field and the names of Jairzinho, Carlos Alberto, Rivelino and, of course, Pelé, were famous even in our school playground. The England team of 1970 was also very good, but they were made to look very ordinary that day – with one notable exception.

The only thing I remember well about the game itself  was this save – the best of many excellent stops – by the great goalkeeper Gordon Banks. I’ve seen it hundreds of times since, and still can’t understand how he managed to block this header from Pelé. You can tell from Bobby Moore’s reaction (No. 6, on the line) that he also thought Brazil had scored…

Here’s the description of this action from wikipedia:

Playing at pace, Brazil were putting England under enormous pressure and an attack was begun by captain Carlos Alberto who sent a fizzing low ball down the right flank for the speedy Jairzinho to latch on to. The Brazilian winger sped past left back Terry Cooper and reached the byline. Stretching slightly, he managed to get his toes underneath the fast ball and deliver a high but dipping cross towards the far post. Banks, like all goalkeepers reliant on positional sensibility, had been at the near post and suddenly had to turn on his heels and follow the ball to its back post destination.

Waiting for the ball was Pelé, who had arrived at speed and with perfect timing. He leapt hard at the ball above England right back Tommy Wright and thundered a harsh, pacy downward header towards Banks’ near post corner. The striker shouted “Goal!” as he connected with the ball. Banks was still making his way across the line from Jairzinho’s cross and in the split-second of assessment the incident allowed, it seemed impossible for him to get to the ball. He also had to dive slightly backwards and down at the same time which is almost physically impossible. Yet he hurled himself downwards and backwards and got the base of his thumb to the ball, with the momentum sending him cascading to the ground. It was only when he heard the applause and praise of captain Bobby Moore and then looked up and saw the ball trundling towards the advertising hoardings at the far corner, that he realised he’d managed to divert the ball over the bar – he’d known he got a touch but still assumed the ball had gone in. England were not being well received by the locals after cutting comments made about Mexico prior to the tournament by Ramsey, but spontaneous applause rang around the Guadalajara, Jalisco stadium as Banks got back into position to defend the resulting corner. Pelé, who’d begun to celebrate a goal when he headed the ball, would later describe the save as the greatest he’d ever seen.

Here is Gordon Banks describing it in his own words.

Brazil deservedly went on to win the game, but only by a single goal. Without Gordon Banks, England would have been well and truly hammered.

Rest in peace, Gordon Banks (1937-2019).

by telescoper at February 13, 2019 09:24 PM

Lubos Motl - string vacua and pheno

Matrix theory: objects' entanglement entropy from local curvature tensor
I want to mention two papers that were released today. A Czech one and an Armenian one. In the Czech paper,
Hierarchy and decoupling,
Michal Malinský (senior co-author) and Matěj Hudec (also from a building where I spent a significant part of my undergrad years) exploit the new relaxed atmosphere in which everyone can write things about naturalness that would be agreed to be very dumb just some years ago. ;-) OK, so they don't see a problem with the unnaturalness of the Higgs potential in the Standard Model.

Harvey Mudd College, CA

If they nicely ban all the high-energy parameters and efforts to express physics as their functions, they may apply the perturbation theory to prove things like\[

m_H^2 \sim \lambda v^2

\] to all orders. The Higgs mass is always linked to the Higgs vev and no one can damage this relationship, assuming that you ban all the players that could damage it. ;-) OK, it's nice, I am probably missing something but their claim seems vacuous or circular. Of course if you avoid studying the dependence of the theory on the more fundamental parameters, e.g. the parameters of a quantum field theory expressed relatively to a high energy scale, you won't see a problematic unnatural dependence or fine-tuning. But such a ban of the high-energy independent parameters is tantamount to the denial of reductionism.
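For what it's worth, the schematic relation is numerically consistent with the usual tree-level convention \(m_H^2 = 2\lambda v^2\) (one common normalisation; papers differ on factors of two):

```python
import math

# Tree-level Standard Model convention m_H^2 = 2 * lambda * v^2
# (normalisations differ between papers; this is one common choice).
v = 246.22          # Higgs vev in GeV
m_H = 125.1         # measured Higgs mass in GeV

lam = m_H**2 / (2 * v**2)
print(round(lam, 3))                     # roughly 0.129
print(round(math.sqrt(2 * lam) * v, 1))  # recovers m_H
```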

I believe them that they don't have a psychological problem with naturalness of the Higgs potential but I still have one.

That was a hep-ph paper. On hep-th, I regularly search for the words "string", "entan", "swamp", and "matrix" (although the list is sometimes undergoing revisions), not to overlook some papers whose existence should be known to me. So today, "matrix" and "entan" converged to the same paper by an author whom I am fortunate to know, Vače Sahakian (or Vatche Սահակեան, if you find it more comprehensible):
On a new relation between entanglement and geometry from M(atrix) theory
He has sent the preprint from a muddy college in California which might immediately become one of the interesting places in fundamental physics. ;-)

OK, Vače assumes we have two objects in our beloved BFSS matrix model which are, as the matrix paradigm dictates, described by a block diagonal matrix. The upper left block describes the structure of the first object, the lower right block describes the second object, and the off-diagonal (generally rectangular) blocks are almost zero but these degrees of freedom are responsible for the interactions between the two objects.
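A cartoon of that block structure (just a NumPy sketch of one matrix degree of freedom; the sizes and entries are arbitrary illustrations, nothing BFSS-specific):

```python
import numpy as np

# Cartoon of the two-object configuration in a matrix model: two diagonal
# blocks describe the objects, small off-diagonal blocks mediate their
# interaction.
rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))            # object 1 (N1 x N1 block)
B = rng.normal(size=(2, 2))            # object 2 (N2 x N2 block)
eps = 1e-3 * rng.normal(size=(3, 2))   # nearly-zero off-diagonal block

X = np.block([[A, eps],
              [eps.T, B]])             # one (N1+N2) x (N1+N2) matrix d.o.f.
print(X.shape)                         # (5, 5)
```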

Vače allows the non-center-of-mass degrees of freedom of both blocks to optimize to the situation, he sort of traces over them, and wants to calculate the entanglement entropy of the center-of-mass degrees of freedom (which are the coefficients in front of the two blocks' identity matrices). He finds out that the von Neumann entropy depends on the derivatives of the gravitational potential, \(\partial_i \partial_j V\).

By a process of "covariantization", he translates the gravitational potential and its derivatives to the variables that are more natural in Einstein's general relativity, such as the Riemann tensor, which leads him to a somewhat hypothetical form of the entanglement entropy \[

S_{ent} = -\gamma^2 {\rm Tr} \left( \frac{{\mathcal R}^2}{4} \ln \frac{{\mathcal R}^2}{4} \right)

\] which is finite, concise, and elegant. Here, the \({\mathcal R}\) object is the Riemann tensor contracted with some expressions (partly involving matrices) that are either necessary for kinematic or geometric reasons or because of the embedding into the matrix model.

Aside from the finiteness, conciseness, and elegance, I still don't understand why this particular – not quite trivial – form of the result should make us happy or why it should look trustworthy or confirming some expectations that may be obtained by independent methods. "Something log something" is the usual form of the von Neumann entropy which has terms like \(-p_i\ln p_i\), as you know, but the probabilities should be replaced by a squared Riemann tensor. If it is true, I don't know what it means.
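The "something log something" structure is of course just the standard von Neumann form; as a textbook reminder (unrelated to the paper's specific \({\mathcal R}\)), the entropy is computed from the eigenvalues of the density matrix:

```python
import numpy as np

# Von Neumann entropy S = -Tr(rho ln rho) = -sum_i p_i ln p_i,
# computed from the eigenvalues of a density matrix.
rho = np.array([[0.9, 0.0],
                [0.0, 0.1]])
p = np.linalg.eigvalsh(rho)
S = -np.sum(p[p > 0] * np.log(p[p > 0]))
print(round(S, 3))   # about 0.325
```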

At the end, if a result like that were right, it could be possible to determine some entropy of black holes or wormholes or holographic deviations from locality (from the independence of regions) or something like that in Matrix theory but I have no idea why. It may be because I don't have a sufficient intuitive understanding of the entanglement entropy in general. At any rate, this is a kind of a combination of Matrix theory and the entanglement-is-glue duality that should be studied by many more people than one Vače Sahakian.

Incidentally, after a 7-month-long hiatus, Matt Strassler wrote a blog post about their somewhat innovative search for dimuon resonances.

by Luboš Motl at February 13, 2019 07:11 PM

Matt Strassler - Of Particular Significance

Breaking a Little New Ground at the Large Hadron Collider

Today, a small but intrepid band of theoretical particle physicists (professor Jesse Thaler of MIT, postdocs Yotam Soreq and Wei Xue of CERN, Harvard Ph.D. student Cari Cesarotti, and myself) put out a paper that is unconventional in two senses. First, we looked for new particles at the Large Hadron Collider in a way that hasn’t been done before, at least in public. And second, we looked for new particles at the Large Hadron Collider in a way that hasn’t been done before, at least in public.

And no, there’s no error in the previous paragraph.

1) We used a small amount of actual data from the CMS experiment, even though we’re not ourselves members of the CMS experiment, to do a search for a new particle. Both ATLAS and CMS, the two large multipurpose experimental detectors at the Large Hadron Collider [LHC], have made a small fraction of their proton-proton collision data public, through a website called the CERN Open Data Portal. Some experts, including my co-authors Thaler, Xue and their colleagues, have used this data (and the simulations that accompany it) to do a variety of important studies involving known particles and their properties. [Here’s a blog post by Thaler concerning Open Data and its importance from his perspective.] But our new study is the first to look for signs of a new particle in this public data. While our chances of finding anything were low, we had a larger goal: to see whether Open Data could be used for such searches. We hope our paper provides some evidence that Open Data offers a reasonable path for preserving priceless LHC data, allowing it to be used as an archive by physicists of the post-LHC era.

2) Since only a tiny fraction of CMS’s data was available to us, about 1% by some count, how could we have done anything useful compared to what the LHC experts have already done? Well, that’s why we examined the data in a slightly unconventional way (one of several methods that I’ve advocated for many years, but that has not been used in any public study). Consequently it allowed us to explore some ground that no one had yet swept clean, and even have a tiny chance of an actual discovery! But the larger scientific goal, absent a discovery, was to prove the value of this unconventional strategy, in hopes that the experts at CMS and ATLAS will use it (and others like it) in future. Their chance of discovering something new, using their full data set, is vastly greater than ours ever was.
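To give readers a flavour of what a resonance search looks like in general, here is a generic toy bump hunt with an injected fake signal (this is purely illustrative and is not our actual analysis or selection):

```python
import numpy as np

# Generic toy bump hunt: histogram a smoothly-falling "mass" spectrum,
# then scan bins for the largest local excess over the expected background.
rng = np.random.default_rng(7)
background = rng.exponential(scale=50.0, size=20000) + 20.0   # toy spectrum
signal = rng.normal(loc=90.0, scale=2.0, size=400)            # injected toy peak
masses = np.concatenate([background, signal])

edges = np.linspace(20, 200, 91)                  # 2-unit-wide bins
counts, _ = np.histogram(masses, bins=edges)

# Expected background per bin from the known toy model
centers = 0.5 * (edges[:-1] + edges[1:])
widths = np.diff(edges)
expected = 20000 * widths / 50.0 * np.exp(-(centers - 20.0) / 50.0)

z = (counts - expected) / np.sqrt(expected)       # naive per-bin significance
best = centers[np.argmax(z)]
print(round(best, 1))                             # peak near the injected 90
```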

Now don’t all go rushing off to download and analyze terabytes of CMS Open Data; you’d better know what you’re getting into first. It’s worthwhile, but it’s not easy going. LHC data is extremely complicated, and until this project I’ve always been skeptical that it could be released in a form that anyone outside the experimental collaborations could use. Downloading the data and turning it into a manageable form is itself a major task. Then, while studying it, there are an enormous number of mistakes that you can make (and we made quite a few of them) and you’d better know how to make lots of cross-checks to find your mistakes (which, fortunately, we did know; we hope we found all of them!) The CMS personnel in charge of the Open Data project were enormously helpful to us, and we’re very grateful to them; but since the project is new, there were inevitable wrinkles which had to be worked around. And you’d better have some friends among the experimentalists who can give you advice when you get stuck, or point out aspects of your results that don’t look quite right. [Our thanks to them!]

All in all, this project took us two years! Well, honestly, it should have taken half that time — but it couldn’t have taken much less than that, with all we had to learn. So trying to use Open Data from an LHC experiment is not something you do in your idle free time.

Nevertheless, I feel it was worth it. At a personal level, I learned a great deal more about how experimental analyses are carried out at CMS, and by extension, at the LHC more generally. And more importantly, we were able to show what we’d hoped to show: that there are still tremendous opportunities for discovery at the LHC, through the use of (even slightly) unconventional model-independent analyses. It’s a big world to explore, and we took only a small step in the easiest direction, but perhaps our efforts will encourage others to take bigger and more challenging ones.

For those readers with greater interest in our work, I’ll put out more details in two blog posts over the next few days: one about what we looked for and how, and one about our views regarding the value of open data from the LHC, not only for our project but for the field of particle physics as a whole.

by Matt Strassler at February 13, 2019 01:43 PM

Emily Lakdawalla - The Planetary Society Blog

Planetary Radio: 10 Must-Listen Episodes About Space Exploration
The Planetary Society staff has selected our top ten favorite episodes of Planetary Radio. Listen now.

February 13, 2019 01:00 PM

Peter Coles - In the Dark

Loughborough Pride in STEM Research Showcase

So here I am then, in Burleigh Court (a hotel on the campus of Loughborough University), having just had a fine breakfast, preparing for the start of today’s Pride in STEM Research Showcase, which I am very much looking forward to. I’m giving the keynote talk at the end of the day’s events and will be here for the whole day. I’m very grateful to the organizers for inviting me and especially to Claudia Eberlein, Dean of Science at Loughborough University for greeting me when I arrived at Burleigh Court.

Some readers may recall that I worked with Claudia Eberlein at the University of Sussex a few years ago – she was Head of the Department of Physics & Astronomy for a time, but last year she moved to her new role at Loughborough. It was nice to have a beer and share some gossip about goings-on at the old place. It seems quite a few of the people I worked with at Sussex until 2016 have moved on to pastures new. Perhaps I’d better not comment further.

Anyway, I travelled yesterday evening from Dublin via the dreaded Ryanair who operate the only direct flights from Dublin to East Midlands Airport. In fairness, though, it was a very pleasant experience: we departed and arrived on time, where I was met on arrival by a driver who took me to Burleigh Court by taxi.

Well, I had better get my act together for the start of the meeting. Toodle-pip!

by telescoper at February 13, 2019 08:28 AM

Emily Lakdawalla - The Planetary Society Blog

Touchdown for InSight's Heat Probe
InSight has gone two for two, placing the second of its instruments gently on the Martian ground.

February 13, 2019 12:11 AM

February 12, 2019

Christian P. Robert - xi'an's og

a pen for ABC

Among the flurry of papers arXived around the ICML 2019 deadline, I read on my way back from Oxford a paper by Wiqvist et al. on learning summary statistics for ABC by neural nets. It points to another recent paper by Jiang et al. (2017, Statistica Sinica), which constructed a neural network for predicting each component of the parameter vector based on the input (raw) data, as an automated non-parametric regression of sorts. Creel (2017) does the same but with summary statistics. The current paper builds up from Jiang et al. (2017) by adding the constraint that exchangeability and partial exchangeability features should be reflected by the neural net prediction function, with applications to Markovian models. Due to a factorisation theorem for d-block invariant models, the authors impose partial exchangeability for order-d Markov models by combining two neural networks that end up satisfying this factorisation. The concept is exemplified for one-dimensional g-and-k distributions and alpha-stable distributions, both of which are made of independent observations, and for the AR(2) and MA(2) models, as in our 2012 ABC survey paper. Since the latter is not Markovian, the authors experiment with different orders and reach the conclusion that an order of 10 is most appropriate, although this may be impacted by being able to handle the true likelihood.
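A stripped-down version of the regression-as-summary idea (linear regression instead of a neural net, exchangeability enforced by sorting; entirely my own toy, not the paper's architecture):

```python
import numpy as np

# Toy version of learning a summary statistic as a regression E[theta | data]:
# simulate (theta, x) pairs from the prior predictive, regress theta on a
# sorted (hence exchangeability-respecting) representation of x, and use the
# fitted prediction as the ABC summary. A linear model stands in for the net.
rng = np.random.default_rng(3)
n_sim, n_obs = 2000, 5
theta = rng.normal(size=n_sim)                        # prior draws
x = theta[:, None] + rng.normal(size=(n_sim, n_obs))  # i.i.d. observations

features = np.sort(x, axis=1)                         # order statistics of each dataset
design = np.column_stack([np.ones(n_sim), features])
beta, *_ = np.linalg.lstsq(design, theta, rcond=None)

def summary(data):
    return np.concatenate([[1.0], np.sort(data)]) @ beta

x_obs = np.array([0.8, 1.4, 0.2, 1.1, 0.9])           # hypothetical observed data
print(round(summary(x_obs), 2))                       # learned point summary of theta
```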

by xi'an at February 12, 2019 11:19 PM

CERN Bulletin

Delegates' Corner

Everyone has certainly heard of the CERN Staff Association at least once in his or her career! It is an official CERN body whose primary purpose is to defend the economic, social, professional and moral interests of its personnel.

But the Staff Association is above all women and men, elected by YOU as staff delegates who represent YOU.

We have decided to give them the floor so that they can explain in their own words who they are and what they do.

Today we give the floor to Lynda Meichtry.

- Hello Lynda, could you introduce yourself to our readers?

Yes, my name is Lynda Meichtry, I am in charge of the general administration (Departmental Administrative Officer - DAO) and the training (Departmental Training Officer - DTO) of the section’s personnel within the Director General’s Unit, but I also work for CERN's Translation & Minutes Department (TMC).

I joined CERN in 2004 in the Human Resources (HR) department, which gave me the opportunity to acquire knowledge in the areas of Staff Rules and Regulations and CERN administrative procedures.

- What does the Staff Association represent for you?

For me, the Association is above all the body that guarantees the protection of the conditions of employment of the members of personnel, which requires concertation with CERN and its Member States.

- When and how did you join the Staff Association?

I joined the Staff Association as a member in 2004, during the Induction session in which everyone participates upon arrival at CERN.

- What is the point of being a member of the Staff Association?

That's a good question! The Staff Association is here to represent us, so it needs as many members as possible in order to be the most representative possible of the personnel. It also provides access to certain private services and benefits such as exclusive partnerships with CERN clubs, leisure centres, banks, insurance companies, etc.

- What is the purpose of being a staff delegate? How is this useful?

Being a staff delegate consists of being informed, involved and being one of the interlocutors of the members of personnel of your department. This enables you to feel useful to your colleagues on issues other than your daily work.

There are also many different commissions working on various issues. It requires motivated and competent people to make progress on such issues but also to train new delegates.

We learn a lot in areas that we do not always get to know in our own work and it is very rewarding!

- How do you feel about the mission of a staff delegate?

I am learning new things every day and even if sometimes the amount of work seems heavy, as one team, we are stronger! For me it is really an exciting experience to be a delegate.

Having the support of staff members is essential because it is very motivating for delegates and reinforces their professional and often personal investment.

- How much time does it take you to fulfil the mission as a delegate?

It depends on whether we limit ourselves to the most important meetings such as the Staff Council or whether we decide to take part in commissions or committees. This varies according to the urgency of the work at CERN as well. On average, this can represent 10% of working time or more spread over one year.

- We can see that this mission is time-consuming; how do you juggle it with your daily work at CERN?

It is not always easy to participate as much as we would like, but we are trying to do our best to make as much progress as possible. In the end, the time spent on these activities is useful to the Staff Association and therefore to CERN's members of personnel!

- The final word?

The Staff Association office is located in the main building (64/R-010). Do not hesitate to come and meet us, the secretariat will be happy to provide any information you require.

You can also find further information by visiting our website:

Thank you, Lynda, for agreeing to be interviewed!

We hope that, thanks to Lynda, you have been able to discover the Staff Association from another perspective.

In our next edition of the "Delegates' Corner" we will have an interview with a young delegate elected in the last elections of 2017!


February 12, 2019 05:02 PM

CERN Bulletin


The GAC organises drop-in sessions with individual interviews, held on the last Tuesday of each month.

The next session will take place on:

Tuesday 26 February, from 1.30 pm to 4.00 pm

Staff Association meeting room

The sessions of the Groupement des Anciens are open to beneficiaries of the Pension Fund (including surviving spouses) and to all those approaching retirement.

We warmly invite the latter to join our group by obtaining the necessary documents from the Staff Association.

Information:

Contact form:


February 12, 2019 05:02 PM

CERN Bulletin



We pay a heartfelt tribute to Michel Boffard, who sadly passed away on 8 January 2019 at the age of 78. His passing has deeply touched all those who knew and worked with him over his very long career at CERN. But it is above all his qualities as a man that we honour, those he put at the service of the Staff Association and of the Groupement des Anciens. A portrait of him will appear in the next issue of ECHO.

February 12, 2019 05:02 PM

CERN Bulletin


CERN MicroClub needs your help!

In the early 1970s, with a number of staff members, we started looking for ways to buy computers for private use at affordable prices. We contacted various manufacturers of the time (Atari, Commodore, Apple II, Olivetti, ...) and their Swiss or French representatives. We received the same answer several times: "We will be ready to offer you special conditions if you create an internal structure at CERN to manage purchases".

As a result, by the end of 1983, we had the idea of creating the CERN MICRO CLUB (CMC) under the aegis of the Staff Association.

At first, the club held its meetings in the offices of the first committee.

We discussed the choice of computers, the first purchases to make, the club coordination but also the conditions of purchase to negotiate with the manufacturers.

Very quickly, by word of mouth, the number of members increased so much that after one year we had more than 50 registered members!

In 1984, with the arrival of the first Mac and the very interesting conditions granted by Apple, the number of members continued to increase. We had to find new premises but already at that time it was not easy. At the beginning of the 90s, we finally succeeded: we obtained a part of Bldg. 555!

The Club's internal organization was at that time divided into sections according to the type of equipment used: Atari, Commodore, Apple II, MS DOS, games library and technical library.

A secretariat was set up to manage membership requests and fees, place material orders, pay invoices, create and send newsletters to members.

The club continued to grow and in 1993, a new Video/Photo section was created.

Our functioning is simple: one member is appointed to head each new section and becomes a de facto member of the committee, in addition to the four statutory members: president, vice-president, secretary and treasurer.

At that time, with the agreement of CERN Management and the Staff Association, we were able to open the club to other International Organizations in Geneva and to professors from Universities and colleges such as EPFL or EPFZ.

The club continued to grow and organized workshops, conferences, presentations of new equipment by manufacturers and training courses on different software. Several pieces of equipment were provided to our members: slide scanners, A3 printers, devices for digitizing of Super 8 films.

Members could also subscribe to the "TeleSupport" remote support service. (editor's note: Currently only available for Mac.)

The club could also make official repairs of Apple, Dell, Lenovo and Brother devices.

In the early 2000s, given the large number of members and the multitude of activities, we moved to the current premises at Bldg. 567. Access for external members is easier because it avoids site access control.

A new Robotics section was then created. It brought in a new generation of members interested in microprocessors and automation.

From the history of our club you can see the constant evolution over the last few years. The club is doing well but the committee, which has now been in place for some time, is getting older.

We must ensure the sustainability of the club in order not to lose this beautiful heritage. We would like to meet some good volunteers, ready to give a little of their time, and share their innovative ideas to allow the club to continue its evolution and last for many years to come!

If you are interested in this adventure, do not hesitate any longer!

Contact us on the following email address:


February 12, 2019 05:02 PM

Peter Coles - In the Dark

Mumps and Mumpsimusses

I noticed that there has been an outbreak of mumps among students in the Dublin area (including a case in Maynooth). I had mumps when I was a kid and I can tell you it was no fun at all. I had thought mumps had been virtually eradicated by vaccination; the MMR vaccine was brought into use in the UK in 1988, and I had mumps long before that. I suppose one can lay the blame for the current outbreak at the door of the anti-vaxxers.

That brings me to one of my favourite words – yet another that I found out while doing a crossword – mumpsimus. Here is (part of) the OED entry:

Wikipedia gives “traditional custom obstinately adhered to however unreasonable it may be”, which is in the OED further down the page.

It seems to me that belief in the idea that one’s children should not be protected against mumps is a mumpsimus, and people who cling to that belief are mumpsimusses.


by telescoper at February 12, 2019 11:56 AM

Robert Helling - atdotde

Bohmian Rapsody

Visits to a Bohmian village

Over all of my physics life, I have been under the local influence of some Gaul villages that have ideas about physics that are not 100% aligned with the mainstream views: When I was a student in Hamburg, I was good friends with people working on algebraic quantum field theory. Of course there were opinions that they were the only people seriously working on QFT as they were proving theorems while others dealt only with perturbative series that are known to diverge and are thus obviously worthless. Funnily enough, they were literally sitting above the HERA tunnel where electron proton collisions took place that were very well described by exactly those divergent series. Still, I learned a lot from these people and would say there are few who have thought more deeply about structural properties of quantum physics. These days, I use more and more of these things in my own teaching (in particular in our Mathematical Quantum Mechanics and Mathematical Statistical Physics classes as well as when thinking about foundations, see below) and even some other physicists start using their language.

Later, when I was a PhD student at the Albert Einstein Institute in Potsdam, the place was an accumulation point of people from the Loop Quantum Gravity community, with Thomas Thiemann and Renate Loll having long term positions and many others frequently visiting. As you probably know, a bit later, I decided (together with Giuseppe Policastro) to look into this more deeply, resulting in a series of papers that were well received, at least amongst our peers, and about which I am still a bit proud.

Now, I have been in Munich for over ten years. And here at the LMU math department there is a group calling themselves the Workgroup Mathematical Foundations of Physics. And let's be honest, I call them the Bohmians (and sometimes the Bohemians). And once more, most people believe that the Bohmian interpretation of quantum mechanics is just a fringe approach that is not worth wasting any time on. You will have already guessed it: I did so none the less. So here is a condensed report of what I learned and what I think should be the official opinion on this approach. This is an informal write up of a notes paper that I put on the arXiv today.

What Bohmians don't like about the usual (termed Copenhagen, for lack of a better word) approach to quantum mechanics is that you are not allowed to talk about so many things and that the observer plays such a prominent role by determining via a measurement what aspect is real and what is not. They think this is far too subjective. So instead, they want quantum mechanics to be about particles that are then allowed to follow trajectories.

"But we know this is impossible!" I hear you cry. So, let's see how this works. The key observation is that the Schrödinger equation for a Hamilton operator of the form kinetic term (possibly with a magnetic field) plus potential term has a conserved current

$$j = \frac{1}{i}\left(\bar\psi\nabla\psi - (\nabla\bar\psi)\psi\right).$$

So as your probability density is $\rho=\bar\psi\psi$, you can think of that being made up of particles moving with a velocity field

$$v = j/\rho = 2\Im(\nabla \psi/\psi).$$

What this buys you is that if you have a bunch of particles that are initially distributed like the probability density and follow the flow of the velocity field, they will also later be distributed like $|\psi |^2$.
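This equivariance property can be illustrated numerically. Here is my own toy sketch (not from the post; I assume the convention $i\partial_t\psi = -\psi''$ implicit in the factor of 2 above): take a free Gaussian packet, for which $\psi$ is known in closed form, transport an ensemble of particles along $v = 2\Im(\psi'/\psi)$, and check that the ensemble stays $|\psi|^2$-distributed:

```python
import numpy as np

# Toy check of equivariance for the Bohmian velocity field v = 2 Im(psi'/psi),
# using the convention i d/dt psi = -psi'' implicit in the factor of 2 above.
# Free Gaussian packet: psi(x,t) ~ exp(-x^2 / (4 (a0 + i t))), so |psi(.,t)|^2
# is a Gaussian with variance (a0^2 + t^2) / a0.

a0 = 1.0  # initial variance of |psi|^2

def v(x, t):
    # psi'/psi = -x / (2 (a0 + i t)), hence v = 2 Im(psi'/psi)
    return 2.0 * np.imag(-x / (2.0 * (a0 + 1j * t)))

rng = np.random.default_rng(0)
x = rng.normal(0.0, np.sqrt(a0), size=100_000)  # sampled from |psi(.,0)|^2

# transport the ensemble along dx/dt = v(x,t) with midpoint steps
T, n = 3.0, 3000
dt = T / n
for k in range(n):
    t = k * dt
    xm = x + 0.5 * dt * v(x, t)
    x = x + dt * v(xm, t + 0.5 * dt)

predicted = (a0**2 + T**2) / a0   # variance of |psi(.,T)|^2
print(np.var(x), predicted)       # the two should agree closely
```

For this packet each trajectory just gets rescaled by $\sigma(t)/\sigma(0)$; the point is that following the flow alone reproduces the quantum distribution at later times.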

What is important is that they keep the Schrödinger equation intact. So everything that you can do with the original Schrödinger equation (i.e. everything) can be done in the Bohmian approach as well. If you set up your Hamiltonian to describe a double slit experiment, the Bohmian particles will flow nicely to the screen and arrange themselves in interference fringes (as the probability density does). So you will never come to a situation where any experimental outcome differs from what the Copenhagen prescription predicts.

The price you have to pay, however, is that you end up with a very non-local theory: The velocity field lives in configuration space, so the velocity of every particle depends on the position of all other particles in the universe. I would say, this is already a show stopper (given what we know about quantum field theory whose raison d'être is locality) but let's ignore this aesthetic concern.

What got me into this business was the attempt to understand how the set-ups like Bell's inequality and GHZ and the like work out that are supposed to show that quantum mechanics cannot be classical (technically, that the state space cannot be described by local probability densities). The problem with those is that they are often phrased in terms of spin degrees of freedom whose Hamiltonians are not directly of the form above. You can use a Stern-Gerlach-type apparatus to translate the spin degree of freedom into a positional one, but at the price of a Hamiltonian that is not explicitly known, let alone one for which you can analytically solve the Schrödinger equation. So you don't see much.

But from Reinhard Werner and collaborators I learned how to set up qubit-like algebras from positional observables of free particles (at different times, so as to get something non-commuting, which you need to make use of entanglement as a specific quantum resource). So here is my favourite example:

You start with two particles, each following a free time evolution but confined to an interval. You set those up in a particular entangled state (stationary, as it is an eigenstate of the Hamiltonian) built from the two lowest levels of the particle in the box. And then you observe, for each particle, whether it is in the left or the right half of the interval.

From symmetry considerations (details in my paper) you can see that each particle is found with the same probability on the left and on the right. But the two are anti-correlated when measured at the same time, and when measured at different times, the correlation oscillates like the cosine of the time difference.
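These statements can be checked numerically. Here is a sketch of my own making (I assume a box of unit length, units with $\hbar = 2m = 1$ so that $E_n = (n\pi)^2$, the antisymmetric combination of the two lowest levels, and the observable $\mathrm{sign}(x - 1/2)$ for "left or right"; the paper's exact conventions may differ):

```python
import numpy as np

# Two particles in a box [0,1], entangled state (|1,2> - |2,1>)/sqrt(2) built
# from the two lowest levels.  Observable per particle: sign(x - 1/2).
# Units hbar = 2m = 1, so E_n = (n pi)^2.  Since the state lives entirely in
# the two-level subspace, only the 2x2 block of the sign operator matters.

def phi(n, x):
    return np.sqrt(2.0) * np.sin(n * np.pi * x)

xs = np.linspace(0.0, 1.0, 20001)
sgn = np.sign(xs - 0.5)
# S_mn = <phi_m| sign(x - 1/2) |phi_n> restricted to levels 1 and 2
S = np.array([[np.trapz(phi(m, xs) * sgn * phi(n, xs), xs) for n in (1, 2)]
              for m in (1, 2)])
E = np.array([np.pi ** 2, (2 * np.pi) ** 2])

def correlation(t):
    """<psi| S_1(0) S_2(t) |psi> with S_2 evolved in the Heisenberg picture."""
    U = np.diag(np.exp(-1j * E * t))
    St = U.conj().T @ S @ U
    return float(np.real(-(S[0, 1] * St[1, 0] + S[1, 0] * St[0, 1]) / 2))

dE = E[1] - E[0]
c2 = (8 / (3 * np.pi)) ** 2   # analytic value of S_12^2
for t in (0.0, 0.2, np.pi / dE):
    print(t, correlation(t), -c2 * np.cos(dE * t))  # the two columns agree
```

With these conventions the equal-time correlation comes out as $-(8/3\pi)^2 \approx -0.72$ (anti-correlated), and it indeed oscillates as minus the cosine of the time difference.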

From the Bohmian perspective, for the static initial state, the velocity field vanishes everywhere and nothing moves. But in order to capture the time dependent correlations, as soon as one particle has been measured, the position of the second particle has to oscillate in the box (how the measurement works in detail is not specified in the Bohmian approach, since it involves other degrees of freedom and, remember, everything depends on everything; but somehow it has to work, since you want to produce the correlations that are predicted by the Copenhagen approach).

The trajectory of the second particle depending on its initial position

This is somehow the Bohmian version of the collapse of the wave function but they would never phrase it that way.

And here is where it becomes problematic: If you could see the Bohmian particle moving you could decide if the other particle has been measured (it would oscillate) or not (it would stand still). No matter where the other particle is located. With this observation you could build a telephone that transmits information instantaneously, something that should not exist. So you have to conclude you must not be able to look at the second particle and see if it oscillates or not.

Bohmians tell you that you cannot, because all you are supposed to observe about the particles are their positions (and not their velocities). And if you try to measure the velocity by measuring the position at two instants in time, you fail, because the first observation disturbs the particle so much that it invalidates the original state.

As it turns out, you are not allowed to observe anything else about the particles than that they are distributed like $|\psi |^2$, because if you could, you could build a similar telephone (at least statistically), as I explain in the paper (this fact is known in the Bohm literature but I found it nowhere so clearly demonstrated as in this two particle system).

My conclusion is that the Bohm approach adds something (the particle positions) to the wave function but then in the end tells you you are not allowed to observe this or have any knowledge of this beyond what is already encoded in the wave function. It's like making up an invisible friend.

PS: If you haven't seen "Bohemian Rhapsody", yet, you should, even if there are good reasons to criticise the dramatisation of real events.

by Robert Helling ( at February 12, 2019 07:20 AM

Lubos Motl - string vacua and pheno

Nima, the latest target of the critics of physics
One week ago, we looked at Sabine Hossenfelder's unfriendly sentiments towards Lisa Randall.

Randall is famous for some clever and (even now) intriguing scenarios in particle physics while Hossenfelder is famous for persuading crackpots that physics is bad. That's a difference that Hossenfelder and her readers couldn't forgive Randall, and if Lisa Sundrum (as they romantically renamed her) were capable of giving a damn about what a bunch of irrelevant aßholes write on the Internet, they would have given her a hard time.

As you must agree, it would be a discrimination if the female big shot Lisa Randall were the only target. So Peter Woit has secured the minimum amount of fairness and political correctness when he (along with his readers) chose Nima Arkani-Hamed as a man who deserves some criticism today:
Where in the World are SUSY and WIMPs?
Woit compared two of Nima's talks with the same "Where..." title: an IAS talk from July 2017 (see also another talk he gave there) and a January 2019 edition of the "Where..." talk.

In Summer 2017, Nima pointed out that some algorithms looking for the global maximum of a function (such as the maximum of the accuracy of a physical theory) that are based on "small adjustments" may fail because they get stuck around a wrong local maximum – which is not a global maximum, however (e.g. the correct theory) – and a bolder jump towards the right basin of attraction is actually needed to find the correct solution.

In plain English, bold thinkers and courageous steps are sometimes necessary for paradigm shifts that may be needed, too.

Nima surely still agrees with the comments as described above – and I would guess that he still thinks that the observations may be relevant for the search for a better (or final) theory in fundamental physics. After all, many of us have played with these algorithms to search for the global maximum. One typical approach is to jump around and prefer the jumps that increase the function (which looks like an improvement) – the jump is more likely to be "approved" if we see an apparent improvement; or we may jump in the direction of some gradient – but they also add some noise which plays the role of the "experiments" that give us a chance to jump into another basin. Once we're sufficiently sure that we're near the right basin, we may reduce the noise – we may reduce the "temperature" that acts as a variable in this algorithm – and we may quickly converge to the global extremum.

Incidentally, I want to emphasize that "too high temperature" – jumping everywhere almost randomly – is no good, either. If you don't care about the local improvements at all, you can't converge to the truth, either – you are just randomly jumping in the whole configuration space which is probably very large and the probability of hitting the maximum is infinitesimal.

In plain English, infinite courage (or suicidal behavior) isn't a magic solution to all difficult problems in science (or outside science).
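The strategy sketched above, accepting improvements while keeping some noise ("temperature") to allow escapes from the wrong basin and then cooling down, is essentially simulated annealing. A toy sketch: the objective function, the step size, and the cooling schedule below are all invented for illustration.

```python
import math
import random

# Simulated annealing on a toy objective with a local maximum near x = -1
# and a higher, global maximum near x = 2.  While the "temperature" is high,
# downhill moves are sometimes accepted, so the walker can escape the wrong
# basin; cooling then freezes it into the best basin it has found.

def f(x):
    return math.exp(-(x + 1) ** 2) + 2.0 * math.exp(-(x - 2) ** 2)

def anneal(x=-1.0, temp=1.0, cooling=0.999, steps=20_000, seed=1):
    rng = random.Random(seed)
    for _ in range(steps):
        cand = x + rng.gauss(0.0, 0.5)
        # always accept improvements; accept downhill moves with
        # probability exp(delta / temp) while the temperature is high
        delta = f(cand) - f(x)
        if delta > 0 or rng.random() < math.exp(delta / temp):
            x = cand
        temp *= cooling  # gradually reduce the noise
    return x

# a purely greedy search started at -1 would stay at the local maximum;
# the annealed walker typically ends near the global one around x = 2
print(anneal())
```

A greedy "small adjustments" search corresponds to `temp = 0` from the start, which is exactly the failure mode of getting stuck in the wrong basin of attraction.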

But you know, Woit was annoyed that Nima "changed his mind" in 2019. Around 1:09:00 of the January 2019 talk given in front of some very bright young folks in Princeton, Nima said that the explanations why we should continue to do research of certain classes of theories of new (particle) physics may look like excuses of a paradigm that has failed. In fact, even the theories that weren't confirmed by the LHC may already be considered to be tweaks or excuses of some simpler earlier theories that had failed.

However, Nima says, people shouldn't give up trying to tweak what they have because almost no promising theories of the same type have ever completely failed in the history of physics. Some tweaks or reinterpretations were what was needed when the theory really looked promising. Nima suggested that the people may be inclined to be bottom-up physics builders or top-down theorists – and especially the latter should keep on tweaking, combining, and recombining the toolkit that they have developed or mastered.

As you can see, this is an immensely wise recommendation.

The process of finding and establishing better theories of particle physics resembles the boring of a new tunnel. It's a tunnel between the everyday life of the doable experiments such as those on the LHC or the FCC on one side; and the nearly philosophical, Platonic, idealist realm of very precise, principled, and powerful equations, mathematical structures, and ideas that most naturally work in the regime that is experimentally inaccessible.

To one extent or another, all fundamental physicists who care about the empirical truth at all are digging a tunnel between the \(1\TeV\) energy scale of the LHC and the \(10^{16}\TeV\) Planck (energy) scale. The tunnel is being dug from two directions and people on both sides must have some idea where they want to get. It's plausible that the two teams of workers will meet in the middle. It's also plausible that one of the two teams will be almost useless and the other, successful team will just dig the whole tunnel from their side to the other side! ;-)

Boring is a boring activity and you shouldn't imagine that just like the Ejpovice tunnel was extended by 15 meters every day, physicists add one order of magnitude to the energy every month (or year). Instead, the construction of the tunnel may be very non-uniform in time. In particular, the top-down theorists have prepared some potentially promising thermonuclear bombs that may help to dig a hole going in the right direction within milliseconds.

We don't really know – and we have never known – which of the teams is more promising to dig the whole damn tunnel. And there are some obvious differences between the two teams. The team digging from low energies, i.e. from the \(1\TeV\) LHC scale, cares about the ongoing experiments a lot and this team is affected by the results of those experiments. The other team – that mentally lives near the Planck scale – doesn't care about some latest experimental twists too much. They need to care about the problems with the rocks at the Planckian side of the mountain, and some approximate aiming needed to get to the LHC throat of the mountain.

People following or contributing to the hep-th archive – such as string theorists – are those on the top-down or Planckian side of the tunnel; people following or contributing to the hep-ph archive live on the low-energy, bottom-up, LHC side of the future tunnel.

OK, the tunnel is being built inside a mountain that rather clearly has some precious metals in it such as the gold of supersymmetry. It's almost fair to say that the gold of supersymmetry – I mean the supersymmetry breaking scale – is hiding somewhere in the bulk of the mountain. The bottom-up and top-down people have a different way of thinking about the location of that gold. Needless to say, the existence of the two approaches (and archives) – which was hinted at by Nima's comments – was completely overlooked by Woit and the other cranks. They probably don't understand the concept of hep-th and hep-ph at all.

For the bottom-up people who mentally live around the LHC, there are good reasons to think that the gold shouldn't be far enough. Gold is useful for circuits, golden teeth, jewels, coins, and other things – and supersymmetry is good to explain why the Higgs boson isn't much heavier than it is. So supersymmetry should be rather close, it shouldn't be too badly broken.

Well, the top-down people also understand that but they mentally live at much higher energies and \(1\TeV\) or \(10\TeV\) are rather close to each other – they're energy scales much smaller than the Planck scale (by some 15 orders of magnitude). So top-down people – well, at least your humble correspondent – were just never carried away by the idea that the superpartner masses "have to be" \(1\TeV\) instead of \(10\TeV\). The supersymmetric gold seems to be a mechanism that pushes the Higgs boson to the opposite, low-energy side of the mountain. But something must push supersymmetry itself to low enough energies as well – which is arguably "easier" and "more natural" than to make the Higgs boson light – and this mechanism is responsible for most of the lightness of the Higgs.

Well, it doesn't have to be responsible for 100% of the lightness of the Higgs. There is some physics near the \(100\GeV\) up to \(10\TeV\) energy range. The Higgs may very well be accidentally 10 times lighter than the average Standard Model superpartner. That's been my view for a long time which is why, in 2007, I estimated the probability of the SUSY discovery at the LHC to be 50%. I still made a bet against Adam Falkowski who felt sure it was just 1% or less but 50% indicated my agnosticism that was surely more widespread among the top-down people.

The fine-structure constant is \(\alpha\approx 1/137.036\), it's also rather far from one. Now, yes, I can give you some explanations why the constant defined in this way isn't quite of order one. But it's possible that we don't quite understand the right logic (the right formulae based on the relevant mechanism of supersymmetry breaking) to estimate the ratio of the Higgs and gluino masses and if that ratio were comparable to something like \(1/137.036\) as well, I wouldn't be "totally" shocked.

And I have always "accused" many bottom-up phenomenologists of a bias preferring early discovery and testability – which leads them to wishful thinking. You know, if you "believe" that the superpartners are light enough to be discovered by 2018, it has the advantage that it's exciting, and if you make the exact prediction and it happens to be correct, you will also get the big prizes in 2018 or soon afterwards and you don't need to apply the discount rate too much.

Note that this bias – which is obviously "diverging away from the objective arguments for the truth" – is almost equivalent to the "increased testability" preferences. People on the hep-ph side who really care about ongoing experiments may simply prefer "more (easily) testable theories" over "less (easily) testable theories". In particular, they prefer theories with lighter new particles over theories with heavier new particles.

This bias is good for them if the particles are there – and it backfires and becomes a disadvantage when the new particles aren't observed. As a top-down theorist, I look at these developments from a distance. I don't get passionate about these hopes which are irrational. The conclusion that many superpartners should be lighter than \(1\TeV\) was never justified by terribly strong arguments. My view is that "more (easily) testable theories" simply aren't more likely to be true than "less (easily) testable theories". As long as a theory is testable in principle, it's scientifically meaningful – and only actual material (theoretical or empirical) evidence for validity, not the "ease of testability", may help theories to beat others! I think that this is implicitly how other top-down physicists think as well but I think that almost no one explains these things as clearly as I do.

The LHC has found no such new particle, which means that, according to some measures, the degree of "experimentally proven" fine-tuning is already something like \(\Delta \geq 100\). That's a large number but it is not insanely large. Even if naturalness and SUSY predicted that superpartners should have been seen by the LHC by now with 95% probability, and I think it's less than 95%, the absence of such superpartners is still just a 2-sigma deficit of new physics! I just translated the 95% \(p\)-level into a number of standard deviations, using the usual dictionary. For 99%, which we may need, we would get 2.5 sigma or so.

Well, a 2-sigma deficit may be said to be curious but it is not insanely curious. We saw a 4-sigma excess of the \(750\GeV\) diphoton and it was just a statistical fluke. So why couldn't a milder, 2- or 2.5-sigma deficit of new physics at the LHC be a fluke? Of course it can be a fluke. As far as I am concerned, nothing has dramatically changed about the reasons to expect the discovery of supersymmetry in doable experiments. It's a non-rigorous but perfectly rational reasoning, especially for a top-down theorist who mentally digs on the Planckian side of the future tunnel.
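For reference, the "usual dictionary" mentioned above is just the two-sided Gaussian conversion between a probability and a number of standard deviations; a quick check with Python's standard library:

```python
from statistics import NormalDist

nd = NormalDist()
# two-sided: a fraction p of a Gaussian lies within +- n sigma
# when n = inv_cdf((1 + p) / 2)
for p in (0.95, 0.99, 0.9973):
    print(f"{p:.2%} -> {nd.inv_cdf((1 + p) / 2):.2f} sigma")
```

So 95% corresponds to about 2 sigma and 99% to about 2.5 sigma, as quoted above (with rounding).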

Nima says that people should keep on playing with – and tweaking and reinterpreting – the very promising models of physics beyond the Standard Model. There are two basic reasons why it's completely sensible in practice:
  • the absence of alternatives (hep-ph view)
  • the existence of many known alternatives (hep-th view)
These two reasons are perfectly complementary to each other because they contradict one another! ;-) So what do I fudging mean? Well, the first reason, "the absence of alternatives", describes the fact that among the effective field theories as understood by the bottom-up phenomenologists – who see how some mysterious complete theory ultimately reduces to the Standard Model or its supersymmetric extension – the pictures that were considered most promising, such as those with the MSSM, are still most promising.

I think that good physicists are eager to jump into a better "basin of attraction" as discussed at the beginning. But for this paradigm shift to make sense, such a basin of attraction must first be found; and it must be shown that it's at least equally promising as the known one(s). This hasn't really taken place which is why it's really nonsensical in practice to expect sane physicists to completely abandon their pictures. They would have nowhere to go. Their job is not to be satisfied with the currently known approximate theory. They are trying to learn more and among the candidate theories, they simply choose the most promising one.

The second reason I mention is the opposite one – the existence of many alternatives. Well, what I actually mean is the landscape of ideas available to a top-down theorist such as a string theorist. You know, these people will also refuse to totally abandon what they have – because what they have is everything that is mathematically consistent and known to mankind.

My point is that almost independently of events at the LHC or other experiments, folks like string theorists are constantly enriching their brains by all the theories, systems of equations etc. that make any sense and that have a chance to be relevant for fundamental high-energy physics and quantum gravity. They already work with all of them, at least as a community. The criticism that they're too narrow-minded – e.g. focusing on the same kind of vacua, models, or mathematical methods – is self-evidently wrong. They are already using extremely diverse methods, descriptions, \(10^{500}\) semi-realistic vacua in many classes, and many other things.

To summarize, of course the absence of new physics at the LHC so far cannot lead rational people to any jump, because the "destination" of such a jump is either impossible to guess, or it doesn't look better than what we have, or it's already being investigated by some theorists. Whether you find it emotionally pleasing or not, the confirmation of a null hypothesis gives us very little information and very little reason to make any qualitative shifts.

You know, what some emotional laymen might prefer would be for physicists to say: Physics has failed, now I accept Allah or loop quantum gravity (or any other crackpottery) as my savior and surrender. But that's exactly what a competent and rational physicist won't do. Physics cannot really fail. And even the relatively big qualitative ideas – which are "less than physics" but still pretty important – haven't been falsified.

People were combining, recombining, tweaking, and reinterpreting their ideas and models before the LHC runs and they will do so now, too. There is no other rational way to proceed. And of course they will push the goal posts. That's what scientists do when they accumulate some new data – improved lower bounds on the masses etc. Improved lower bounds on masses mean that the broad classes of theories and strategies have to be adjusted and goal posts have to be shifted. The latter is just a negative-sounding description of the correct fact that "a scientist should care about the empirical facts"! "Shifting the goal posts" is a phrase automatically persuading the listener that it describes a sin or a crime – but when this "shifting" is a reaction to some experimental data, it's just a synonym for "Bayesian inference"! It is a good thing.

The people at Woit's blog who dislike modern physics have understood that Woit wanted them to write variations of his own attack on Nima and they provided Woit with many copies of it. For example, Marshall Eubanks wrote:
... Phlogiston was abandoned too soon? What history of physics is he talking about? ...
It's very funny but if you listen to Nima's actual talk, you will see that he is aware of the phlogiston – because he explicitly discussed it and some other examples. It's very obvious that Nima simply considers the existing promising pictures such as split SUSY and/or the MSSM to be analogous to the theories that we already know to be successful (although they needed time, work, and tweaks to get fully mature – e.g. the atomic hypothesis or continental drift), and not to the likes of the phlogiston that have been refuted.

Nima also mentioned Ptolemy who "wasn't too far from wrong". Maybe he wanted to say "from right", maybe not. At any rate, I am sure he wanted to say that even the Copernican viewpoint may be viewed as a "twist" or "tweak" to the Ptolemaic astronomy and I surely agree with it. Epicycles are analogous to a parameterization of the Fourier series for the orbits (which is always possible assuming the periodicity) and Copernicus, Brahe, Kepler, and Newton gradually developed a framework to predict relationships between the Ptolemaic Fourier coefficients, while allowing the orbits to change from one year to another, and while encouraging a switch to a more modern, heliocentric system of coordinates. Copernicus and his followers faced huge troubles with the Church but that doesn't mean that physics itself started from a blank slate (the Church has harassed people because they were inconvenient for its religious framework, not because of precisely quantified differences in physical theories). Copernicus, Brahe, Kepler, and Newton didn't have to declare Ptolemy a "failed loser". Real physicists may always see how they built something on the shoulders of giants (thanks to Newton for these words) and I've also read essays by Einstein who painted his own "revolutionary" work as a twist on top of Newton's, Maxwell's, and Lorentz's work.
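The epicycle–Fourier correspondence can be made concrete: a closed orbit traced in the complex plane is a sum of uniformly rotating circles, and an ellipse needs exactly two of them. A small sketch (the ellipse axes 1.0 and 0.6 are arbitrary choices for illustration):

```python
import numpy as np

# Epicycles as a Fourier series: decompose a sampled periodic orbit z(t)
# into uniformly rotating circles c_k * exp(2 pi i k t) and rebuild it.

N = 256
t = np.arange(N) / N
z = 1.0 * np.cos(2 * np.pi * t) + 0.6j * np.sin(2 * np.pi * t)  # an ellipse

c = np.fft.fft(z) / N                  # epicycle amplitudes
k = np.fft.fftfreq(N, d=1 / N)         # integer epicycle frequencies
keep = (k == 1) | (k == -1)            # two counter-rotating circles
z_rec = np.array([(c[keep] * np.exp(2j * np.pi * k[keep] * ti)).sum()
                  for ti in t])

print(np.max(np.abs(z - z_rec)))  # ~ machine precision: two epicycles suffice
```

More complicated periodic orbits simply need more terms of the series, which is the sense in which adding epicycles can fit any periodic motion.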

The question is what are the legitimate analogies for the theories that (BSM) particle physicists work with today. No analogy is perfect and no one can even rigorously prove that some analogy is right. If you could find reliable analogies between cooking of a lunch and supersymmetric models, cooks would be enough to answer all important questions about supersymmetry – and they could join kooks who already think that they are doing so. ;-)

So you know, there is a Not Even Wrong happening involving "monster minds" such as Peter Woit, David Levitt, Bob, Sabine Hossenfelder, Ayloka, Marshall Eubanks, Quentin Ruyant, and RGT. (RGT probably mostly agrees with Nima, see the comments, sorry for being in this list.) All of them share the general point which leads Peter Woit to a conclusion:
There seems to be a consensus that Arkani-Hamed’s argument from history doesn’t hold up…
That would be a great lesson if important questions could be answered in this way. The only problem with this methodology is that a consensus between a bunch of brain-dead crackpots is uncorrelated with the truth in fundamental physics – and if the correlation coefficient is nonzero, its sign is negative. Why don't you focus all of your limited intellectual abilities, Frau and Gentlemen, and notice that you're just kooks whose opinions are completely worthless relative to Nima's?

Thank you in advance!

I just quoted from Woit's most recent comment as of now. But the last paragraph of his actual blog post above says:
If you had to pick the single most influential theorist out there on these issues, it would probably be Arkani-Hamed. This kind of refusal to face reality is I think a significant factor in what has caused Sabine Hossenfelder to go on her anti-new-collider campaign. While I disagree with her and would like to see a new collider project, the prospect of having to spend the decades of my golden years listening to the argument “we were always right about SUSY, it just needs a tweak, and we’ll see it at the FCC” is almost enough to make me change my mind…
Note that he has only "almost" changed his opinion about the FCC. Whether he "fully" changes it probably depends on whether he gets at least as nice a treatment as he received from the Inference journal.

At any rate, just think about the "logic" that led Woit to "almost change" his opinions about the FCC. Woit basically brags that his opinions about the FCC (and it's totally analogous in the case of theoretical physics) aren't determined by any arguments revolving around physics, its knowledge, or the collider itself. Instead, he would like to use the survival or cancellation of a collider (or a whole subfield of physics) as a tool to say "f*ck you" to Nima or someone else. Everything that Mr Woit has ever written has been driven by his desire for revenge and by a need to calm his inferiority complex. He is just a malicious man and I despise everyone who has some tolerance for him.

At least he could entertain us with the phrase about the "decades of his golden years". Even "minutes when he was more than a pile of waste" would be too much to ask.

by Luboš Motl at February 12, 2019 07:07 AM

February 11, 2019

Jon Butterworth - Life and Physics

What to focus on. Where to look for the science.
“Broken Symmetries” is an art exhibition at FACT in Liverpool. Spread over galleries on two levels, it provides an audio and visual immersion in a strange frontier of knowledge and its echoes and resonances in wider culture. The artwork is … Continue reading

by Jon Butterworth at February 11, 2019 10:08 AM

February 08, 2019

Emily Lakdawalla - The Planetary Society Blog

Looking Back at MU69
A crescent view of MU69 reveals its bizarre shape. Let's look at lots of other fun-shaped space crescents.

February 08, 2019 08:49 PM

February 07, 2019

Lubos Motl - string vacua and pheno

Can the FCC tunnel(s) become much thinner?
Are you a hardcore theorist who sometimes loves to play the game that he (or she, Ann and Anna) is a game-changing inventor dealing with the practical life issues and construction, nevertheless? I am and I do. ;-)

Electric cars with batteries suck because 1 kg of a battery only stores 2% of what 1 kg of petrol does. Recharging is slow and some of these parameters won't get much better. But why don't we add wires to all our highways and switch to personal trolleybuses everywhere? The electric cars could have batteries just for a few miles of being off the grid. What's your objection, grumpy reader? :-)

Why don't we fill the land with personal trolleybuses? No batteries, no refueling anymore. The Pilsner model above is only designed for speeds up to 65 kph but it could be improved, I guess.

Or why don't we have nuclear-powered aircraft? You can invent such ideas and Google search for them. You will usually find out that they have been discussed and that there are some usual problems which are immediately presented as fatal. For example, nuclear-powered airplanes suck because the passengers can't be adequately shielded against the radiation.

When I saw the proposals to build the next \(100\TeV\) collider at CERN, the FCC, I was impressed how surprisingly cheap the project is claimed to be (although it's not guaranteed that the final price wouldn't be much higher – it often is). Well, \(100\TeV\) is more than 7 times \(13\TeV\) but €21 billion is less than 5 times $5 billion, the price of the LHC, and it does include the new tunnel which the LHC inherited from LEP for free.

And those €21 billion are "cheaper euros" due to some 15 years of inflation – maybe by some 30% – than the "LHC euros". The cost only grows like the square root of the collision energy, it seems! Every person who has at least some relationship to science agrees that even €21 billion is peanuts for the most extreme and far-reaching science experiment that is being built just once in 20 years at the current speed.
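That square-root impression can be sanity-checked with the post's own round numbers (the ~30% inflation adjustment and the rounded prices are this post's rough figures, not official ones):

```python
import math

# Rough figures quoted in the post; the 1.3 inflation factor is the post's own guess
lhc_energy_tev = 13.0
fcc_energy_tev = 100.0
lhc_cost_beur = 5.0 * 1.3    # LHC price restated in ~2019 "FCC euros"
fcc_cost_beur = 21.0

energy_ratio = fcc_energy_tev / lhc_energy_tev   # ~7.7x jump in collision energy
cost_ratio = fcc_cost_beur / lhc_cost_beur       # ~3.2x jump in (inflation-adjusted) cost
sqrt_scaling = math.sqrt(energy_ratio)           # ~2.8 -- indeed close to the cost ratio
```

So the cost ratio sits much closer to the square root of the energy ratio than to the energy ratio itself, which is all the claim amounts to.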

However, in the decomposition of the expenses to the basic parts, I was rather annoyed by the expensive tunnels. Well, those €21 billion are composed of:
€5 billion for the 100-kilometer-long tunnel,
€4 billion for the lepton collider magnets etc.,
€12 billion for the later upgrade, hadron collider magnets.
I think it doesn't make sense to be more precise than that because the final numbers can't be estimated too accurately. Great. €5 billion for the tunnels looks like a lot. The percentage of the price that is consumed by the tunnel, the most low-brow part of the project, seems to be going up.

In a calculation in my article about Musk's proposed discount (which is ludicrous because his Boring Company is doing the same as competitors), I saw that by the volume and the proportionality law, using the previous colliders, the new tunnel should only cost €2 billion, not €5 billion. But a part of the increase is explained by inflation. A part may be due to a somewhat thicker tunnel. And the boring costs may grow faster than the general inflation, who knows. Maybe the rocks in the FCC area are less friendly, too.

To get higher collision energies, you need a greater curvature radius of the tunnels to keep the particles in the pipes – well, except for the magnets' getting stronger but the improvements have their limits. That implies that the tunnel has to be long. But it could arguably be thinner and therefore cheaper because the boring costs are almost proportional to the volume of the rock (and therefore to the cross section area, assuming a fixed length of the tunnel).
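The tradeoff between magnets and tunnel length can be made concrete with the standard bending relation for an ultrarelativistic proton, \(p\,[{\rm GeV}/c] \approx 0.3\, B\,[{\rm T}]\, \rho\,[{\rm m}]\); the bending radii below are rough illustrative guesses (the effective bending radius is smaller than circumference\(/2\pi\) because of straight sections):

```python
def proton_beam_energy_tev(b_tesla: float, bending_radius_m: float) -> float:
    """Maximum proton beam energy from the bending relation p[GeV/c] ~ 0.3 * B[T] * rho[m]."""
    return 0.3 * b_tesla * bending_radius_m / 1000.0

# LHC: 8.33 T dipoles, ~2.8 km bending radius -> ~7 TeV per beam, i.e. 14 TeV collisions
lhc_beam = proton_beam_energy_tev(8.33, 2800.0)

# FCC-hh sketch: 16 T dipoles, assumed ~10.4 km effective bending radius
# -> ~50 TeV per beam, i.e. ~100 TeV collisions
fcc_beam = proton_beam_energy_tev(16.0, 10400.0)
```

Doubling the field only buys a factor of two; the rest of the factor of ~7 in energy has to come from the radius, hence the 100-kilometer circumference.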

The cross section of the FCC tunnel is said to be 23.76 square meters. By saying it equals \(\pi d^2/4\), you will get the diameter \(d=5.5\,{\rm meters}\). Wow, that's a pretty thick tunnel, indeed. Is that really needed?
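The quoted diameter follows directly from inverting the area formula:

```python
import math

area_m2 = 23.76                                  # quoted FCC tunnel cross section
diameter_m = math.sqrt(4.0 * area_m2 / math.pi)  # invert A = pi * d^2 / 4
# diameter_m comes out at about 5.5 meters
```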

Why wouldn't I ask the people behind the FCC? The key people behind a €21 billion project surely don't have anything better to do than to chat with the laymen on Twitter – and I was right. ;-) So I asked:

The hyperlink goes to the Wikipedia page about "microtunneling".
The answer arrived almost immediately.

I did suspect that the usual excuses why they can't get below 24 square meters would be immediately thrown at me. But I think that the FCC folks hopefully do suspect that I won't give up this easily! ;-)

The FCC proponents weren't careless, of course:

And, to make things worse:

We mustn't forget:


And some extra niceties with an offer to explain things by the e-mail.

OK, there are clearly some extra "veins" that go through the tunnel, on top of the 1.2-meter-in-diameter cylinder with magnets. But this is a €5 billion tunnel – it might be a good idea to save some of those 23.76 square meters in the cross section, to miniaturize things a little bit, right?

We need the main pipe with the particles and magnets; cryogenic lines with another meter of space in diameter; space through which the magnet is transported during installation (to avoid "LIFO" deconstruction of the whole collider during repairs); cables and cooling pipes plus a space for a person to get there. I omitted the extra comments unrelated to the content of the large intestine.

What do you think my reaction should be?

I did know that there are things on top of the main tube, of course, and one doesn't want to deconstruct the whole collider during repairs. But I think that several thin tunnels could replace the extremely thick one. Let's count the square meters that we really need.

The main cylinder with the magnets and particles in the middle could be 1 square meter. These magnets could be transported there through another thin tunnel which is another 1 square meter, and these tunnels (and all other tunnels in the plan below) could be fully connected e.g. along one hundred 50-meter-long segments, one per kilometer of the circumference. On each kilometer, all the magnets in the row would be taken out if one of them had to be repaired.

Another 1 square meter is the cryogenic line, another 1 square meter is some wires and extra cooling, and 1 additional square meter is enough for a CERN employee to physically get there. If Elon Musk kindly allowed, the employee would be a British diver who is not a pedo and doesn't suffer from claustrophobia. He could easily climb through a tube of diameter 1 meter. Every 100 meters, there would be some small holes between all the small tunnels so that he could inspect or fix the mechanisms that allow things to be moved between all the tunnels on each 1-kilometer segment.

Just by eye-balling, don't you agree that at least one-half of the area of the disk-shaped cross section is wasted?

I tried to be tough and reduce the total cross section from 24 to 5 square meters. I am surely gonna be told that it's too ambitious and impossible. Maybe some merger into two tunnels of the diameter of 2 meters could be better. Maybe we could get to 12 square meters in total. But the price of the tunnel – now tunnels – could still drop by one-half or two billion Euros, I think.
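A sketch of the possible savings under the post's back-of-the-envelope rule that boring cost is roughly proportional to the cross-section area (all inputs are the rough figures above, not engineering estimates):

```python
FULL_AREA_M2 = 23.76     # current single-tunnel design
TUNNEL_COST_BEUR = 5.0   # quoted price of the full-size tunnel, billion EUR

def scaled_tunnel_cost(total_area_m2: float) -> float:
    """Tunnel price if boring cost were proportional to total cross-section area."""
    return TUNNEL_COST_BEUR * total_area_m2 / FULL_AREA_M2

aggressive = scaled_tunnel_cost(5.0)   # five 1 m^2 micro-tunnels: ~1.1 bn EUR
moderate = scaled_tunnel_cost(12.0)    # the fallback 12 m^2 plan: ~2.5 bn EUR
```

Even the moderate scenario halves the tunnel price under this proportionality assumption, which is the "one-half or two billion Euros" mentioned above.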

Some optimization should be tried. It's a lot of money.

One of the arguments we sometimes quote among the "secondary" benefits of the collider projects is that they encourage progress in lots of the technologies that are needed to build such a huge device. We usually mean the superconducting magnets and other "hi-tech" components. But what about the damn tunnels? They're a century-old technology but some "clever tunnels for the 21st century" which minimize the cross section and allow all things to get to the right places thanks to some clever enough logistics should be a part of the "secondary progress" ignited by the CERN projects.

The kind FCC folks surely feel uneasy about such proposed revisions. But I do think that they should move their aß and try to do some clever optimization of the tunnels' infrastructure because the thickness of the tunnels looks wasteful – for a 100-kilometer-long tunnel whose space isn't really enjoyed by the human inhabitants – and sort of "outdated", if you appreciate that "miniaturization" is one of the trends of the relatively modern progress. Maybe as soon as they make the lepton collider €3 billion or 30% cheaper, impressed sponsors will immediately approve the project and the serious work may begin.

I also suspect that the dipole magnets themselves and many other things could be thinner than they are as well but I leave this related topic to someone else.

And I must add a medium-term shiny accelerator physics vision: the tunnels need to get longer to achieve higher collision energies but there could be an ongoing miniaturization in the thickness of all the tubes, the cross section could keep on shrinking, and the volume of all the tunnels and magnets and therefore the price could stay fixed as the people build ever stronger colliders!

Off-topic but European and geographically close to the topic: Although Macron has ludicrously declared himself to be one of the Yellow Vests, there had to be some reasons why he didn't like the Italian deputy prime minister's meeting with one of his (Macron's) bosses, leaders of the Yellow Vest movement.

So France has recalled its ambassador to Rome. Clearly, after decades of taking credit for the peace on our continent, the European Union isn't helpful in calming the passions. The video above compares the French and Italian forces in the looming Romance war.

The foes are tied in many respects, in others they are imbalanced and France has a slightly higher number of advantages but the result could be uncertain for a long time, especially because France has a big disadvantage of greater internal disagreements right now, I think.

by Luboš Motl at February 07, 2019 03:13 PM

Axel Maas - Looking Inside the Standard Model

Why there won't be warp travel in times of global crises
One of the questions I get most often at outreach events is: "What about warp travel?", or some other wording for faster-than-light travel – something which would make interstellar travel possible, or at least viable.

Well, the first thing I can say is that there is nothing which excludes it. Of course, within our well established theories of the world it is not possible. Neither the standard model of particle physics, nor general relativity, when constrained to the matter we know of, allows it. Thus, whatever describes warp travel, it needs to be a theory, which encompasses and enlarges what we know. Can a quantized combination of general relativity and particle physics do this? Perhaps, perhaps not. Many people think about it really hard. Mostly, we run afoul of causality when trying.

But these are theoretical ideas. And even if some clever team comes up with a theory which allows warp travel, this does not mean that this theory is actually realized in nature. Just because we can make it mathematically consistent does not guarantee that it is realized. In fact, we have many, many more mathematically consistent theories than are realized in nature. Thus, it is not enough to just construct a theory of warp travel. Which, as noted, we have so far failed to do.

No, what we need is to figure out whether it really happens in nature. So far, this has not happened. Neither have we observed it in any human-made experiment, nor do we have any observation in nature which unambiguously points to it. And this is what makes it really hard.

You see, the universe is a tremendous place, unbelievably large, and essentially three times as old as the whole planet Earth. Not to mention humanity. Extremely powerful events happen out there. They range from quasars, effectively a whole galactic core on fire, to black hole collisions and supernovas. These events put out an enormous amount of energy. Much, much more than even our sun generates. Hence, anything short of a big bang is happening all the time in the universe. And we see the results. The earth is hit constantly by particles with much, much higher energies than we can produce in any experiment. And this has been so since Earth came into being. Incidentally, this also tells us that nothing we can do at a particle accelerator can really be dangerous. Whatever we do there has happened so often in our Earth's atmosphere that it would have killed this planet long before humanity entered the scene. The only bad thing about it: we never know when and where such an event happens. And the rate is also not that high; it is only that Earth has existed for so very long. And is big. Hence, we cannot use this to make controlled observations.

Thus, whatever could happen, happens out there. In the universe. We see some things out there which we cannot explain yet, e.g. dark matter. But by and large a lot works as expected. Especially, we do not see anything which begs for warp travel as an explanation. Or anything else remotely suggesting something happening faster than the speed of light. Hence, if something like faster-than-light travel is possible, it is neither common nor easily achieved.

As noted, this does not mean it is impossible. Only that if it is possible, it is very, very hard. Especially, this means it will be very, very hard to make an experiment to demonstrate the phenomenon. Much less to actually make it a technology, rather than a curiosity. This means, a lot of effort will be necessary to get to see it, if it is really possible.

What is a lot? Well, CERN is a bit. But human, or even robotic, space exploration is an entirely different category, some one to two orders of magnitude more. Probably, we would need to combine such space exploration with particle physics to really get to it. Possibly the best example of such an endeavor is the future LISA project to measure gravitational waves in space. It is perhaps even our current best bet to observe any hints of faster-than-light phenomena, aside from bigger particle physics experiments on earth.

Do we have the technology for such a project? Yes, we do. We have had it for roughly a decade. But it will likely take at least one more decade to have LISA flying. Why not now? Resources. Or, often put equivalently, costs.

And here comes the catch. I said it is our best chance. But this does not mean it is a good chance. In fact, even if faster-than-light travel is possible, I would be very surprised if we saw it with this mission. There are probably a few more generations of technology, and another order of magnitude of resources, needed before we could see something, given what I know about how well everything currently fits. Of course, there can always be surprises with every little step further. I am sure we will discover something interesting, possibly spectacular, with LISA. But I would not bet anything valuable that it will have to do with warp travel.

So, you see, we have to scale up if we want to go to the stars. This means investing resources. A lot of them. But resources are needed to fix things on earth as well. And the more we damage, the more we need to fix, and the less we have to get to the stars. Right now, humanity moves into a state of perpetual crisis. The damage wrought by the climate crisis will require enormous efforts to mitigate, much more to stop the downhill trajectory. As a consequence of the climate crisis, as well as social inequality, more and more conflicts will create further damage. Isolationism, both national and social, driven by fear of the oncoming crises, will also soak up tremendous amounts of resources. And a hostile environment towards diversity, together with putting individual gains above common gains, creates a climate which is hostile to anything new and different in general, and to science in particular. Hence, we will not be able to use our resources, or the ingenuity of the human species as a whole, to get to the stars.

Thus, I am not hopeful to see faster-than-light travel in my lifetime, or that of the next generation. Such a challenge, if it is possible at all, will require a common effort of our species. That would be truly one worthy endeavour to put our minds to. But right now, as a scientist, I am much more occupied with protecting a world in which science is possible, both metaphorically and literally.

But, there is always hope. If we rise up and decide to change fundamentally. When we put the well-being of us all first. Then I would be optimistic that we can get out there. Well, at least as fast as nature permits. However fast that may be.

by Axel Maas at February 07, 2019 09:17 AM

John Baez - Azimuth

Applied Category Theory 2019

I hope to see you at this conference, which will occur right before the associated school meets in Oxford:

Applied Category Theory 2019, July 15-19, 2019, Oxford, UK.

Applied category theory is a topic of interest for a growing community of researchers, interested in studying systems of all sorts using category-theoretic tools. These systems are found in the natural sciences and social sciences, as well as in computer science, linguistics, and engineering. The background and experience of our members is as varied as the systems being studied. The goal of the ACT2019 Conference is to bring the majority of researchers in the field together and provide a platform for exposing the progress in the area. Both original research papers as well as extended abstracts of work submitted/accepted/published elsewhere will be considered.

There will be best paper award(s) and selected contributions will be awarded extended keynote slots.

The conference will include a business showcase and tutorials, and there also will be an adjoint school, the following week (see webpage).

Important dates

Submission of contributed papers: 3 May
Acceptance/Rejection notification: 7 June


Prospective speakers are invited to submit one (or more) of the following:

• Original contributions of high quality work consisting of a 5-12 page extended abstract that provides sufficient evidence of results of genuine interest and enough detail to allow the program committee to assess the merits of the work. Submissions of works in progress are encouraged but must be more substantial than a research proposal.

• Extended abstracts describing high quality work submitted/published elsewhere will also be considered, provided the work is recent and relevant to the conference. These consist of a maximum 3 page description and should include a link to a separate published paper or preprint.

The conference proceedings will be published in a dedicated Proceedings issue of the new Compositionality journal:

Only original contributions are eligible to be published in the proceedings.

Submissions should be prepared using LaTeX, and must be submitted in PDF format. Use of the Compositionality style is encouraged. Submission is done via EasyChair:

Program chairs

John Baez (U.C. Riverside)
Bob Coecke (University of Oxford)

Program committee

Bob Coecke (chair)
John Baez (chair)
Christina Vasilakopoulou
David Moore
Josh Tan
Stefano Gogioso
Brendan Fong
Steve Lack
Simona Paoli
Joachim Kock
Kathryn Hess Bellwald
Tobias Fritz
David I. Spivak
Ross Duncan
Dan Ghica
Valeria de Paiva
Jeremy Gibbons
Samuel Mimram
Aleks Kissinger
Jamie Vicary
Martha Lewis
Nick Gurski
Dusko Pavlovic
Chris Heunen
Corina Cirstea
Helle Hvid Hansen
Dan Marsden
Simon Willerton
Pawel Sobocinski
Dominic Horsman
Nina Otter
Miriam Backens

Steering committee

John Baez (U.C. Riverside)
Bob Coecke (University of Oxford)
David Spivak (M.I.T.)
Christina Vasilakopoulou (U.C. Riverside)

by John Baez at February 07, 2019 07:35 AM

February 06, 2019

Clifford V. Johnson - Asymptotia

At the Perimeter

In case you were putting the kettle on to make tea for watching the live cast.... Or putting on your boots to head out to see it in person, my public talk at the Perimeter Institute has been postponed to tomorrow! It'll be just as graphic! Here's a link to the event's details.

-cvj Click to continue reading this post

The post At the Perimeter appeared first on Asymptotia.

by Clifford at February 06, 2019 11:16 PM

Lubos Motl - string vacua and pheno

"End of high energy physics" is silly
The newest anti-collider tirade at Backreaction, Why a larger particle collider is not currently a good investment, begins by saying that the negative statement is an uncontroversial position.

Well, as Ms Hossenfelder could have learned at Twitter where she has debated these issues with real particle physicists, her remarks are controversial, to say the least. It's much less controversial to say that she doesn't have a clue what she is talking about. Let me elaborate on this statement in some detail.

The Livingston Plot, via K. Yokoya.

High energy physics was a new name given to particle (or subnuclear) physics because the plan has been from the beginning to indefinitely raise the collision energy – and therefore the ability of the experiments to probe ever shorter distances (short distances are tied to high momenta/energies by the uncertainty principle). The rate of progress may slow down but it has always been clear that the progress could continue basically indefinitely.

In the first part of her new text, she makes it clear that she was looking for some "allies" who have questioned the future of particle accelerators just like she does. So she found a 2001 text in Physics Today by Maury Tigner, Does Accelerator-Based Particle Physics Have a Future?

Now, Tigner had been a big "design group" boss of the cancelled collider in Texas, the SSC. So what do you think was his answer to the question in his own article? Pretend that your IQ is above 70 if it is not and try to answer this question: Was Tigner, a collider boss, an anti-collider activist similar to Ms Hossenfelder?

Just to be sure, because there may be readers with the IQ below 70, I have to give a short answer to this "difficult" question: No, he wasn't.

In the very first paragraph, Ms Hossenfelder makes an extraordinary statement:
That the costs of larger particle colliders would at some point become economically prohibitive has been known for a long time. Even particle physicists could predict this.
As I have said, this statement is completely ludicrous. No physicist – and no person with technical thinking at least at the high school level – has ever stated that "larger particle colliders would become economically prohibitive" at some point of time. The economy is generally growing, the technologies are generally improving, so of course we may keep on building ever stronger particle colliders and that has always been the plan – that's why the field is called "high energy physics".

Maybe Sabine Hossenfelder, John Horgan, Uncle Al, and a bunch of similar "physicists" were saying something else to each other but actual physicists haven't. Of course there is no "end of physics".

She may have misunderstood the statement that some collision energy chosen on the log scale in between the LHC and Planck scale would be impossible to realize on Earth. Some energies such as \(10^{10}\GeV\) could be economically prohibitive and almost impossible on Earth. There's some order-of-magnitude estimate of the collision energy beyond which we can't realistically get on Earth. But if you translate this "cutoff" to the moment of time or the year when particle physics should "hit the wall", you will surely not get a moment in the next 100 or 1,000 years. There is no reason for high energy physics to stop in the next millennium.

Even at the sociological level, such an "end of experimental particle physics" is as silly as the "end of sports" or "end of Olympic Games" or "end of Formula One" or "end of Miss USA" (OK, the latter has mostly occurred when the exhibitions in bikinis were replaced with contestants' left-wing political monologues). Athletes' performance is improving at a slower rate than it did in the past but it doesn't mean that we must abolish sports, does it? What's the fudging difference? Even if the rate of improvements slowed down incredibly, it would make sense to build new colliders. Although sports have been pointless for a long time, or always, some people still do similar stupid things. ;-)

And other, smarter people want to do particle physics. You can dislike baseball (I don't even like it enough to hate it LOL) but you won't prevent other people from playing or watching it. Similarly, Ms Hossenfelder may dislike particle physics but she's just a petty woman who makes Germany suck again and who can't prevent others, especially people from different nations and in a few years, from doing experimental particle physics.

Hasn't Maury Tigner, Hossenfelder's "source number one", written his own 2001 reaction to her 2019 statement about the "end of particle physics at some point" that is "well known" and "predicted even by particle physicists"? Well, he has. This paragraph was fully focused on that claim:
The falloff in the energy frontier’s rate of advance might inspire the reader to ask whether we are approaching some inherent physical limit to the capability of accelerators, or perhaps some other limit. The answer is complex, but one thing is clear: We are not approaching a technical limit to the energies that can be achieved in the laboratory.
I added the bold face because Tigner, like any competent particle physicist, knows that there is no nearby limit. Larger tunnels and/or stronger magnets translate to higher energy collisions and the current colliders are extremely far from a limit, at least in the length of the tunnels – let's say that Earth's radius could be such a limit, assuming that people won't build the colliders in outer space, which they should.

Instead, Tigner – who clearly felt some responsibility for their failure to convince the U.S. Congress (and suggested that he would have been capable of "selling" a $1 billion experiment) – offers a detailed discussion about the rate of various prices. Some parts of the gadgets were getting cheaper extremely quickly, e.g. the superconducting wires, others were not. But let me post the Livingston Plot again:

Hossenfelder seems to use this plot as some kind of an argument in favor of her and Horgan's "end of science" delusions and she even wrote:
You can clearly see that the golden years of particle accelerators ended around 1990.
But only people with a severe enough eye disorder or with a brain disease may "clearly see" such a non-existent thing in the plot. Others see that the golden years are always in the future because the collision energy keeps on increasing. What she probably wanted to say is that the rate at which the collision energy was increasing per decade decreased after 1990 or so. But does it mean that "golden years of particle accelerators ended in 1990"?

This statement is exactly as true – or as false – as the statement
The golden years of the European and U.S. economy ended at the end of the 19th century.
Why? Because the average annual GDP growth was around 10% a year in the final decades of the 19th century. And we only expect some 3% today. Does it mean that the golden years of the economy stopped over a century ago? Well, if you define "golden years" as those with the highest annual growth, then yes. But no sane people do. The economy continued to grow after 1900 which is why it's just plain silly to say that the golden years of the economy occurred before 1900.
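The compounding arithmetic behind this point is trivial to check (the 10% and 3% figures are the rough growth rates quoted above):

```python
# Per-decade growth factors: slower growth is still compounding growth
golden_era_decade = 1.10 ** 10   # ~2.6x per decade at 10% a year
today_decade = 1.03 ** 10        # ~1.34x per decade at 3% a year -- slower, but still up
```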

(By the way, we can debate what is behind the "disappointing" slowdown after 1900 or so. The low-hanging fruits of industrialization had been picked by 1900 or so – but I still think that the overregulation and overtaxation of the 20th and 21st century was more harmful. But I digress.)

It is even much more silly to say that the economy should have stopped producing things in 1900. And this is the actual perfect analogy of Hossenfelder's plan to give up on particle colliders. It's utterly uncontroversial that she has no idea what she is talking about.

The decadal rate of increase of the accelerator energy dropped around 1990 but the energy kept on rising and indeed, you can see that the Livingston Plot also includes a projection to the future colliders where the energy keeps on growing. The collision energy jumped by one order of magnitude every 10 years before 1990 – and the time needed for a 10-fold increase is closer to 20-30 years after 1990 (and it may be 200 years around 2300 AD). It's totally analogous to the slowdown of GDP growth from 10% in the 19th century to 3% today.
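In numbers: the LHC-to-FCC step is less than one order of magnitude, so even the slowed-down pace covers it in a couple of decades (the ~25 years per order of magnitude below is an illustrative middle of the quoted 20-30 year range):

```python
import math

orders_lhc_to_fcc = math.log10(100.0 / 13.0)     # ~0.89 orders of magnitude, 13 -> 100 TeV

years_pre_1990_pace = orders_lhc_to_fcc * 10.0   # 10x per decade: ~9 years
years_post_1990_pace = orders_lhc_to_fcc * 25.0  # 10x per ~25 years: ~22 years
```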

But the GDP and the collision energy have no reason to stop growing.

In the following ten paragraphs, she repeats the mostly untrue statement that "colliders are damn expensive" several times while she adds some irrelevant details that have nothing to do with her basic wrong claims. At some moment, she gets to a comparison to LIGO:
Compare the expenses for CERN’s FCC plans to that of the gravitational wave interferometer LIGO. LIGO’s price tag was well below a billion US$. Still, in 1991, physicists hotly debated whether it was worth the money.
I love LIGO – I have rediscovered the gravitational waves in the raw LIGO data myself, did lots of analyses, recommended the Nobel prize for the exact 3 men who really got it later, and so on. But it was still sensible to debate whether the gadget was worth almost one billion dollars because
LIGO didn't and basically couldn't discover any new fundamental physics.
LIGO detected something that is absolutely unavoidable given the general theory of relativity – even at the level at which the theory was almost perfectly understood (by the competent theorists – I don't mean the general public). So LIGO gave us the ability to "hear" particular astrophysical events – black hole mergers and neutron star mergers so far – which means it is giving us some new data about astrophysics and perhaps "cosmology close to astrophysics". But it is not producing new data about fundamental physics – and the chance that LIGO could have done so was virtually zero.

In that sense, it dramatically differs from the LHC (or the next colliders), which was (or will be) probing a so-far-untested energy regime of particle physics. Every physicist understands that her suggestion that colliders are worse than LIGO is absolutely irrational. Here is a CERN response:

Right. In her stupidity that she has enthusiastically exposed in The New York Times, she basically explicitly wrote that LIGO was nice because there was a firm prediction, the gravitational waves, and LIGO got it. On the other hand, the LHC was bad because it discovered the firmly predicted Higgs boson.

What she writes doesn't make any sense. It's nice to confirm firm predictions, but if we are really certain about a prediction, then the experiment is pointless. In the case of the LHC and the Higgs boson, we got more information about fundamental physics than in the case of LIGO and gravitational waves: we have learned that the Higgs mass is about \(125\GeV\). The mass was previously unknown. We haven't learned any parameter of fundamental physics from LIGO.

Maybe her obsession with "firm predictions confirmed by experiments" is enough at school, where schoolkids learn lots of things that have been known for a very long time and where schoolgirls are more likely to be praised by their teacher for being "right" and obedient. But scientific research is something other than elementary school, and the repetition and confirmation of scientific findings that have been known for a long time isn't enough in research!

After numerous additional boring paragraphs full of arrogance, stupidity, and irrelevant technicalities, she wraps up with the final paragraph which starts as follows:
Of course, particle physicists do have a large number of predictions for new particles within the reach of the next larger collider, but these are really fabricated for no other purpose than to rule them out. You cannot trust them. [...]
The only problem with this dumb attack against particle physicists or their work is that it logically cannot influence the benefits of a new collider. The reason is that the scientific benefits of a new collider don't depend on the trustworthiness of the predictions at all.

In fact, the very purpose of the experiment – and basically any experiment in science – is to empirically evaluate the validity of all relevant predictions. The fundamental point about science that this lady still completely misunderstands is that
experiments are not being built in order to confirm firm and guaranteed predictions, to show how trustworthy theorists or their celebrated theories are. Instead, experiments are being built to give us previously unknown or uncertain information and decide which expectations were true and which were not.
The incomplete trustworthiness of predictions not only isn't "fatal" for a meaningful experiment. It is a necessary condition for a meaningful experiment!

Because the \(100\TeV\) collider is going to tell physicists what happens in that new energy range, whatever it is, we may even say that the scientific benefits of the collider are completely time-independent. So as far as the benefits go, the word "currently" in her title (the collider isn't a good investment) is completely irrational, because the benefits for mankind of probing that energy regime won't change if we delay the experiment by a century (well, unless all people really turn into stupid apes, in which case the perceived benefits may drop). However, the benefits of a \(100\TeV\) collider built in 2150 AD will be zero for the currently living physicists because at that time, they will be dead. We may include this preference for an earlier collider in a discount rate. It's competing against the dropping expenses: if the expenses drop less quickly than the discount rate, we should build as soon as possible! The previous sentence is an example of an actual rational argument affecting the cost-and-benefit analysis, something that Hossenfelder pretends to do but never does.
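To make that argument concrete, here is a toy present-value calculation – the numbers are entirely my own illustration, not anyone's real estimate. Assume a fixed scientific benefit \(B\), an initial cost \(C_0\) that declines by a fraction \(g\) per year, and a discount rate \(r\); the project started in year \(t\) then has present value \((B - C_0(1-g)^t)/(1+r)^t\), which is maximized at \(t=0\) whenever costs fall more slowly than the discount rate.

```python
# Toy "build now vs. build later" sketch with made-up numbers (my
# illustration): PV(t) = (B - C0*(1-g)**t) / (1+r)**t.

def present_value(t, benefit=3.0, cost0=1.0, g=0.01, r=0.03):
    """Present value (arbitrary units) of starting the project in year t."""
    cost_t = cost0 * (1 - g) ** t      # expenses shrink slowly (g < r here)
    return (benefit - cost_t) / (1 + r) ** t

pv_now = present_value(0)
pv_later = present_value(50)
print(pv_now, pv_later)   # building now wins because g < r
```

With these assumed numbers, delaying by 50 years cuts the present value by roughly a factor of four, even though the collider itself got cheaper.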

I really find it amazing that an adult woman who has pretended to be a scientist for very many years simply doesn't get this elementary universal point about all of science – that experiments are only meaningful if and because they reduce ignorance or uncertainty.

The very last sentences say:
[...] You cannot trust them. When they tell you that a next larger collider may see supersymmetry or extra dimensions or dark matter, keep in mind they told you the same thing 20 years ago.
And it's quite correct that particle physicists are making qualitatively identical statements about supersymmetry as they did 20 years ago – because nothing qualitative has changed in our knowledge about supersymmetry in the real world around us since that time! There are good reasons to think that supersymmetry exists in Nature – and near certainty that the superpartners don't have masses in the range of energies that have already been measured.

So, indeed, what should raise red flags instead would be if physicists were saying something completely and qualitatively different than 20 years ago, because that qualitative change would be indefensible!

The broad situation of particle physics hasn't changed – and there are certain truly universal principles about high energy physics that haven't changed in the recent 80 years and won't change in the next 80 years, either. In particular, a more advanced civilization is capable of building ever stronger colliders that can see increasingly massive new particles and resolve ever shorter distances, and most of the general hypotheses that have been neither proven nor falsified yet remain in a state of uncertainty. The fewer new discoveries are made each decade (the Higgs boson was discovered less than 7 years ago, just to be sure), the less quickly the wisdom in physics – and the physicists' commentaries – is changing.

It's a sad testimony to our politically correct epoch that a person who is incapable of understanding these "almost tautologies" is allowed to share her delusions in The New York Times and similar "publications".

P.S.: I realized I forgot to discuss her comment about "alternatives" like the precise electron/muon magnetic moment measurements etc.

Those are indeed cheaper and great, but they don't replace the high-energy frontier; they are complementary. An obvious limitation of an anomaly in the magnetic moment that may be found (and that was already found, in the muon case) is that there is no way to attribute the discrepancy to a particular physical effect. It's just a number – either the right or the wrong number – but it can't tell us any interesting details about the causes.

More generally, she and others sometimes say "it's right to divide the FCC money among hundreds of [unnamed] experiments". To spread billions of dollars over unnamed experiments means not to care where the money goes – it's a recipe for wasting the money. In the end, I think that some people's tendency to "redistribute" or "decentralize" the money is just another example of their Marxist egalitarianism.

Egalitarianism of the communist type is ruinous for economies – and its analogy may be equally devastating for science. Hundreds of such small experiments could be all but guaranteed to be worthless, and their "principal investigators" could easily hide rubbish and unoriginal repetitiveness behind the shortage of scrutiny – because when the money is spread over lots of places, the scrutiny of each goes down considerably.

Small experiments may do interesting things, but unless there are some overlooked light axions or something weakly coupled in the available energy range, we may be nearly certain that none of these cheap experiments will find anything really and qualitatively new, because, if I oversimplify just a little bit, we simply do know all the physics beneath \(1\TeV\). Those are good reasons to think that the money for smaller experiments is much more likely to be wasted than the money for an experiment that actually pushes the energy frontier further.

Competent physicists in these fields have simply thought about the question, and they can explain why they consider the investment in a higher-energy collider to be better than the investment in the known, named alternatives. You can be pretty sure that it's also better than unnamed random projects that someone proposes (and that haven't been scrutinized at all).

by Luboš Motl ( at February 06, 2019 08:05 PM

February 05, 2019

John Baez - Azimuth

Fermat Primes and Pascal’s Triangle

If you take the entries of Pascal’s triangle mod 2 and draw black for 1 and white for 0, you get a pleasing pattern:

The 2^nth row consists of all 1’s. If you look at the triangle consisting of the first 2^n rows, and take the limit as n \to \infty, you get a fractal called the Sierpinski gasket. This can also be formed by repeatedly cutting triangular holes out of an equilateral triangle:

Something nice happens if you interpret the rows of Pascal’s triangle mod 2 as numbers written in binary:

1 = 1
11 = 3
101 = 5
1111 = 15
10001 = 17
110011 = 51
1010101 = 85
11111111 = 255
100000001 = 257
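As an aside (my own sketch, not from the post): reducing Pascal's addition rule mod 2 turns it into XOR, so each row, read as a binary number r, is obtained from the previous one by r → r XOR (r << 1).

```python
# Generate rows of Pascal's triangle mod 2 as binary integers.
# Adding a row to its shifted copy mod 2 is exactly bitwise XOR.

def pascal_mod2_rows(n):
    """Return the first n rows of Pascal's triangle mod 2 as integers."""
    rows, r = [], 1
    for _ in range(n):
        rows.append(r)
        r ^= r << 1            # next row: sum of adjacent entries mod 2
    return rows

print(pascal_mod2_rows(9))     # [1, 3, 5, 15, 17, 51, 85, 255, 257]
```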

Notice that some of these rows consist of two 1’s separated by a string of 0’s. These give the famous ‘Fermat numbers’:

11 = 3 = 2^{2^0} + 1
101 = 5 = 2^{2^1} + 1
10001 = 17 = 2^{2^2} + 1
100000001 = 257 = 2^{2^3} + 1
10000000000000001 = 65537 = 2^{2^4} + 1

The numbers listed above are all prime. Based on this evidence Fermat conjectured that all numbers of the form 2^{2^n} + 1 are prime. But Euler crushed this dream by showing that the next Fermat number, 2^{2^5} + 1, is not prime.

Indeed, even today, no other Fermat numbers are known to be prime! People have checked all of them up to 2^{2^{32}} + 1. They’ve even checked a few bigger ones, the largest being

2^{2^{3329780}} + 1

which turns out to be divisible by

193 \times 2^{3329782} + 1

Here are some much easier challenges:

Puzzle 1. Show that every row of Pascal’s triangle mod 2 corresponds to a product of distinct Fermat numbers:

1 = 1
11 = 3
101 = 5
1111 = 15 = 3 × 5
10001 = 17
110011 = 51 = 3 × 17
1010101 = 85 = 5 × 17
11111111 = 255 = 3 × 5 × 17
100000001 = 257

and so on. Also show that every product of distinct Fermat numbers corresponds to a row of Pascal’s triangle mod 2. What is the pattern?

By the way: the first row, 1, corresponds to the empty product.

Puzzle 2. Show that the product of the first n Fermat numbers is 2 less than the next Fermat number:

3 + 2 = 5
3 × 5 + 2 = 17
3 × 5 × 17 + 2 = 257
3 × 5 × 17 × 257 + 2 = 65537

and so on.
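Puzzle 2 is easy to check numerically; here is a quick sketch (mine, not from the post):

```python
from math import prod

# Fermat numbers F_n = 2^(2^n) + 1.
def fermat(n):
    return 2 ** (2 ** n) + 1

# Puzzle 2: the product of the first n Fermat numbers is 2 less than F_n.
for n in range(1, 6):
    assert prod(fermat(k) for k in range(n)) + 2 == fermat(n)
print("F_0 * ... * F_{n-1} + 2 == F_n holds for n = 1..5")
```

(The identity also follows by induction: multiplying \(F_0\cdots F_{n-1} = F_n - 2 = 2^{2^n}-1\) by \(F_n = 2^{2^n}+1\) gives \(2^{2^{n+1}}-1 = F_{n+1}-2\).)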

Now, Gauss showed that we can construct a regular n-gon using straight-edge and compass if n is a prime Fermat number. Wantzel went further and showed that if n is odd, we can construct a regular n-gon using straight-edge and compass if and only if n is a product of distinct Fermat primes.

We can construct other regular polygons from these by repeatedly bisecting the angles. And it turns out that’s all:

Gauss–Wantzel Theorem. We can construct a regular n-gon using straight-edge and compass if and only if n is a power of 2 times a product of distinct Fermat primes.

There are only 5 known Fermat primes: 3, 5, 17, 257 and 65537. So, our options for constructing regular polygons with an odd number of sides are extremely limited! There are only 2^5 = 32 options, if we include the regular 1-gon.

Puzzle 3. What is a regular 1-gon? What is a regular 2-gon?

And, as noted in The Book of Numbers by Conway and Guy, the 32 constructible regular polygons with an odd number of sides correspond to the first 32 rows of Pascal’s triangle!

1 = 1
11 = 3
101 = 5
1111 = 15 = 3 × 5
10001 = 17
110011 = 51 = 3 × 17
1010101 = 85 = 5 × 17
11111111 = 255 = 3 × 5 × 17
100000001 = 257
1100000011 = 771 = 3 × 257
10100000101 = 1285 = 5 × 257
101010010101 = 3855 = 3 × 5 × 257

and so on. Here are all 32 rows, borrowed from the Online Encyclopedia of Integer Sequences:

And here are all 32 odd numbers n for which we know that a regular n-gon is constructible by straight-edge and compass:

1, 3, 5, 15, 17, 51, 85, 255, 257, 771, 1285, 3855, 4369, 13107, 21845, 65535, 65537, 196611, 327685, 983055, 1114129, 3342387, 5570645, 16711935, 16843009, 50529027, 84215045, 252645135, 286331153, 858993459, 1431655765, 4294967295
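As a sketch (my own check, not part of the post), one can confirm that this list is exactly the set of 32 products of distinct Fermat primes, and that it matches the first 32 rows of Pascal's triangle mod 2 read as binary numbers:

```python
from itertools import combinations
from math import prod

fermat_primes = [3, 5, 17, 257, 65537]

# All 2**5 = 32 products of distinct Fermat primes (empty product = 1).
products = sorted(
    prod(c) for k in range(6) for c in combinations(fermat_primes, k)
)

# First 32 rows of Pascal's triangle mod 2 as binary numbers,
# generated by the XOR recurrence r -> r XOR (r << 1).
rows, r = [], 1
for _ in range(32):
    rows.append(r)
    r ^= r << 1

print(products == sorted(rows))   # True
print(max(products))              # 4294967295
```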

So, the largest known odd n for which a regular n-gon is constructible is 4294967295. This is the product of all 5 known Fermat primes:

4294967295 = 3 × 5 × 17 × 257 × 65537

Thanks to Puzzle 2, this is 2 less than the next Fermat number:

4294967295 = 2^{2^5} - 1

We can construct a regular polygon with one more side, namely

4294967296 = 2^{2^5}

sides, because this is a power of 2. But we can’t construct a regular polygon with one more side than that, namely

4294967297 = 2^{2^5} + 1

because Euler showed this Fermat number is not prime.

So, we’ve hit the end of the road… unless someone discovers another Fermat prime.

by John Baez at February 05, 2019 05:53 PM

Lubos Motl - string vacua and pheno

Realistic fermion masses from D6-branes
The most interesting hep-ph paper today is
All Fermion Masses and Mixings in an Intersecting D-brane World
by Van Mayes of Houston. Well, it's a string phenomenology paper, so it's more interesting than a dozen average hep-ph preprints combined. Since my childhood, I have wanted to calculate the "constants of Nature". It took some time to understand that one may only calculate the dimensionless ones – those that don't depend on a social convention, the choice of units. Mass ratios of elementary particles were the first constants I was obsessed with – even before the fine-structure constant.

Well, at the beginning, I also failed to appreciate that the proton wasn't quite elementary so the proton-to-electron mass ratio, \(m_p/m_e\approx 1836.15\), was interesting enough. I figured out it was equal to \(6\pi^5\). Good numerology proves one's passion. ;-) I still think that the numerical agreement between this simple formula and the measured ratio is rather impressive.

OK, in more adult terms, the Standard Model has some 29 parameters. Most of them describe the mass matrices of the quarks and leptons. Wouldn't it be great to calculate them? String theory in principle allows you to calculate all the dimensionless constants encoded in the masses – once you insert a finite number of bits that describe the string compactification, you may calculate all such constants with arbitrary accuracy, at least in principle and once you figure out a calculational framework that really allows you that accuracy.

String theory's realistic vacua require supersymmetry, for many reasons, and the maximum decompactified number of spacetime dimensions is 10, 11, or (if you promise to undo the decompactification of two soon) 12. The realistic classes of vacua in string/M/F-theory include
  • \(E_8 \times E_8\) heterotic strings on a Calabi-Yau three-fold
  • their strongly coupled limit, Hořava-Witten M-theory on a line interval times the Calabi-Yau
  • M-theory on a singular manifold of \(G_2\) holonomy
  • type IIA braneworlds with D6-branes
  • F-theory on Calabi-Yau four-folds or perhaps \(Spin(7)\) holonomy manifolds, perhaps with lots of fluxes
There are various dualities between these five groups. The second is the strong coupling limit of the first. The \(G_2\) holonomy manifolds of M-theory may be obtained from type IIA string theory with D6-branes – if those branes are replaced by Kaluza-Klein monopoles (which are ultimately smooth vacuum solutions of Einstein's equations). Also, if the \(G_2\) holonomy manifolds or the Calabi-Yau four-folds used for F-theory are written as certain fibrations, one may find a dual heterotic string theory. And there are a few more.

Mayes discusses some developments in type IIA string theory with D6-branes. There's some sense in which I love the five classes above equally. The IIA braneworlds with D6-branes are a great class of semi-realistic string compactifications. Incidentally, you may understand those braneworlds rather well from Barton Zwiebach's undergraduate textbook of string theory!

Just to be sure, I am not 100% sure that our world has to be described by a string vacuum from at least one of the five classes above (the number of correct classes may be higher than one due to dualities – equivalences across the groups). There may be other classes we have overlooked due to our incomplete understanding of string theory or string theory may be wrong as a theory of our Universe, in principle. But even if the confidence were just in "dozens of percent" that our Universe belongs to one of those groups, I would view it as a moral imperative for a sufficiently intelligent person to dedicate some time to get closer to such a TOE – or to find a viable alternative to string theory (which seems extremely unlikely to me).

Mayes uses type IIA string theory on an orbifold of the six-torus, \(T^6/\ZZ_2\times \ZZ_2\). The hidden six dimensions aren't too complicated – they are flat, in fact. You just make all six flat dimensions periodic, using a lattice. That's what a six-torus is. The orbifold means the "division by the group", in this case \(\ZZ_2\times \ZZ_2\): you identify points (or physical configurations, more generally) that are related by the geometric (or generalized) transformations representing the group elements. Here the orbifold group has \(2\times 2 =4\) elements. One of them is the identity element and the other three are "analogous to each other". So although \(\ZZ_2\times \ZZ_2\) looks like a group that is "all about the number two and its powers", there is a triplet hiding underneath it.

The group acts on the three complex coordinates labeling the six-torus, \(z_1,z_2,z_3\), by changing the signs of two of the three coordinates (or by doing nothing). You may check that these operations are closed under composition and the group is isomorphic to \(\ZZ_2\times \ZZ_2\). These orbifolds have been considered interesting since the mid 1980s, the First Superstring Revolution, and with the D6-branes added, they have been known to be damn promising in phenomenology since 2000 or so. Note that there also exist interesting orbifolds of the torus \(T^6\) that involve the group \(\ZZ_3\) – but then the six-torus must split into two-tori defined with the angle of 120 degrees. The angles in Mayes' tori may be arbitrary.
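The closure claim is trivial to verify explicitly; here is a sketch (mine), representing each group element by its triple of signs acting on \((z_1,z_2,z_3)\):

```python
from itertools import product

# The four sign-flip operations: identity plus the three elements that
# flip the signs of exactly two of the three coordinates.
elements = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]

def compose(g, h):
    """Compose two sign-flip operations (componentwise multiplication)."""
    return tuple(a * b for a, b in zip(g, h))

for g, h in product(elements, repeat=2):
    assert compose(g, h) in elements        # closure
for g in elements:
    assert compose(g, g) == (1, 1, 1)       # every element is its own inverse
print("closed; every element squares to the identity, i.e. Z2 x Z2")
```

Since every non-identity element squares to the identity, the group cannot be the cyclic \(\ZZ_4\), so it is indeed \(\ZZ_2\times\ZZ_2\); the three non-identity elements are visibly "analogous to each other", as claimed above.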

Most of the key papers that Mayes uses are about one decade old – papers by Cvetič, Shiu, Uranga; Chen, Li, Mayes, Nanopoulos, and others. Type IIA string theory is great for braneworlds because the fermions and the Higgs doublet emerge really naturally from the branes.

Note that D6-branes fill the 3+1-dimensional spacetime and have 3 extra dimensions along the compactified directions. Those latter 3 are exactly one half of the number of compactified dimensions (3 + 3 = 6), which means that two generic D6-branes intersect at a point of the extra dimensions. The intersection is where some extra fields may live – fields arising from open strings stretched between two different D6-branes.

On top of that, cubic couplings such as the Yukawa couplings may be calculated from "open world sheet instantons" – triangular (topologically disk-shaped) fundamental world sheets stretched between the three intersections where the three fields entering the cubic coupling live! That's wonderful because such "open world sheet instanton" effects are naturally suppressed by \(\exp(-AT)\), where \(A\) is the area and \(T\) is the string tension. The braneworld thus has a natural "exponential" explanation of why the Yukawa couplings may be very small – and why they may differ from each other by orders of magnitude.
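As a purely schematic illustration (my own made-up numbers, not from the paper): with the suppression factor \(\exp(-AT)\), modest order-one differences in triangle areas translate into Yukawa couplings spread over orders of magnitude.

```python
import math

# Toy illustration of exp(-A*T) suppression: hypothetical triangle areas
# (and a string tension set to 1 in some units) are my own assumptions.
T = 1.0
areas = [1.0, 3.0, 5.0, 7.0]

yukawas = [math.exp(-a * T) for a in areas]
for a, y in zip(areas, yukawas):
    print(f"A = {a}: y ~ {y:.1e}")
```

Each unit of extra area costs a factor of \(e\approx 2.7\), so areas differing by a few units already separate the couplings by factors of ten or more – the kind of hierarchy seen among the measured fermion masses.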

In another subfield, pure phenomenology, people have been playing with the fermion mass matrices for some time. The masses of quarks had been mostly understood by the 1970s – the top quark mass, measured in the mid 1990s, was really the only newly added parameter. On the other hand, the neutrino masses – only seen through neutrino oscillations – have only been measured with increasing clarity since the late 1990s or so.

By now, the lepton masses, plus the (squared) mass differences of the neutrinos and the mixing angles, have been measured about as precisely and completely as their quark counterparts. So in the quark sector, you basically need to know the six quark mass eigenvalues (three upper, three lower quarks) and the CKM matrix depending on four parameters.

The story in the lepton sector is almost the same, except that the upper and lower quarks are replaced by charged leptons and their neutrinos; the mixing matrix is called the PMNS matrix; there is one "overall" parameter labeling the neutrino masses that is unknown (only the differences of squared masses are known, as I mentioned, because only the differences affect the oscillations, which is how the neutrino mass parameters are being measured – we haven't seen a neutrino in its rest frame yet); and there is a possibility that the neutrino masses aren't really Dirac masses but Majorana masses – in which case their fundamental origin could be inequivalent to that of the quark masses.

The neutrino mass-and-mixing matrices have been measured. One can see that the neutrinos are much lighter than the charged leptons and all the quarks. On top of that, they are apparently much more mixed than the quarks. All the angles in the CKM matrix are "rather small". On the other hand, many angles in the PMNS matrix seem to be "very far from zero and from all multiples of 90 degrees". That means they're close to things like 45 degrees.

OK, a bit quantitatively. The CKM matrix is a \(3\times 3\) unitary matrix in \(U(3)\) – it encodes the transformation you have to apply to the 3 upper-type quark mass eigenstates to get the upper \(SU(2)\) partners of the 3 lower-type quark mass eigenstates. Five of the phases may be thrown away by redefining the six phases of the quark mass eigenstates (the phase which rotates all 6 quarks equally doesn't affect the CKM matrix, so it's one parameter that has to be "subtracted from the subtraction"). It means that out of 9 parameters in the \(U(3)\) matrix, four are left – basically three real angles of an \(SO(3)\) matrix and one CP-violating "complex angle".
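For the record, this counting generalizes to \(n\) generations: a \(U(n)\) matrix has \(n^2\) real parameters and \(2n-1\) phases may be removed, leaving\[

n^2 - (2n-1) = (n-1)^2 = \frac{n(n-1)}{2} + \frac{(n-1)(n-2)}{2},

\] i.e. \(n(n-1)/2\) real angles plus \((n-1)(n-2)/2\) CP-violating phases. For \(n=3\) this gives \(3+1=4\) parameters, while for \(n=2\) it gives a single angle (the Cabibbo angle) and no CP violation – which is why a third generation is needed for CP violation in the CKM matrix.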

It's similar with the neutrinos' PMNS matrix. There's some CKM-like unitary matrix \(U\). A funny observation was that this matrix was close to\[

U_{TB} = \pmatrix {\sqrt{2/3} & \sqrt{1/3} & 0 \\ -\sqrt{1/6} & \sqrt{1/3} & - \sqrt{1/2} \\ -\sqrt{1/6}&\sqrt{1/3}&\sqrt{1/2} }.

\] All the matrix entries are (plus minus) square roots of small integer multiples of \(1/6\). You may check that it's a unitary matrix: all pairs of rows are orthogonal to each other, all rows have length equal to one, and to make a check, the same two types of conditions hold for columns or their pairs, too.
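The unitarity check is easy to carry out numerically; here is a quick sketch (mine, not from any paper):

```python
import math

# Numerical check that the tribimaximal matrix U_TB is unitary,
# i.e. that its rows form an orthonormal set.
s = math.sqrt
U = [
    [ s(2/3),  s(1/3),  0.0    ],
    [-s(1/6),  s(1/3), -s(1/2) ],
    [-s(1/6),  s(1/3),  s(1/2) ],
]

def dot(u, v):
    """Real dot product of two rows."""
    return sum(a * b for a, b in zip(u, v))

for i in range(3):
    for j in range(3):
        expected = 1.0 if i == j else 0.0
        assert abs(dot(U[i], U[j]) - expected) < 1e-12
print("U_TB is unitary: rows are orthonormal")
```

For instance, the first two rows are orthogonal because \(\sqrt{2/3}\cdot(-\sqrt{1/6}) + \sqrt{1/3}\cdot\sqrt{1/3} = -1/3 + 1/3 = 0\).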

This Ansatz for the PMNS matrix is very close to the observed one, and only a decade ago or so, this form of the PMNS matrix was actually falsified – primarily by seeing that the entry "zero" isn't quite zero. A new transformation involving neutrinos of the 1st and 3rd generations (because the vanishing entry is in the 1st row and 3rd column) was observed for the first time.

The matrix \(U_{TB}\) is an intriguing piece of numerology but is there any reason why this form should be the right one (or close to the right one)? The answer is that such reasons were found in the flavor symmetries. There are three generations of quarks and leptons. The generations have different masses but there may still be some symmetries acting on the three generations that constrain the form of the mass matrices – in a nontrivial but not "complete" way, so that different eigenvalues are still allowed.

This "more serious level of neutrino matrix numerology" has led the people to realize that the special form of the unitary matrix above, the "tribimaximal mixing", may be derived from the assumption of flavor symmetries, either \(A_4\) or \(\Delta(27)\), two finite groups. The first is just the group of even permutations of four elements. The second one is more complex and I discussed it in a similar blog post six years ago and e.g. this 8-year-old one.

Mayes singled out his pet D-braneworld model and argued that it produces a close-to-tribimaximal mixing matrix – which is non-trivial – and that with some choice of the vevs of the many Higgses in the model, all the parameters determining the fermion masses and mixing seem to be OK, too. He seems to assume many values of the parameters. In the end, I think that he can't calculate a single combination of them from first principles – although I am not sure, maybe he claims that he can.

But even if the "nominal" predictive power of his construction is zero, he has done some non-trivial reverse engineering of the fermion mass parameters. The braneworld he has can apparently rather naturally – in some colloquial sense, but maybe also a technical sense – explain the hierarchy between the fermion masses and the nearly maximal mixing of some neutrino species, among other things.

There are many qualitative choices one can make while choosing a type IIA D-braneworld. Ideally, we would want the number of predictions that arise from his model to be greater than the number of choices that had to be made – imagine both credits and debits are counted in bits or nats. But even if that comparison indicates that he hasn't produced more than he inserted, it's still true that the number of detailed microscopic theories that have a reasonable chance to explain the spectrum of the Standard Model, approximate values of the masses and/or their hierarchies, and the approximate values of the mixing angles, is extremely limited.

Grand unification can do something, but it's always limited because grand unified theories still have some parameters. His string compactification ends up being a Pati-Salam theory, which is strictly speaking not a grand unified theory because the gauge group has more than one factor. But his Pati-Salam theory behaves much like a grand unified theory, exhibits gauge coupling unification, and so on. There's also a \(U(1)_{B-L}\) gauge group in it.

It seems plausible to me that models like that are so amazingly on the right track that a few weeks or months or years of work by some folks could have a chance to "nearly prove" that the model is actually right – that it predicts something. I think it's just terribly painful for this Earth with more than 7 billion humans to only produce "several" people who work at the D-braneworld string phenomenology at this moment (and similarly "several" \(G_2\) holonomy phenomenologists, and analogously with the other three – I guess that the F-theory researcher class is most numerous right now), a truly fascinating subfield. Individuals who would like to reduce this number and similar numbers of researchers further are simply animals. I will never consider them full-blown human beings.

by Luboš Motl ( at February 05, 2019 04:59 PM

February 04, 2019

ZapperZ - Physics and Physicists

When Condensed Matter Physics Became King
If you are one of those – or know one of those – who think physics is only the LHC, high-energy physics, string theory, etc., you need to read this excellent article.

When I first read it in my hard copy of Physics Today, the first thing that crossed my mind after I put it down was that this should be a must-read for the general public, but especially for high-school students and all of those bushy-tailed and bright-eyed incoming undergraduate students in physics. This is because they need to be introduced to a field of study that has become the "king" of physics. Luckily, someone pointed out to me that the article is available online.

Reading the article, the resistance to incorporating the "applied" side of physics into a physics professional organization was hard to imagine, though understandable. It was at a time when physics was still seen as something esoteric, with the grandiose idea of "understanding our world" in a very narrow sense.

Solid state’s odd constitution reflected changing attitudes about physics, especially with respect to applied and industrial research. A widespread notion in the physics community held that “physics” referred to natural phenomena and “physicist” to someone who deduced the rules governing them—making applied or industrial researchers nonphysicists almost by definition. But suspicion of that view grew around midcentury. Stanford University’s William Hansen, whose own applied work led to the development of the klystron (a microwave-amplifying vacuum tube), reacted to his colleague David Webster’s suggestion in 1943 that physics was defined by the pursuit of natural physical laws: “It would seem that your criterion sets the sights terribly high. How many physicists do you know who have discovered a law of nature? … It seems to me, this privilege is given only to a very few of us. Nevertheless the work of the rest is of value.”

Luckily, the APS did form the Division of Solid State Physics, and it quickly exploded from there.

By the early 1960s, the DSSP had become—and has remained since—the largest division of APS. By 1970, following a membership drive at APS meetings, the DSSP enrolled more than 10% of the society’s members. It would reach a maximum of just shy of 25% in 1989. Membership in the DSSP has regularly outstripped the division of particles and fields, the next largest every year since 1974, by factors of between 1.5 and 2.
This is a point that many people outside of physics do not realize. They, and the media, often make broad statements about physics and physicists based on what is happening in, say, elementary particle physics, or strings, or many of those other fields, when in reality, those areas are not even a valid representation of the field of physics because they are not the majority. Using, say, what is going on in high-energy physics to represent the whole field of physics is like using the city of Los Angeles as a valid representation of the United States. It is neither correct nor accurate!

This field, which has now morphed into condensed matter physics, is vibrant and encompasses such a huge variety of studies that the amount of work coming out of it each week or each month is mind-boggling. It is the only field of physics that has two separate sections in Physical Review Letters. Physical Review B comes out four (FOUR) times a month; only Phys. Rev. D has more than one edition per month (twice a month). The APS March Meeting, in which the Division of Condensed Matter Physics participates, continues to be the biggest annual physics conference in the world.

Everything about this field of study is big, important, high-impact, wide-ranging, and fundamental. But of course, as I've said multiple times here, it isn't sexy for most of the public and the media. So it never became the poster boy for physics, even though its practitioners make up the largest percentage of practicing physicists. Doug Natelson said as much in commenting on condensed matter physics's image problem:

Condensed matter also faces a perceived shortfall in inherent excitement. Black holes sound like science fiction. The pursuit of the ultimate reductionist building blocks, whether through string theory, loop quantum gravity, or enormous particle accelerators, carries obvious profundity. Those topics are also connected historically to the birth of quantum mechanics and the revelation of the power of the atom, when physicists released primal forces that altered both our intellectual place in the world and the global balance of power.

Compared with this heady stuff, condensed matter can sound like weak sauce: “Sure, they study the first instants after the Big Bang, but we can tell you why copper is shiny.” The inferiority complex that this can engender leads to that old standby: claims of technological relevance (for example, “this advance will eventually let us make better computers”). A trajectory toward applications is fine, but that tends not to move the needle for most of the public, especially when many breathless media claims of technological advances don’t seem to pan out.

It doesn’t have to be this way. It is possible to present condensed-matter physics as interesting, compelling, and even inspiring. Emergence, universality, and symmetry are powerful, amazing ideas. The same essential physics that holds up a white dwarf star is a key ingredient in what makes solids solid, whether we’re talking about a diamond or a block of plastic. Individual electrons seem simple, but put many of them together with a magnetic field in the right 2D environment and presto: excitations with fractional charges. Want electrons to act like ultrarelativistic particles, or act like their own antiparticles, or act like spinning tops pointing in the direction of their motion, or pair up and act together coherently? No problem, with the right crystal lattice. This isn’t dirt physics, and it isn’t squalid.

It is why I keep harping on the historical fact that Phil Anderson's work on a condensed matter system became the impetus for the Higgs mechanism in elementary particle physics, and that some of the most exotic consequences of QFT are found in complex materials (Majorana fermions, magnetic monopoles, etc.).

So if your view of physics has been just string theory, the LHC, etc., well, keep them, but include their BIG and more influential brother, condensed matter physics, which not only encompasses a great deal of important, fundamental work but also has a direct impact on your everyday life. It truly is the "King" of physics.


by ZapperZ at February 04, 2019 03:13 PM

February 01, 2019

Clifford V. Johnson - Asymptotia

Black Holes and Time Travel in your Everyday Life

Oh, look what I found! It is my talk "Black Holes and Time Travel in your Everyday Life", which I gave as the Klopsteg Award lecture at AAPT back in July. Someone put it on YouTube. I hope you enjoy it!

Two warnings: (1) Skip to about 6 minutes to start, to avoid all the embarrassing handshaking and awarding and stuff. (2) There's a bit of early morning slowness + jet lag in my delivery here and there, so sorry about that. :)


Abstract: [...] Click to continue reading this post

The post Black Holes and Time Travel in your Everyday Life appeared first on Asymptotia.

by Clifford at February 01, 2019 07:38 PM

Clifford V. Johnson - Asymptotia

Black Market of Ideas

As a reminder, today I'll be at the natural history museum (LA) as part of the "Night of Ideas" event! I'll have a number of physics demos with me and will be at a booth/table (in the Black Market of Ideas section) talking about physics ideas underlying our energy future as a species. I'll sign some books too! Come along!

Here's a link to the event:

Click to continue reading this post

The post Black Market of Ideas appeared first on Asymptotia.

by Clifford at February 01, 2019 07:35 PM

ZapperZ - Physics and Physicists

Standing Out From The Crowd In Large Collaboration
As someone who has never been involved in the huge collaborations that we see in high energy physics, I've often wondered how a graduate student or a postdoc makes a name for themselves. If you are one of dozens, even hundreds, of authors on a paper, how do you get recognized?

It seems that this issue is finally being addressed by the high energy physics community, at least in Europe. A working group has been established to look into ways for students, postdocs, and early-career researchers to stand out from the crowd and have their efforts recognized individually.

To fully exploit the potential of large collaborations, we need to bring every single person to maximum effectiveness by motivating and stimulating individual recognition and career choices. With this in mind, in spring 2018 the European Committee for Future Accelerators (ECFA) established a working group to investigate what the community thinks about individual recognition in large collaborations. Following an initial survey addressing leaders of several CERN and CERN-recognised experiments, a community-wide survey closed on 26 October with a total of 1347 responses. 

Still, the article does not clarify exactly how this individual recognition will be done. I'd be interested to hear how they are going to do it.


by ZapperZ at February 01, 2019 05:56 PM

January 30, 2019

John Baez - Azimuth

From Classical to Quantum and Back

Damien Calaque has invited me to speak at FGSI 2019, a conference on the Foundations of Geometric Structures of Information. It will focus on the scientific legacy of Cartan, Koszul and Souriau. Since Souriau helped invent geometric quantization, I decided to talk about this. That's part of why I've been writing about it lately!

I’m looking forward to speaking to various people at this conference, including Mikhail Gromov, who has become interested in using category theory to understand biology and the brain.

Here’s my talk:

From classical to quantum and back.

Abstract. Edward Nelson famously claimed that quantization is a mystery, not a functor. In other words, starting from the phase space of a classical system (a symplectic manifold) there is no functorial way of constructing the correct Hilbert space for the corresponding quantum system. In geometric quantization one gets around this problem by equipping the classical phase space with extra structure: for example, a Kähler manifold equipped with a suitable line bundle. Then quantization becomes a functor. But there is also a functor going the other way, sending any Hilbert space to its projectivization. This makes quantum systems into specially well-behaved classical systems! In this talk we explore the interplay between classical mechanics and quantum mechanics revealed by these functors going both ways.

For more details, read these articles:

  • Part 1: the mystery of geometric quantization: how a quantum state space is a special sort of classical state space.
  • Part 2: the structures besides a mere symplectic manifold that are used in geometric quantization.
  • Part 3: geometric quantization as a functor with a right adjoint, ‘projectivization’, making quantum state spaces into a reflective subcategory of classical ones.
  • Part 4: making geometric quantization into a monoidal functor.
  • Part 5: the simplest example of geometric quantization: the spin-1/2 particle.
  • Part 6: quantizing the spin-3/2 particle using the twisted cubic; coherent states via the adjunction between quantization and projectivization.
  • Part 7: the Veronese embedding as a method of ‘cloning’ a classical system, and taking the symmetric tensor powers of a Hilbert space as the corresponding method of cloning a quantum system.
  • Part 8: cloning a system as changing the value of Planck’s constant.

by John Baez at January 30, 2019 06:35 AM

    January 28, 2019

    John Baez - Azimuth

    Systems as Wiring Diagram Algebras


    Check out the video of Christina Vasilakopoulou’s talk, the third in the Applied Category Theory Seminar here at U. C. Riverside! It was nicely edited by Paola Fernandez and uploaded by Joe Moeller.

    Abstract. We will start by describing the monoidal category of labeled boxes and wiring diagrams and its induced operad. Various kinds of systems such as discrete and continuous dynamical systems have been expressed as algebras for that operad, namely lax monoidal functors into the category of categories. A major advantage of this approach is that systems can be composed to form a system of the same kind, completely determined by the specific way the composite systems are interconnected (‘wired’ together). We will then introduce a generalized system, called a machine, again as a wiring diagram algebra. On the one hand, this abstract concept is all-inclusive in the sense that discrete and continuous dynamical systems are sub-algebras; on the other hand, we can specify succinct categorical conditions for totality and/or determinism of systems that also adhere to the algebraic description.

    Reading material:

    • Patrick Schultz, David I. Spivak and Christina Vasilakopoulou, Dynamical systems and sheaves.

    • David I. Spivak, The operad of wiring diagrams: formalizing a graphical language for databases, recursion, and plug-and-play circuits.

    • Dmitry Vagner, David I. Spivak and Eugene Lerman, Algebras of open dynamical systems on the operad of wiring diagrams.

    by John Baez at January 28, 2019 11:40 PM

    January 23, 2019

    ZapperZ - Physics and Physicists

    Have you ever wanted to know about the US Fermi National Accelerator Laboratory, or Fermilab?

    Don Lincoln has finally made a video on everything you want to know about Fermilab, especially if you think that they don't do much these days now that the Tevatron is long gone.

    As someone who has visited numerous times and collaborated with scientists and engineers at this facility, I can say it is a neat place to visit if you have the chance.


    by ZapperZ at January 23, 2019 07:29 PM

    January 21, 2019

    ZapperZ - Physics and Physicists

    Tommaso Dorigo's "False Claims In Particle Physics"
    Hey, you should read this blog post by Tommaso Dorigo. It touches upon many of the myths regarding particle physics, especially the hype surrounding the name "god particle", as if that means something.

    I've touched upon some of the issues he brings up. I think many of us who are active online and deal with the media and the public tend to see the same things: the same mistakes and the same misinformation being put into print. One can only hope that by repeatedly pointing out such myths and why they are wrong, the message will slowly seep into the public consciousness.

    I just wish it were seeping through faster.


    by ZapperZ at January 21, 2019 03:37 PM

    January 20, 2019

    ZapperZ - Physics and Physicists

    Negative Capacitance in Ferroelectric Material Finally Found
    I love this sort of report: it is based on a material that was discovered long ago and is rather common; it is based on a consequence of a theory; it has both direct applications and rich physics; and finally, it bears an amazing resemblance to something many physics students have seen in textbooks.

    A group of researchers has finally confirmed the existence of negative capacitance in the ferroelectric material hafnium zirconium oxide Hf0.5Zr0.5O2. (You may access the Nature paper here or from that news article.)

    Researchers led by Michael Hoffmann have now measured the double-well energy landscape in a thin layer of ferroelectric Hf0.5Zr0.5O2 for the first time and so confirmed that the material indeed has negative capacitance. To do this, they first fabricated capacitors with a thin dielectric layer on top of the ferroelectric. They then applied very short voltage pulses to the electrodes of the capacitor, while measuring both the voltage and the charge on it with an oscilloscope.

    “Since we already knew the capacitance of the dielectric layer from separate experiments, we were then able to calculate the polarization and electric field in the ferroelectric layer,” Hoffmann tells Physics World. “We then calculated the double-well energy landscape by integrating the electric field with respect to the polarization.”

    Of course, there are plenty of potential applications for something like this.

    One of the most promising applications utilising negative capacitance are electronic circuits with much lower power dissipation that could be used to build more energy efficient devices than any that are possible today, he adds. “We are working on making such devices, but it will also be very important to design further experiments to probe the negative capacitance region in the structures we made so far to help improve our understanding of the fundamental physics of ferroelectrics.”

    But the most interesting part for me is that, if you look at Fig. 1 of the Nature paper, the double-well structure is something many of us former and current physics students will have seen. I remember solving this double-well problem in my graduate-level QM class. Of course, we were solving it in an energy-versus-position dimension, instead of the energy-versus-polarization dimension shown in the figure.
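To connect the figure to something familiar: in the textbook Landau picture of a ferroelectric (a schematic sketch, not the specific model fitted in the Nature paper), the free energy is a double well in the polarization, and the negative capacitance is simply the negative curvature of the barrier between the two wells:

```latex
% Landau free energy of a ferroelectric (schematic):
%   two minima at P = \pm\sqrt{-\alpha/\beta} form the double well.
U(P) = \frac{\alpha}{2}P^2 + \frac{\beta}{4}P^4,
\qquad \alpha < 0,\ \beta > 0
% The electric field is E = \partial U/\partial P, so integrating E
% with respect to P (as Hoffmann describes) recovers U(P).
% The inverse capacitance is proportional to the curvature of U:
C^{-1} \propto \frac{\partial^2 U}{\partial P^2} = \alpha + 3\beta P^2
% Near P = 0 the curvature equals \alpha < 0: negative capacitance,
% i.e. the barrier region between the two wells.
```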


    by ZapperZ at January 20, 2019 03:21 PM

    January 18, 2019

    Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

    Back to school

    It was back to college this week, a welcome change after some intense research over the hols. I like the start of the second semester, there’s always a great atmosphere around the college with the students back and the restaurants, shops and canteens back open. The students seem in good form too, no doubt enjoying a fresh start with a new set of modules (also, they haven’t yet received their exam results!).

    This semester, I will teach my usual introductory module on the atomic hypothesis and early particle physics to second-years. As always, I'm fascinated by the way the concept of the atom emerged from different roots and different branches of science: from philosophical considerations in ancient Greece to the chemistry of the 18th century, from the study of chemical reactions in the 19th century to statistical mechanics around the turn of the century. Not to mention a brilliant young patent clerk who became obsessed with the idea of showing that atoms really exist, culminating in his famous paper on Brownian motion. But did you know that Einstein suggested at least three different ways of measuring Avogadro’s constant? And each method contributed significantly to establishing the reality of atoms.


     In 1908, the French physicist Jean Perrin demonstrated that the motion of particles suspended in a liquid behaved as predicted by Einstein’s formula, derived from considerations of statistical mechanics, giving strong support for the atomic hypothesis.  

    One change this semester is that I will also be involved in delivering a new module,  Introduction to Modern Physics, to first-years. The first quantum revolution, the second quantum revolution, some relativity, some cosmology and all that.  Yet more prep of course, but ideal for anyone with an interest in the history of 20th century science. How many academics get to teach interesting courses like this? At conferences, I often tell colleagues that my historical research comes from my teaching, but few believe me!


    Then of course, there’s also the module Revolutions in Science, a course I teach on Mondays at University College Dublin; it’s all go this semester!

    by cormac at January 18, 2019 04:15 PM

    Clifford V. Johnson - Asymptotia

    An Update!

    Well, hello to you and to 2019!

    It has been a little while since I wrote here - not since last month, when it was also last year - so let's break that stretch. It was not a stretch of complete quiet, as those of you who follow on social media know (Twitter, Instagram, Facebook... see the sidebar for links), but I do know some of you don't follow directly on social media, so I apologise for the neglect.

    The fact is that I've been rather swamped with several things, including various duties that were time-consuming. Many of them I can't talk about, since they are not for public consumption (this ranges from being a science advisor on various things - some of which will be coming at you later in the year - to research projects that I'd rather not talk about yet, to sitting on various committees doing the service work that most academics do to help the whole enterprise stay afloat). The most time-consuming of the ones I can talk about is probably being on the search committee for an astrophysics job for which we have an opening here at USC. This is exciting since it means that we'll have a new colleague soon, doing exciting things in one of a variety of exciting areas in astrophysics. Which area is still to be determined, since we've yet to finish the search. But it did involve reading through a very large number of applications (CVs, cover letters, statements of research plans, teaching philosophies, letters of recommendation, etc.), and meeting several times with colleagues to narrow things down to a (remarkable) short list... then hosting visitors/interviewees, arranging meetings, and so forth. It is rather draining, while at the same time being very exciting since it marks a new beginning! It has been a while since we hired in this area in the department, and there's optimism that this marks the beginning of a re-invigoration for certain research areas here.

    Physics research projects have been on my mind a lot, of course. I remain very excited about the results that I reported on in a post back in June, and I've been working on new ways of building on them. (Actually, I did already do a follow-up paper that I did not write about here. For those who are interested, it is a whole new way of defining a generalisation of something called the Rényi entropy, which may be of interest to people in many fields, from quantum information to string theory. I ought to do a post, since it is a rather nice construction that could be useful in ways I've not thought of!) I've been doing some new explorations of how to exploit the central results in useful ways: finding a direct link between the Second Law of Thermodynamics and properties of RG flow in quantum field theory ought to have several consequences beyond the key one I spelled out in the paper with Rosso (that Zamolodchikov's C-theorem follows). In particular, I want to sharpen it even further in terms of something following from heat engine constraints, as I've been aiming to do for a while. (See the post for links to earlier posts about the 'holographic heat engines' and their role.)

    You might be wondering how the garden is doing, since that's something I post about here from time to time. Well, right now there is an on-going deluge of rain (third day in a row) that is a pleasure to see. The photo at the top of the page is one I took a few days ago when the sky was threatening the downpours we're seeing now. The rain and the low temperatures for a while will certainly help to renew and refresh things out there for the (early) Spring planting I'll do soon. There'll be fewer bugs and bug eggs that will [...] Click to continue reading this post

    The post An Update! appeared first on Asymptotia.

    by Clifford at January 18, 2019 06:14 AM

    January 17, 2019

    Robert Helling - atdotde

    Has your password been leaked?
    Today, there was news that a huge database containing 773 million email address / password pairs had become public. On Have I Been Pwned you can check whether any of your email addresses is in this database (or any similar one). I bet it is (mine are).

    These lists are very probably the source of the spam emails that have been going around for a number of months, in which the spammer claims they broke into your account and tries to prove it by telling you your password. Hopefully it is only a years-old LinkedIn password that you changed aeons ago.

    To make sure, you actually want to search not for your email but for your password. But of course you don't want to tell anybody your password. To this end, I have written a small Perl script that checks for your password without telling anybody, by doing the calculation locally on your computer. You can find it on GitHub.
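Helling's script is in Perl; here is a minimal Python sketch of the same idea, using the Pwned Passwords k-anonymity range API documented by Have I Been Pwned (the function names are mine, for illustration). Only the first five hex characters of the password's SHA-1 hash ever leave your machine:

```python
import hashlib
import urllib.request


def sha1_prefix_suffix(password: str):
    """Split the uppercase SHA-1 hex digest of the password into the
    5-character prefix sent to the API and the 35-character suffix
    that stays local."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]


def pwned_count(password: str) -> int:
    """Return how often the password appears in the Pwned Passwords
    corpus. The server only ever sees the 5-char hash prefix; the
    suffix comparison happens locally."""
    prefix, suffix = sha1_prefix_suffix(password)
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    # Each response line has the form "<35-char-suffix>:<count>".
    for line in body.splitlines():
        candidate, _, count = line.strip().partition(":")
        if candidate == suffix:
            return int(count)
    return 0


if __name__ == "__main__":
    n = pwned_count("correct horse battery staple")
    print("seen in breaches:" if n else "not found:", n)
```

Because the API returns every suffix sharing your five-character prefix, the server cannot tell which password (if any) you were actually checking.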

    by Robert Helling at January 17, 2019 07:43 PM

    January 15, 2019

    Jon Butterworth - Life and Physics

    Conceptual design for a post-LHC future circular collider at CERN
    This conceptual design report came out today. It looks like an impressive amount of work and although I am familiar with some of its contents, it will take time to digest, and I will undoubtedly be writing more about it … Continue reading

    by Jon Butterworth at January 15, 2019 10:08 PM

    January 12, 2019

    Sean Carroll - Preposterous Universe

    True Facts About Cosmology (or, Misconceptions Skewered)

    I talked a bit on Twitter last night about the Past Hypothesis and the low entropy of the early universe. Responses reminded me that there are still some significant misconceptions about the universe (and the state of our knowledge thereof) lurking out there. So I’ve decided to quickly list, in Tweet-length form, some true facts about cosmology that might serve as a useful corrective. I’m also putting the list on Twitter itself, and you can see comments there as well.

    1. The Big Bang model is simply the idea that our universe expanded and cooled from a hot, dense, earlier state. We have overwhelming evidence that it is true.
    2. The Big Bang event is not a point in space, but a moment in time: a singularity of infinite density and curvature. It is completely hypothetical, and probably not even strictly true. (It’s a classical prediction, ignoring quantum mechanics.)
    3. People sometimes also use “the Big Bang” as shorthand for “the hot, dense state approximately 14 billion years ago.” I do that all the time. That’s fine, as long as it’s clear what you’re referring to.
    4. The Big Bang might have been the beginning of the universe. Or it might not have been; there could have been space and time before the Big Bang. We don’t really know.
    5. Even if the BB was the beginning, the universe didn’t “pop into existence.” You can’t “pop” before time itself exists. It’s better to simply say “the Big Bang was the first moment of time.” (If it was, which we don’t know for sure.)
    6. The Borde-Guth-Vilenkin theorem says that, under some assumptions, spacetime had a singularity in the past. But it only refers to classical spacetime, so says nothing definitive about the real world.
    7. The universe did not come into existence “because the quantum vacuum is unstable.” It’s not clear that this particular “Why?” question has any answer, but that’s not it.
    8. If the universe did have an earliest moment, it doesn’t violate conservation of energy. When you take gravity into account, the total energy of any closed universe is exactly zero.
    9. The energy of non-gravitational “stuff” (particles, fields, etc.) is not conserved as the universe expands. You can try to balance the books by including gravity, but it’s not straightforward.
    10. The universe isn’t expanding “into” anything, as far as we know. General relativity describes the intrinsic geometry of spacetime, which can get bigger without anything outside.
    11. Inflation, the idea that the universe underwent super-accelerated expansion at early times, may or may not be correct; we don’t know. I’d give it a 50% chance, lower than many cosmologists but higher than some.
    12. The early universe had a low entropy. It looks like a thermal gas, but that’s only high-entropy if we ignore gravity. A truly high-entropy Big Bang would have been extremely lumpy, not smooth.
    13. Dark matter exists. Anisotropies in the cosmic microwave background establish beyond reasonable doubt the existence of a gravitational pull in a direction other than where ordinary matter is located.
    14. We haven’t directly detected dark matter yet, but most of our efforts have been focused on Weakly Interacting Massive Particles. There are many other candidates we don’t yet have the technology to look for. Patience.
    15. Dark energy may not exist; it’s conceivable that the acceleration of the universe is caused by modified gravity instead. But the dark-energy idea is simpler and a more natural fit to the data.
    16. Dark energy is not a new force; it’s a new substance. The force causing the universe to accelerate is gravity.
    17. We have a perfectly good, and likely correct, idea of what dark energy might be: vacuum energy, a.k.a. the cosmological constant. An energy inherent in space itself. But we’re not sure.
    18. We don’t know why the vacuum energy is much smaller than naive estimates would predict. That’s a real puzzle.
    19. Neither dark matter nor dark energy are anything like the nineteenth-century idea of the aether.

    Feel free to leave suggestions for more misconceptions. If they’re ones that I think many people actually have, I might add them to the list.

    by Sean Carroll at January 12, 2019 08:31 PM

    January 10, 2019

    Jon Butterworth - Life and Physics

    The award-winning blogger beard Telescoper used to do astronomy look-a-likes, which unfortunately sometimes strayed into other fields. If he strayed a bit further I think he’d find a striking one in today’s news:

    by Jon Butterworth at January 10, 2019 09:09 AM

    January 09, 2019

    Dmitry Podolsky - NEQNET: Non-equilibrium Phenomena

    Physical Methods of Hazardous Wastewater Treatment

Hazardous waste comprises all types of waste with the potential to cause harmful effects on the environment and on human and animal health. It is generated from multiple sources, including industries, commercial properties and households, and comes in solid, liquid and gaseous forms.

Different localities have different local and state laws regarding the management of hazardous waste. Irrespective of your jurisdiction, management runs from proper hazardous waste collection from your Utah property through to its eventual disposal.

Once waste has been collected using the appropriate structures recommended by environmental protection authorities, there are many methods of treating it. One of the most common and inexpensive is physical treatment. The following are the physical treatment options for hazardous wastewater.


    Sedimentation

    In this treatment technique, the waste is separated into a liquid and a solid. The solid waste particles in the liquid are left to settle at the bottom of a container through gravity. Sedimentation is done as a continuous or batch process.

    Continuous sedimentation is the standard option and is generally used for the treatment of large quantities of liquid waste. It is often used to separate heavy metals in the steel, copper and iron industries and fluoride in the aluminum industry.


    Electro-dialysis

    This treatment method separates wastewater into an ion-depleted stream and an ion-concentrated aqueous stream. The wastewater passes through alternating cation- and anion-permeable membranes in a compartment.

    A direct current is then applied, causing cations and anions to pass in opposite directions. This results in one solution with elevated concentrations of positive and negative ions and another with a low ion concentration.

    Electro-dialysis is used to enrich or deplete chemical solutions in manufacturing, desalting whey in the food sector and generating potable water from saline water.

    Reverse Osmosis


    This uses a semi-permeable membrane to separate dissolved organic and inorganic elements in wastewater. The wastewater is forced through the semi-permeable membrane under pressure, and larger molecules are filtered out by the small membrane pores.

    Polyamide membranes have largely replaced polysulphone ones for wastewater treatment, owing to their ability to withstand liquids with high pH. Reverse osmosis is usually used in the desalination of brackish water and in treating electroplating rinse waters.

    Solvent Extraction

    This involves separating the components of a liquid through contact with an immiscible liquid. The most common solvent used in this treatment technique is a supercritical fluid (SCF), mainly CO2.

    These fluids exist above the critical temperature and pressure, and have a low density and fast mass transfer when mixed with other liquids. Solvent extraction is used for extracting oil from the emulsions used in steel and aluminum processing and organohalide pesticides from treated soil.

    Supercritical ethane as a solvent is also useful for purifying waste oils contaminated with water, metals, and PCBs.

    Some companies and households have tried handling their hazardous wastewater themselves to minimize costs. In most cases this puts their employees at risk, since the "treated" water is still often dangerous to human health, the environment and their machines.

    The physical processes above, sometimes used together with chemical treatment techniques, are the surest route to truly safe wastewater.

    The post Physical Methods of Hazardous Wastewater Treatment appeared first on None Equilibrium.

    by Nonequilibrium at January 09, 2019 11:35 PM

    Jon Butterworth - Life and Physics

    A Dark Matter mystery explained?
    A paper on the arXiv this morning offers an explanation for an intriguing, long-standing anomalous result from the DAMA experiment. According to our current best model of how the universe hangs together, the Earth orbits the Sun within a galactic … Continue reading

    by Jon Butterworth at January 09, 2019 12:34 PM

    January 08, 2019

    Axel Maas - Looking Inside the Standard Model

    Taking your theory seriously
    This blog entry is somewhat different from usual. Rather than writing about a particular research project, I will write about a general vibe directing my research.

    As usual, research starts with a 'why?'. Why does something happen, and why does it happen in this way? Being the theoretician that I am, this question often equates to wanting a mathematical description of both the question and the answer.

    Already very early in my studies I ran into peculiar problems with this desire. It usually left me staring at the words '...and then nature made a choice', asking myself, how could it? A simple example of the problem is a magnet. You all know that a magnet has a north pole and a south pole, and that these two are different. So, how is it decided which end of the magnet becomes the north pole and which the south pole? At the beginning you always get to hear that this is a random choice, and that one particular choice just happens to be made. But this is not really the answer. If you dig deeper, then you find that originally the metal of any magnet was very hot, likely liquid. In this situation, a magnet is not really magnetic. It becomes magnetic when it is cooled down and becomes solid. At some temperature (the so-called Curie temperature), it becomes magnetic, and the poles emerge. And here this apparent miracle of a 'choice by nature' happens. Only that it does not. The magnet does not cool down all by itself; it has a surrounding. And the surrounding can have magnetic fields as well, e.g. the Earth's magnetic field. And the decision of what is south and what is north is made by how the magnet forms relative to this field. And thus, there is a reason. We do not see it directly, because magnets have usually moved since then, and so this correlation is no longer obvious. But if we heated the magnet again and let it cool down again, we could observe it.

    But this immediately leaves you with the question of where the Earth's magnetic field comes from, and how it got its direction. Well, it comes from the liquid metallic core of the Earth, and aligns, more or less, along or opposite to the rotation axis of the Earth. Thus, the question becomes: how did the rotation axis of the Earth come about, and why does it have a liquid core? Both questions are well understood, and the answers arise from how the Earth formed billions of years ago, out of the mechanics of the rotating disk of dust and gas around our fledgling sun. Which in turn comes from the dynamics on even larger scales. And so on.

    As you see, whenever one had the feeling of a random choice, it was actually something outside of what we had looked at so far that made the decision. Such questions therefore always lead us to include more in what we try to understand.

    'Hey', I can now literally hear people who are a bit more acquainted with physics say, 'doesn't quantum mechanics make truly random choices?'. The answer to this is yes and no in equal measure. This is probably one of the more fundamental problems of modern physics. Yes, our description of quantum mechanics, as we teach it in courses, has intrinsic randomness. But when does it occur? Exactly whenever we jump outside of the box we describe in our theory. Real, random choice is encountered in quantum physics only when we transcend the system we are considering, e.g. by an external measurement. This is one of the reasons why this is known as the 'measurement problem'. If we stay inside the system, this does not happen. But at the expense of losing contact with things, like an ordinary magnet, which we are used to. The objects we are describing become obscure, and we talk about wave functions and stuff like this. Whenever we try to extend our description to also include the measurement apparatus, on the other hand, we again get something which is strange, but not as random as it originally looked. Although talking about it becomes almost impossible beyond the mathematical description, and it is not really clear what random even means in this context. This is one of the big problems in the conceptual foundations of physics. While it is related to what I am talking about here, the question can still be kept separate.

    And in fact, it is not this divide that I want to talk about, at least not today. I just wanted to get this type of 'quantum choice' out of the way. Rather, I want to get to something else.

    If we stay inside the system we describe, then everything becomes calculable. Our mathematical description is closed in the sense that, after fixing a theory, we can calculate everything. Well, at least in principle; in practice our technical capabilities may limit this, but that is of no importance for the conceptual point. Once we have fixed the theory, there is no choice anymore. There is no outside. And thus, everything needs to come from inside the theory. Hence a magnet in isolation will never magnetize, because there is nothing which can make a decision about how. The different possibilities are caught in an eternal balanced struggle, and none can win.
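    This 'balanced struggle' can be made concrete with a toy mean-field model (my own illustrative sketch, not from the post): below the Curie temperature the unmagnetised state is unstable, yet from a perfectly symmetric start, and with no external field, nothing inside the system can break the tie.

```python
import math

# Toy mean-field magnet: the magnetisation m relaxes under
# m <- tanh((zJ*m + h)/T).  For T < zJ the unmagnetised state m = 0 is
# unstable, but with no external field (h = 0) a perfectly symmetric
# start stays at m = 0 forever; any tiny outside field h decides the sign.
def relax(h, T=1.0, zJ=2.0, m=0.0, steps=200):
    for _ in range(steps):
        m = math.tanh((zJ * m + h) / T)
    return m
```

    With h = 0 the magnetisation stays exactly zero, while even h = ±0.01 drives it to a large value whose sign matches the field, mirroring how the Earth's field picks the poles of a cooling magnet.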

    Which makes a lot of sense, if you take physical theories really seriously. After all, one of the basic tenets is that there is no privileged frame of reference: 'Everything is relative'. If there is nothing else, nothing can happen which creates an absolute frame of reference, without violating the very principles on which we found physics. If we take our own theories seriously, and push them to the bitter end, this is what must come about.

    And here I come back to my own research. One of its driving principles has been to really push this seriousness, and to ask what it implies if one really, really takes it seriously. Of course, this is based on the assumption that the theory is (sufficiently) adequate, but that is everyday uncertainty for a physicist anyhow. It requires me to very, very carefully separate what is really inside and what is outside, and this leads to quite surprising results. Essentially most of my research on Brout-Englert-Higgs physics, as described in previous entries, comes about because of this approach. It leads partly to results quite at odds with common lore, which often means a lot of work to convince people. Even if the mathematics is valid and correct, interpretation issues are much more open to debate when it comes to implications.

    Is this point of view adequate? After all, we know for sure that we are not yet finished; our theories do not contain all there is, and there is an 'outside', however it may look. And I agree. But I think it is very important that we distinguish very clearly what is an outside influence and what is not. And as a first step to establish what is outside, and thus, in a sense, is 'new physics', we need to understand what our theories say when they are taken in isolation.

    by Axel Maas at January 08, 2019 10:15 AM

    January 06, 2019

    Jacques Distler - Musings

    TLS 1.0 Deprecation

    You have landed on this page because your HTTP client used TLSv1.0 to connect to this server. TLSv1.0 is deprecated and support for it is being dropped from both servers and browsers.

    We are planning to drop support for TLSv1.0 from this server in the near future. Other sites you visit have probably already done so, or will do so soon. Accordingly, please upgrade your client to one that supports at least TLSv1.2. Since TLSv1.2 has been around for more than a decade, this should not be hard.

    by Jacques Distler at January 06, 2019 06:12 AM

    The n-Category Cafe

    TLS 1.0 Deprecation

    You have landed on this page because your HTTP client used TLSv1.0 to connect to this server. TLSv1.0 is deprecated and support for it is being dropped from both servers and browsers.

    We are planning to drop support for TLSv1.0 from this server in the near future. Other sites you visit have probably already done so, or will do so soon. Accordingly, please upgrade your client to one that supports at least TLSv1.2. Since TLSv1.2 has been around for more than a decade, this should not be hard.

    by Jacques Distler at January 06, 2019 06:12 AM

    January 05, 2019

    The n-Category Cafe

    Applied Category Theory 2019 School

    Dear scientists, mathematicians, linguists, philosophers, and hackers:

    We are writing to let you know about a fantastic opportunity to learn about the emerging interdisciplinary field of applied category theory from some of its leading researchers at the ACT2019 School. It will begin February 18, 2019 and culminate in a meeting in Oxford, July 22–26. Applications are due January 30th; see below for details.

    Applied category theory is a topic of interest for a growing community of researchers, interested in studying systems of all sorts using category-theoretic tools. These systems are found in the natural sciences and social sciences, as well as in computer science, linguistics, and engineering. The background and experience of our community’s members is as varied as the systems being studied.

    The goal of the ACT2019 School is to help grow this community by pairing ambitious young researchers together with established researchers in order to work on questions, problems, and conjectures in applied category theory.

    Who should apply

    Anyone from anywhere who is interested in applying category-theoretic methods to problems outside of pure mathematics. This is emphatically not restricted to math students, but one should be comfortable working with mathematics. Knowledge of basic category-theoretic language—the definition of a monoidal category, for example—is encouraged.

    We will consider advanced undergraduates, PhD students, and post-docs. We ask that you commit to the full program as laid out below.

    Instructions for how to apply can be found below the research topic descriptions.

    Senior research mentors and their topics

    Below is a list of the senior researchers, each of whom describes a research project that their team will pursue, as well as the background reading that will be studied between now and July 2019.

    Miriam Backens

    Title: Simplifying quantum circuits using the ZX-calculus

    Description: The ZX-calculus is a graphical calculus based on the category-theoretical formulation of quantum mechanics. A complete set of graphical rewrite rules is known for the ZX-calculus, but not for quantum circuits over any universal gate set. In this project, we aim to develop new strategies for using the ZX-calculus to simplify quantum circuits.
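    To give a flavour of what a graphical rewrite rule looks like in code, here is a toy sketch (my own illustration, not from the project, and far from a real implementation) of the best-known ZX rule, spider fusion: two adjacent spiders of the same colour merge, and their phases add.

```python
from fractions import Fraction

# Toy spider fusion: spiders are {id: (colour, phase)} with phases as
# multiples of pi, and edges are a set of frozenset pairs.  Two connected
# spiders of the same colour merge into one whose phase is the sum mod 2,
# and the neighbours of the absorbed spider are reconnected to the survivor.
def fuse(spiders, edges, a, b):
    colour_a, phase_a = spiders[a]
    colour_b, phase_b = spiders[b]
    assert colour_a == colour_b and frozenset((a, b)) in edges
    spiders[a] = (colour_a, (phase_a + phase_b) % 2)
    del spiders[b]
    new_edges = set()
    for edge in edges - {frozenset((a, b))}:
        if b in edge:
            other = next(v for v in edge if v != b)
            if other != a:                # drop self-loops created by fusion
                new_edges.add(frozenset((a, other)))
        else:
            new_edges.add(edge)
    return spiders, new_edges
```

    For example, fusing two Z-spiders with phases π/2 and π/4 leaves a single Z-spider with phase 3π/4, connected to the absorbed spider's former neighbours.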

    Background reading:

    1. Matthew Amy, Jianxin Chen, Neil Ross. A finite presentation of CNOT-Dihedral operators.
    2. Miriam Backens. The ZX-calculus is complete for stabiliser quantum mechanics.

    Tobias Fritz

    Title: Partial evaluations, the bar construction, and second-order stochastic dominance

    Description: We all know that 2+2+1+1 evaluates to 6. A less familiar notion is that it can partially evaluate to 5+1. In this project, we aim to study the compositional structure of partial evaluation in terms of monads and the bar construction and see what this has to do with financial risk via second-order stochastic dominance.
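    The arithmetic example can be made concrete with a toy sketch (my own illustration): a partial evaluation sums some groups of summands while leaving the outer sum unevaluated.

```python
# Toy partial evaluation: given a formal sum (a list of summands) and a
# partition of its positions into groups, evaluate each group but leave
# the sum over the groups unevaluated.
def partially_evaluate(summands, groups):
    return [sum(summands[i] for i in group) for group in groups]
```

    Here partially_evaluate([2, 2, 1, 1], [[0, 1, 2], [3]]) gives [5, 1], the partial evaluation of 2+2+1+1 to 5+1, while the one-group partition [[0, 1, 2, 3]] gives the full evaluation [6].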

    Background reading:

    1. Tobias Fritz and Paolo Perrone. Monads, partial evaluations, and rewriting.
    2. Maria Manuel Clementino, Dirk Hofmann, George Janelidze. The monads of classical algebra are seldom weakly cartesian.
    3. Todd Trimble. On the bar construction.

    Pieter Hofstra

    Title: Complexity classes, computation, and Turing categories

    Description: Turing categories form a categorical setting for studying computability without bias towards any particular model of computation. It is not currently clear, however, that Turing categories are useful to study practical aspects of computation such as complexity. This project revolves around the systematic study of step-based computation in the form of stack-machines, the resulting Turing categories, and complexity classes. This will involve a study of the interplay between traced monoidal structure and computation. We will explore the idea of stack machines qua programming languages, investigate the expressive power, and tie this to complexity theory. We will also consider questions such as the following: can we characterize Turing categories arising from stack machines? Is there an initial such category? How does this structure relate to other categorical structures associated with computability?
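    As a minimal illustration of the kind of step-based, stack-machine computation mentioned here (my own toy sketch, not part of the project):

```python
# A toy stack machine: a program is a list of instruction tuples, executed
# one step at a time against a stack; each loop iteration is one "step".
def run(program, stack=None):
    stack = list(stack or [])
    pc = 0                         # program counter
    while pc < len(program):
        op, *args = program[pc]
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "dup":
            stack.append(stack[-1])
        elif op == "jz":           # pop; jump to args[0] if the value is zero
            if stack.pop() == 0:
                pc = args[0]
                continue
        pc += 1
    return stack
```

    For instance, run([("push", 2), ("push", 3), ("add",)]) returns [5].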

    Background reading:

    1. J.R.B. Cockett and P.J.W. Hofstra. Introduction to Turing categories. APAL, Vol 156, pp. 183-209, 2008.
    2. J.R.B. Cockett, P.J.W. Hofstra and P. Hrubes. Total maps of Turing categories. ENTCS (Proc. of MFPS XXX), pp. 129-146, 2014.
    3. A. Joyal, R. Street and D. Verity. Traced monoidal categories. Mat. Proc. Cam. Phil. Soc. 3, pp. 447-468, 1996.

    Bartosz Milewski

    Title: Traversal optics and profunctors

    Description: In functional programming, optics are ways to zoom into a specific part of a given data type and mutate it. Optics come in many flavors such as lenses and prisms and there is a well-studied categorical viewpoint, known as profunctor optics. Of all the optic types, only the traversal has resisted a derivation from first principles into a profunctor description. This project aims to do just this.
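    For readers new to optics, the simplest encoding of a lens is a get/set pair, and composing lenses zooms deeper into nested data. This toy sketch (my own, deliberately not the profunctor encoding the project is after) illustrates the idea:

```python
# A lens as a (get, set) pair; compose(outer, inner) first focuses with the
# outer lens, then with the inner one.  Toy sketch only.
def compose(outer, inner):
    outer_get, outer_set = outer
    inner_get, inner_set = inner
    return (lambda s: inner_get(outer_get(s)),
            lambda s, v: outer_set(s, inner_set(outer_get(s), v)))

# A lens focusing on the first component of a pair.
fst = (lambda p: p[0], lambda p, v: (v, p[1]))
```

    Composing fst with itself focuses on the first component of the first component: getting from ((1, 2), 3) yields 1, and setting it to 9 yields ((9, 2), 3).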

    Background reading:

    1. Bartosz Milewski. Profunctor optics, categorical view.
    2. Craig Pastro, Ross Street. Doubles for monoidal categories.

    Mehrnoosh Sadrzadeh

    Title: Formal and experimental methods to reason about dialogue and discourse using categorical models of vector spaces

    Description: Distributional semantics argues that meanings of words can be represented by the frequency of their co-occurrences in context. A model extending distributional semantics from words to sentences has a categorical interpretation via Lambek’s syntactic calculus or pregroups. In this project, we intend to further extend this model to reason about dialogue and discourse utterances where people interrupt each other, there are references that need to be resolved, disfluencies, pauses, and corrections. Additionally, we would like to design experiments and run toy models to verify predictions of the developed models.
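    The distributional idea, words represented by their co-occurrence counts, can be sketched in a few lines (my own toy illustration, not from the project):

```python
from collections import Counter

# Toy distributional semantics: represent each word by the counts of the
# words that co-occur with it within a small window around each occurrence.
def cooccurrence_vectors(sentences, window=2):
    vectors = {}
    for sentence in sentences:
        words = sentence.split()
        for i, word in enumerate(words):
            context = words[max(0, i - window):i] + words[i + 1:i + 1 + window]
            vectors.setdefault(word, Counter()).update(context)
    return vectors
```

    On the corpus ["the cat sat", "the dog sat"], the vectors for "cat" and "dog" come out identical, the distributional signature of words used in the same contexts.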

    Background reading:

    1. Gerhard Jäger (1998): A multi-modal analysis of anaphora and ellipsis. University of Pennsylvania Working Papers in Linguistics 5(2), p. 2.
    2. Matthew Purver, Ronnie Cann, and Ruth Kempson. Grammars as parsers: meeting the dialogue challenge. Research on Language and Computation, 4(2-3):289–326, 2006.

    David Spivak

    Title: Toward a mathematical foundation for autopoiesis

    Description: An autopoietic organization—anything from a living animal to a political party to a football team—is a system that is responsible for adapting and changing itself, so as to persist as events unfold. We want to develop mathematical abstractions that are suitable to found a scientific study of autopoietic organizations. To do this, we’ll begin by using behavioral mereology and graphical logic to frame a discussion of autopoeisis, most of all what it is and how it can be best conceived. We do not expect to complete this ambitious objective; we hope only to make progress toward it.

    Background reading:

    1. Brendan Fong, David Jaz Myers, David Spivak. Behavioral mereology.
    2. Brendan Fong, David Spivak. Graphical regular logic.
    3. Luhmann. Organization and Decision, CUP. (Preface)

    School structure

    All of the participants will be divided up into groups corresponding to the projects. A group will consist of several students, a senior researcher, and a TA. Between January and June, we will have a reading course devoted to building the background necessary to meaningfully participate in the projects. Specifically, two weeks are devoted to each paper from the reading list. During this two week period, everybody will read the paper and contribute to discussion in a private online chat forum. There will be a TA serving as a domain expert and moderating this discussion. In the middle of the two week period, the group corresponding to the paper will give a presentation via video conference. At the end of the two week period, this group will compose a blog entry on this background reading that will be posted to the n-category cafe.

    After all of the papers have been presented, there will be a two-week visit to Oxford University, 15–26 July 2019. The second week is solely for participants of the ACT2019 School. Groups will work together on research projects, led by the senior researchers.

    The first week of this visit is the ACT2019 Conference, where the wider applied category theory community will arrive to share new ideas and results. It is not part of the school, but there is a great deal of overlap and participation is very much encouraged. The school should prepare students to be able to follow the conference presentations to a reasonable degree.

    To apply

    To apply please send the following to by January 30th, 2019:

    • Your CV
    • A document with:
      • An explanation of any relevant background you have in category theory or any of the specific projects areas
      • The date you completed or expect to complete your Ph.D and a one-sentence summary of its subject matter.
    • Order of project preference
    • To what extent can you commit to coming to Oxford (availability of funding is uncertain at this time)
    • A brief statement (~300 words) on why you are interested in the ACT2019 School. Some prompts:
      • how can this school contribute to your research goals?
      • how can this school help in your career?

    Also have a brief letter of recommendation sent on your behalf, confirming any of the following:

    • your background
    • ACT2019 School’s relevance to your research/career
    • your research experience


    For more information, contact either

    • Daniel Cicala. cicala (at) math (dot) ucr (dot) edu

    • Jules Hedges. julian (dot) hedges (at) cs (dot) ox (dot) ac (dot) uk

    by john at January 05, 2019 10:54 PM

    January 04, 2019

    Jon Butterworth - Life and Physics

    Mile End Road
    I spent most of the past two days in the “Arts 2” building of Queen Mary University of London, on Mile End Road. According to Wikipedia, Mile End was one of the earliest suburbs of London, recorded in 1288 as … Continue reading

    by Jon Butterworth at January 04, 2019 09:18 PM

    Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

    A Christmas break in academia

    There was a time when you wouldn’t catch sight of this academic in Ireland over Christmas – I used to head straight for the ski slopes as soon as term ended. But family commitments and research workloads have put paid to that, at least for a while, and I’m not sure it’s such a bad thing. Like many academics, I dislike being away from the books for too long and there is great satisfaction to be had in catching up on all the ‘deep roller’ stuff one never gets to during the teaching semester.


    The professor in disguise in former times 

    The first task was to get the exam corrections out of the way. This is a job I quite enjoy, unlike most of my peers. I’m always interested to see how the students got on and it’s the only task in academia that usually takes slightly less time than expected. Then it was on to some rather more difficult corrections – putting together revisions to my latest research paper, as suggested by the referee. This is never a quick job, especially as the points raised are all very good and some quite profound. It helps that the paper has been accepted to appear in Volume 8 of the prestigious Einstein Studies series, but this is a task that is taking some time.

    Other grown-up stuff includes planning for upcoming research conferences – two abstracts now in the post, let’s see if they’re accepted. I also spent a great deal of the holidays helping to organize an international conference on the history of physics that will be hosted in Ireland in 2020. I have very little experience in such things, so it’s extremely interesting, if time consuming.

    So there is a lot to be said for spending Christmas at home, with copious amounts of study time uninterrupted by students or colleagues. An interesting bonus is that a simple walk in the park or by the sea seems a million times more enjoyable after a good morning’s swot.  I’ve never really holidayed well and I think this might be why.


    A walk on Dun Laoghaire pier yesterday afternoon

    As for New Year’s resolutions, I’ve taken up Ciara Kelly’s challenge of a brisk 30-minute walk every day. I also took up tennis in a big way a few months ago – now there’s a sport that is a million times more practical in this part of the world than skiing.


    by cormac at January 04, 2019 08:56 PM

    January 03, 2019

    Dmitry Podolsky - NEQNET: Non-equilibrium Phenomena

    Getting the Most Out of Your Solar Hot Water System

    Solar panels and hot water systems are great ways to save some serious cash on your energy bills. You make good use of the sun, which means you are maximizing a natural resource instead of relying on unnatural ones that can harm the Earth in the long run.

    However, solar panel systems should be properly used and maintained to make sure you are getting the most out of them. Many owners do not know how to use their systems properly, which results in a huge waste of energy and money.

    Here, we will talk about the things you can do to make sure you get the most out of your solar panels after you are done with your solar PV installation.

    Make Use of Boiler Timers and Solar Controllers

    Ask your solar panel supplier if they can provide you with boiler timers and solar controllers. These ensure that the backup heating source only heats the water after the sun has heated it to the maximum extent, which is usually late in the afternoon, once the solar panels are no longer directly exposed to the sun.

    You should also see to it that the cylinder has enough cold water for the sun to heat up after you have used up all of the hot water. This is to ensure that you will have hot water to use for the next day, which is especially important if you use hot water in the morning.

    Check the Cylinder and Pipes Insulation

    After having the solar panels and hot water system installed on your home, you should see to it that the cylinder and pipes are properly insulated. Failure to do so will result in inadequate hot water, making the system inefficient.

    Solar panel systems that do not have insulated cylinders will not heat up your water enough, so make sure to ask the supplier and the people handling the installation about this to make the most out of your system.

    Do Not Overfill the Storage

    Man checking hot water system

    Avoid filling the hot water vessel to the brim, as doing so can make the system inefficient. Aside from not getting the water as hot as you want it to be, you risk having the system break down sooner than you expect.

    Ask the supplier or the people installing the system to install a twin coil cylinder. This will allow the solar hot water system to heat up only one section of the coil cylinder, which is usually what the solar collector or thermal store is for.

    In cases wherein the dedicated solar volume is not used, the timing of the backup heating will have a huge impact on the solar hot water system’s performance. This usually happens in systems that do not require the current cylinder to be changed.

    Knowing how to properly use and maintain your solar hot water system is a huge time and money saver. It definitely would not hurt to ask your solar panel supplier and installer the questions that you have in mind. Enjoy your hot water and make sure to have your system checked every once in a while!

    The post Getting the Most Out of Your Solar Hot Water System appeared first on None Equilibrium.

    by Bertram Mortensen at January 03, 2019 06:05 PM

    December 31, 2018

    Jacques Distler - Musings

    Python urllib2 and TLS

    I was thinking about dropping support for TLSv1.0 in this webserver. All the major browser vendors have announced that they are dropping it from their browsers. And you’d think that since TLSv1.2 has been around for a decade, even very old clients ought to be able to negotiate a TLSv1.2 connection.

    But, when I checked, you can imagine my surprise to find that this webserver receives a ton of TLSv1.0 connections… including from the application that powers Planet Musings. Yikes!

    The latter is built around the Universal Feed Parser, which uses the standard Python urllib2 to negotiate the connection. And therein lay the problem …

    At least in its default configuration, urllib2 won’t negotiate anything higher than a TLSv1.0 connection. And, sure enough, that’s a problem:

    ERROR:planet.runner:Error processing
    ERROR:planet.runner:URLError: <urlopen error [SSL: TLSV1_ALERT_PROTOCOL_VERSION] tlsv1 alert protocol version (_ssl.c:590)>
    ERROR:planet.runner:Error processing
    ERROR:planet.runner:URLError: <urlopen error [Errno 54] Connection reset by peer>
    ERROR:planet.runner:Error processing
    ERROR:planet.runner:URLError: <urlopen error EOF occurred in violation of protocol (_ssl.c:590)>

    Even if I’m still supporting TLSv1.0, others have already dropped support for it.

    Now, you might find it strange that urllib2 defaults to a TLSv1.0 connection, when it’s certainly capable of negotiating something more secure (whatever OpenSSL supports). But, prior to Python 2.7.9, urllib2 didn’t even check the server’s SSL certificate. Any encryption was bogus (wide open to a MiTM attack). So why bother negotiating a more secure connection?

    Switching from the system Python to Python 2.7.15 (installed by Fink) yielded a slew of

    ERROR:planet.runner:URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:726)>

    errors. Apparently, no root certificate file was getting loaded.

    The solution to both of these problems turned out to be:

    --- a/feedparser/
    +++ b/feedparser/
    @@ -5,13 +5,15 @@ import gzip
     import re
     import struct
     import zlib
    +import ssl
    +import certifi
     try:
         import urllib.parse
         import urllib.request
     except ImportError:
         from urllib import splithost, splittype, splituser
    -    from urllib2 import build_opener, HTTPDigestAuthHandler, HTTPRedirectHandler, HTTPDefaultErrorHandler, Request
    +    from urllib2 import build_opener, HTTPSHandler, HTTPDigestAuthHandler, HTTPRedirectHandler, HTTPDefaultErrorHandler, Request
         from urlparse import urlparse
         class urllib(object):
    @@ -170,7 +172,9 @@ def get(url, etag=None, modified=None, agent=None, referrer=None, handlers=None,
         # try to open with urllib2 (to use optional headers)
         request = _build_urllib2_request(url, agent, ACCEPT_HEADER, etag, modified, referrer, auth, request_headers)
    -    opener = urllib.request.build_opener(*tuple(handlers + [_FeedURLHandler()]))
    +    context = ssl.SSLContext(ssl.PROTOCOL_TLS)
    +    context.load_verify_locations(cafile=certifi.where())
    +    opener = urllib.request.build_opener(*tuple(handlers + [HTTPSHandler(context=context)] + [_FeedURLHandler()]))
         opener.addheaders = [] # RMK - must clear so we only send our custom User-Agent
         f =
         data =

    Actually, the certifi lines aren’t strictly necessary. As long as you set an ssl.SSLContext(), a suitable set of root certificates gets loaded. But, honestly, I don’t trust the internals of urllib2 to do the right thing anymore, so I want to make sure that a well-curated set of root certificates is used.
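    For comparison, on a current Python 3 the standard library alone handles this correctly; a minimal sketch (ssl.create_default_context() loads the system root store and enables both certificate and hostname checking):

```python
import ssl
import urllib.request

# ssl.create_default_context() turns on certificate verification and
# hostname checking, and lets OpenSSL negotiate the best TLS version
# both ends support (TLSv1.2 or TLSv1.3 on a modern stack).
context = ssl.create_default_context()
opener = urllib.request.build_opener(
    urllib.request.HTTPSHandler(context=context))
# feeds would then be fetched via, url)
```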

    With these changes, Venus negotiates a TLSv1.3 connection. Yay!

    Now, if only everyone else would update their Python scripts …


    Update:

    This article goes some of the way towards explaining the brokenness of Python’s TLS implementation on MacOSX. But only some of the way …

    Update 2:

    Another offender turned out to be the very application (MarsEdit 3) that I used to prepare this post. Upgrading to MarsEdit 4 was a bit of a bother. Apple’s App-sandboxing prevented my Markdown+itex2MML text filter from working. One is no longer allowed to use IPC::Open2 to pipe text through the commandline itex2MML. So I had to create a Perl Extension Module for itex2MML. Now there’s a MathML::itex2MML module on CPAN to go along with the Rubygem.

    by distler at December 31, 2018 06:12 AM

    December 29, 2018

    Dmitry Podolsky - NEQNET: Non-equilibrium Phenomena

    Nature Drive: Is an Electric Car Worth Buying?

    Electric Vehicles (EVs) are seen as the future of the automotive industry. With sales projected to reach 30 million vehicles by 2030, electric cars are slowly but surely taking over their market. The EV poster boy, the Tesla Model S, is a consistent frontrunner in luxury car sales. However, there are still doubts about the electric car’s environmental benefits.

    Established names like General Motors, Audi, and Nissan are all hopping on the electric vehicle wave. Competition has made EVs more attractive to the public. This is so in spite of threats from the government to cut federal tax credits on electric cars. Fluctuating prices for battery components like graphite may also be a concern. Some states in the US like California and New York plan on banning the sale of cars with internal combustion by 2050. Should you take the leap to go full electric?


    The Tesla Model S starts at $75,700 and the SUV Model X at $79,500, but there are many more affordable options for your budget. The 2018 Ford Focus Electric, Hyundai Ioniq Electric, and Nissan Leaf start well under $30,000. Tesla even has the $35,000 Model 3, for those who want to experience the brand’s offerings at a lower price.

    The Chevrolet Bolt EV ($36,620) is also a favorite among those who want to make use of the $7,500 tax credit. The tax credit brings the Bolt EV’s price into the sub $30,000 range.

    EVs still cost more than their gasoline-powered counterparts up front. The regular 2018 Ford Focus starts at $18,825, about $10,000 cheaper than its electric sibling. Even if this is the case, electric cars still cost less to fuel.

    Charging Options

    EV charging station

    EV charging has three levels:

    • Level one uses your wall outlet to charge. Most electric cars come with level 1 chargers that you can plug into the nearest socket. This is the slowest way to charge your EV. You’ll have to leave it charging overnight to top it up.
    • Level two is what you would commonly find on public charging stations. It’s faster than a level 1 charger, taking about three to eight hours to recharge. You can also have a level 2 charger installed in your home with a permit and the help of an electrician.
    • Level three or DC Fast Charge (DCFC) stations are usually found in public as well. DCFCs can fully charge a vehicle in the span of 20 minutes to one hour.

    There are currently 23,809 electric vehicle charging stations in the USA and Canada. Some argue that this number is meager compared to the 168,000 gas stations in the same area. Loren McDonald from CleanTechnica says this isn’t really a problem, since electric vehicles still make up less than 0.29% of the automobiles in the US.

    McDonald also argued that most of the charging would be done at home. There are still continuous efforts to build charging stations to suit the needs of electric car users across the country.

    The Bumpy Road Ahead

    Despite its promise of a greener drive for everyone, electric cars have received their fair share of scrutiny from environmentalists, as well. The Fraunhofer Institute of Building Physics stated that the energy needed to make an electric vehicle is more than double what it takes to make a conventional one because of its battery.

    The International Council on Clean Transportation, however, says that battery manufacturing emissions may be similar to the ones from internal combustion engine manufacturing. The only difference is that electric cars don’t produce as much greenhouse gases as conventional ones do in the long run. The ICCT also says that with efforts to reduce the use of carbon in power sources, emissions from battery manufacturing will decrease by around 17%.

    Electric vehicles are becoming more accessible. Manufacturers are creating electric versions of existing models. They’re even sponsoring electric charging stations around the country. With moves to use cleaner energy in manufacturing, it only makes sense to switch. You can do your part now and get yourself an EV with more affordable options available.

    It also makes sense to wait for more competition to drive prices down if you don’t have the cash now. Either way, it’s not a matter of “if” but “when” you’ll switch to an EV for the greater good.

    The post Nature Drive: Is an Electric Car Worth Buying? appeared first on None Equilibrium.

    by Nonequilibrium at December 29, 2018 01:00 AM

    December 27, 2018

    Dmitry Podolsky - NEQNET: Non-equilibrium Phenomena

    Fluid Chromatography: How It Prevents Drug Contamination

    Many Americans take medications on a daily basis. A survey by Consumer Reports reveals that more than half of the population takes a prescribed drug, and among this group, a good number take more than three. Others take prescribed medications along with over-the-counter medicines, vitamins, and other supplements.

    As Americans get older and more drugs become available to manage or cure diseases, this percentage of consumers is likely to increase. That trend raises an underrated medical concern: how likely is drug contamination?

    How Common Is Drug Contamination?

    Drug contamination incidents do not happen all the time; in fact, they tend to be rare, because companies implement strict process regulations and quality control. On the downside, when they do occur, the implications are severe.

    This problem can happen in different ways. One of these is tampering. Many people recall the Chicago Tylenol murders of 1982, in which seven people died after ingesting acetaminophen laced with potassium cyanide.

    It can also happen at the manufacturing level. A good example is the 2009 contamination at a Sanofi Genzyme plant in Massachusetts. The manufacturer detected a virus in one of the bioreactors used to produce Cerezyme, a drug used to treat type 1 Gaucher disease. Although the virus could not infect humans, it impaired the viability of the production cells.

    Due to this incident, the company had to write off close to $30 million worth of products and lose over $300 million in revenue. Since it’s also one of the few manufacturers of medication for this disease, the shutdown led to a supply shortage for more than a year.

    Using Fluid Chromatography as a Solution

    To ensure the viability and safety of medications, companies provide supercritical and high-performance fluid chromatography.

    Chromatography is the process of breaking down a product, such as a drug, into its components. In fluid or liquid chromatography, dissolved molecules or ions separate depending on how they interact with the mobile and the stationary phases.
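As a toy illustration of that separation principle (illustrative numbers only, not real chromatography software), the textbook retention relation says that a compound which partitions more strongly into the stationary phase elutes later:

```python
def retention_time(t0, K, vs_over_vm):
    """Textbook relation t_R = t0 * (1 + k), where the retention factor
    k = K * (Vs/Vm) grows with the partition coefficient K."""
    k = K * vs_over_vm            # vs_over_vm = Vs/Vm, the phase-volume ratio
    return t0 * (1 + k)

# Two hypothetical compounds on the same column (t0 = 2.0 min, Vs/Vm = 0.1):
drug     = retention_time(2.0, 25.0, 0.1)   # elutes around 7 minutes
impurity = retention_time(2.0, 40.0, 0.1)   # elutes around 10 minutes
# Because the two species emerge at different times, a trace contaminant
# shows up as its own separate peak in the chromatogram.
```

The partition coefficients and column parameters above are invented for the sketch; the point is only that different interaction strengths translate into different elution times, which is what lets analysts resolve a drug from its contaminants.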

    In turn, the pharmaceutical analysts can determine the level of purity of the drug as well as the presence of minute traces of contaminants. Besides quality control, the technique can enable scientists to find substances that can be helpful in future research.

    The level of accuracy of this test is already high, thanks to developments in the equipment. Still, it may not detect all types of compounds, due to factors such as thermal instability.

    Newer or modern types of machines can have programmable settings. These allow the users to set the temperatures to extremely high levels or subzero conditions. They can also have features that will enable users to replicate the test many times at the same level of consistency.

    It takes more than chromatography to prevent the contamination of medications, especially during manufacturing. It also requires high-quality control standards and strict compliance with industry protocols. Chromatography, though, is one useful way to safeguard the health of drug consumers in the country.


    by Nonequilibrium at December 27, 2018 07:27 PM

    December 25, 2018

    The n-Category Cafe

    Category Theory 2019

    As announced here previously, the major annual category theory meeting is taking place next year in Edinburgh, on 7-13 July. And after a week in the city of Arthur Conan Doyle, James Clerk Maxwell, Dolly the Sheep and the Higgs Boson, you can head off to Oxford for Applied Category Theory 2019.

    We’re now pleased to advertise our preliminary list of invited speakers, together with key dates for others who’d like to give talks.

    The preliminary Invited Speakers include three of your Café hosts, and are as follows:

    • John Baez (Riverside)
    • Neil Ghani (Strathclyde)
    • Marco Grandis (Genoa)
    • Simona Paoli (Leicester)
    • Emily Riehl (Johns Hopkins)
    • Mike Shulman (San Diego)
    • Manuela Sobral (Coimbra)

    Further invited speakers are to be confirmed.

    Contributed talks   We are offering an early round of submissions and decisions to allow for those who need an early decision (e.g. for funding purposes) or want preliminary feedback for a possible resubmission. The timetable is as follows:

    • Early submission opens: January 1
    • Early submission deadline: March 1
    • Early decision notifications: April 1

    For those who don’t need an early decision:

    • Submission opens: March 1
    • Submission deadline: May 1
    • Notifications: June 1

    Submission for CT2019 is handled by EasyChair through the link

    In order to submit, you will need to make an EasyChair account, which is a simple process. Submissions should be in the form of a brief (one page) abstract.

    Registration is independent of abstract submission and will be open at a later date.

    by leinster at December 25, 2018 05:12 PM

    The n-Category Cafe

    HoTT 2019

    The first International Conference on Homotopy Type Theory, HoTT 2019, will take place from August 12th to 17th, 2019 at Carnegie Mellon University in Pittsburgh, USA. Here is the organizers’ announcement:

    The invited speakers will be:

    • Ulrik Buchholtz (TU Darmstadt, Germany)
    • Dan Licata (Wesleyan University, USA)
    • Andrew Pitts (University of Cambridge, UK)
    • Emily Riehl (Johns Hopkins University, USA)
    • Christian Sattler (University of Gothenburg, Sweden)
    • Karol Szumilo (University of Leeds, UK)

    Submissions of contributed talks will open in January and conclude in March; registration will open sometime in the spring.

    There will also be an associated Homotopy Type Theory Summer School in the preceding week, August 7th to 10th.

    The topics and instructors are:

    • Cubical methods: Anders Mortberg
    • Formalization in Agda: Guillaume Brunerie
    • Formalization in Coq: Kristina Sojakova
    • Higher topos theory: Mathieu Anel
    • Semantics of type theory: Jonas Frey
    • Synthetic homotopy theory: Egbert Rijke

    We expect some funding to be available for students to attend the summer school and conference.

    Looking forward to seeing you in Pittsburgh!

    The scientific committee consists of:

    • Steven Awodey
    • Andrej Bauer
    • Thierry Coquand
    • Nicola Gambino
    • Peter LeFanu Lumsdaine
    • Michael Shulman, chair

    by john at December 25, 2018 06:33 AM

    The n-Category Cafe

    Monads and Lawvere Theories

    guest post by Jade Master

    I have a question about the relationship between Lawvere theories and monads.

    Every morphism of Lawvere theories $f \colon T \to T'$ induces a morphism of monads $M_f \colon M_T \Rightarrow M_{T'}$, which can be calculated by using the universal property of the coend formula for $M_T$. (This can be found in Hyland and Power’s paper Lawvere theories and monads.)
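For reference, the coend formula in question (in the conventions of Hyland and Power, writing $T(n,1)$ for the set of $n$-ary operations of the theory) expresses the monad on Set roughly as

```latex
M_T(X) \;\cong\; \int^{n} T(n,1) \times X^{n}
```

so a morphism of theories $f \colon T \to T'$ acts on the $T(n,1)$ factor, and the universal property of the coend then produces the components of $M_f$.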

    On the other hand, $f \colon T \to T'$ gives a functor $f^\ast \colon Mod(T') \to Mod(T)$ given by precomposition with $f$. Because everything is nice enough, $f^\ast$ always has a left adjoint $f_\ast \colon Mod(T) \to Mod(T')$. (Details of this can be found in Toposes, Triples and Theories.)

    My question is the following:

    What relationship is there between the left adjoint $f_\ast \colon Mod(T) \to Mod(T')$ and the morphism of monads $M_f \colon M_T \Rightarrow M_{T'}$ computed using coends?

    In the examples I can think of, the components of $M_f$ are given by the unit of the adjunction between $f^\ast$ and $f_\ast$, but I cannot find a reference explaining this. It doesn’t seem to be in Toposes, Triples, and Theories.

    by john at December 25, 2018 06:21 AM

    December 22, 2018

    Alexey Petrov - Symmetry factor

    David vs. Goliath: What a tiny electron can tell us about the structure of the universe
    File 20181128 32230 mojlgr.jpg?ixlib=rb 1.1An artist’s impression of electrons orbiting the nucleus.
    Roman Sigaev/

    Alexey Petrov, Wayne State University

    What is the shape of an electron? If you recall pictures from your high school science books, the answer seems quite clear: an electron is a small ball of negative charge that is smaller than an atom. This, however, is quite far from the truth.

    A simple model of an atom, with a nucleus made of protons, which have a positive charge, and neutrons, which are neutral. The electrons, which have a negative charge, orbit the nucleus.
    Vector FX /

    The electron is commonly known as one of the main components of atoms making up the world around us. It is the electrons surrounding the nucleus of every atom that determine how chemical reactions proceed. Their uses in industry are abundant: from electronics and welding to imaging and advanced particle accelerators. Recently, however, a physics experiment called Advanced Cold Molecule Electron EDM (ACME) put an electron on the center stage of scientific inquiry. The question that the ACME collaboration tried to address was deceptively simple: What is the shape of an electron?

    Classical and quantum shapes?

    As far as physicists currently know, electrons have no internal structure – and thus no shape in the classical meaning of this word. In the modern language of particle physics, which tackles the behavior of objects smaller than an atomic nucleus, the fundamental blocks of matter are continuous fluid-like substances known as “quantum fields” that permeate the whole space around us. In this language, an electron is perceived as a quantum, or a particle, of the “electron field.” Knowing this, does it even make sense to talk about an electron’s shape if we cannot see it directly in a microscope – or any other optical device for that matter?

    To answer this question we must adapt our definition of shape so it can be used at incredibly small distances, or in other words, in the realm of quantum physics. Seeing different shapes in our macroscopic world really means detecting, with our eyes, the rays of light bouncing off different objects around us.

    Simply put, we define shapes by seeing how objects react when we shine light onto them. While this might be a weird way to think about the shapes, it becomes very useful in the subatomic world of quantum particles. It gives us a way to define an electron’s properties such that they mimic how we describe shapes in the classical world.

    What replaces the concept of shape in the micro world? Since light is nothing but a combination of oscillating electric and magnetic fields, it would be useful to define quantum properties of an electron that carry information about how it responds to applied electric and magnetic fields. Let’s do that.

    This is the apparatus the physicists used to perform the ACME experiment.
    Harvard Department of Physics, CC BY-NC-SA

    Electrons in electric and magnetic fields

    As an example, consider the simplest property of an electron: its electric charge. It describes the force – and ultimately, the acceleration the electron would experience – if placed in some external electric field. A similar reaction would be expected from a negatively charged marble – hence the “charged ball” analogy of an electron that is in elementary physics books. This property of an electron – its charge – survives in the quantum world.

    Likewise, another “surviving” property of an electron is called the magnetic dipole moment. It tells us how an electron would react to a magnetic field. In this respect, an electron behaves just like a tiny bar magnet, trying to orient itself along the direction of the magnetic field. While it is important to remember not to take those analogies too far, they do help us see why physicists are interested in measuring those quantum properties as accurately as possible.

    What quantum property describes the electron’s shape? There are, in fact, several of them. The simplest – and the most useful for physicists – is the one called the electric dipole moment, or EDM.

    In classical physics, EDM arises when there is a spatial separation of charges. An electrically charged sphere, which has no separation of charges, has an EDM of zero. But imagine a dumbbell whose weights are oppositely charged, with one side positive and the other negative. In the macroscopic world, this dumbbell would have a non-zero electric dipole moment. If the shape of an object reflects the distribution of its electric charge, it would also imply that the object’s shape would have to be different from spherical. Thus, naively, the EDM would quantify the “dumbbellness” of a macroscopic object.
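In symbols (standard classical electrostatics, not anything specific to the article): the electric dipole moment of a set of point charges $q_i$ at positions $\vec{r}_i$ is

```latex
\vec{p} \;=\; \sum_i q_i \, \vec{r}_i
```

so for the dumbbell with charges $+q$ and $-q$ a distance $d$ apart, $|\vec{p}| = q d$, while for the uniformly charged sphere the contributions cancel and $\vec{p} = 0$.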

    Electric dipole moment in the quantum world

    The story of EDM, however, is very different in the quantum world. There the vacuum around an electron is not empty and still. Rather it is populated by various subatomic particles zapping into virtual existence for short periods of time.

    The Standard Model of particle physics has correctly predicted all of these particles. If the ACME experiment discovered that the electron had an EDM, it would suggest there were other particles that had not yet been discovered.

    These virtual particles form a “cloud” around an electron. If we shine light onto the electron, some of the light could bounce off the virtual particles in the cloud instead of the electron itself.

    This would change the numerical values of the electron’s charge and magnetic and electric dipole moments. Performing very accurate measurements of those quantum properties would tell us how these elusive virtual particles behave when they interact with the electron and if they alter the electron’s EDM.

    Most intriguing, among those virtual particles there could be new, unknown species of particles that we have not yet encountered. To see their effect on the electron’s electric dipole moment, we need to compare the result of the measurement to theoretical predictions of the size of the EDM calculated in the currently accepted theory of the Universe, the Standard Model.

    So far, the Standard Model accurately described all laboratory measurements that have ever been performed. Yet, it is unable to address many of the most fundamental questions, such as why matter dominates over antimatter throughout the universe. The Standard Model makes a prediction for the electron’s EDM too: it requires it to be so small that ACME would have had no chance of measuring it. But what would have happened if ACME actually detected a non-zero value for the electric dipole moment of the electron?

    View of the Large Hadron Collider in its tunnel near Geneva, Switzerland. In the LHC two counter-rotating beams of protons are accelerated and forced to collide, generating various particles.
    AP Photo/KEYSTONE/Martial Trezzini

    Patching the holes in the Standard Model

    Theoretical models have been proposed that fix shortcomings of the Standard Model, predicting the existence of new heavy particles. These models may fill in the gaps in our understanding of the universe. To verify such models we need to prove the existence of those new heavy particles. This could be done through large experiments, such as those at the international Large Hadron Collider (LHC) by directly producing new particles in high-energy collisions.

    Alternatively, we could see how those new particles alter the charge distribution in the “cloud” and thereby the electron’s EDM. Thus, an unambiguous observation of the electron’s dipole moment in the ACME experiment would prove that new particles are in fact present. That was the goal of the ACME experiment.

    This is the reason why a recent article in Nature about the electron caught my attention. Theorists like myself use measurements of the electron’s EDM – along with measurements of the properties of other elementary particles – to help identify new particles and predict how they can be better studied. This is done to clarify the role of such particles in our current understanding of the universe.

    What should be done to measure the electric dipole moment? We need to find a source of very strong electric field to test an electron’s reaction. One possible source of such fields can be found inside molecules such as thorium monoxide. This is the molecule that ACME used in their experiment. Shining carefully tuned lasers at these molecules, a reading of an electron’s electric dipole moment could be obtained, provided it is not too small.

    However, as it turned out, it is. Physicists of the ACME collaboration did not observe the electric dipole moment of an electron – which suggests that its value is too small for their experimental apparatus to detect. This fact has important implications for our understanding of what we could expect from the Large Hadron Collider experiments in the future.

    Interestingly, the fact that the ACME collaboration did not observe an EDM actually rules out the existence of heavy new particles that could have been easiest to detect at the LHC. This is a remarkable result for a tabletop-sized experiment that affects both how we would plan direct searches for new particles at the giant Large Hadron Collider, and how we construct theories that describe nature. It is quite amazing that studying something as small as an electron could tell us a lot about the universe.

    A short animation describing the physics behind EDM and ACME collaboration’s findings.

    Alexey Petrov, Professor of Physics, Wayne State University

    This article is republished from The Conversation under a Creative Commons license. Read the original article.


    by apetrov at December 22, 2018 08:51 PM

    December 21, 2018

    Dmitry Podolsky - NEQNET: Non-equilibrium Phenomena

    Chiropractic Marketing: 5 Ways to Earn More Online Leads

    Every business requires a steady stream of clients to succeed. The world now does everything online, from taking classes and shopping to finding services. This has driven the rise of digital marketing, which boosts sales by targeting online customers.

    While you can attract local clients by offering services such as free spinal cord exams, online patients will remain unreached. As a result, digital marketing is crucial. If you have not started marketing digitally, here are five strategies to generate more leads online.

    1. Website Design

    For you to reach potential clients and create converting leads, you ought to have a platform on which to do it. Websites are, therefore, essential in digital marketing for chiropractors in Gilbert, Arizona.

    Your website is your main marketing tool in reaching the online audience. It gives you an online presence. And, it will only make sense to invest in a good website — one that is visually appealing while at the same time functions well.

    The website design should be suitable enough to attract your target audience and make you stand out from your competitors.

    2. Search Engine Optimization (SEO)

    SEO is an effective marketing strategy that helps clients to find you by searching on Google. One of the significant factors in conducting SEO is the use of relevant keywords that patients use when entering queries on search engines.

    Creating appropriate content will also go a long way toward gaining you a favorable ranking. You don’t have to limit your topics to your products and services. Talk about trends in your industry, too – anything that will be relevant to your target audience.

    3. Social Media

    Social media platforms are a good place to start building your online presence. People are always seeking recommendations, and social media offers an opportunity to get referrals. Some people even get the information they need from social media now, not search engines.

    Brand your profiles in a similar way to boost recognition and share content regularly. Add social media buttons to your website to encourage people to share. Another tip is to create a social media personality for your brand that is approachable and fun, so people can trust your business.

    4. Online Directories

    It is not enough to have a website and social media platforms. You need to be on online directories as well. Remember that it is in your best interest to cover as much digital ground as possible.

    Popular online directories such as Google, Yahoo! and Yelp list businesses. The listings enable people to search for businesses based on location. They can also read reviews left by other clients. Ensure that all your information is filled in accurately, and work toward getting many positive reviews.

    5. Mobile-Friendly Site

    Most people use mobile devices to access the internet. Make your website mobile-friendly for easy viewing and navigation. Besides, Google gives better rankings to mobile-friendly websites.

    By optimizing your website using SEO and social media among other strategies, you can earn more online leads and boost the success of your business.


    by Bertram Mortensen at December 21, 2018 02:15 PM

    December 13, 2018

    Axel Maas - Looking Inside the Standard Model

    The size of the W
    As discussed in an earlier entry we set out to measure the size of a particle: The W boson. We have now finished this, and published a paper about our results. I would like to discuss these results a bit in detail.

    This project was motivated because we think that the W (and its sibling, the Z boson) is actually more complicated than usually assumed. We think that it may have a self-similar structure. The bits and pieces of this are quite technical. But the outline is the following: what we see and measure as a W at, say, the LHC or earlier experiments is actually not a point-like particle, although that is currently the most common view. But science has always been about changing the common ideas and replacing them with something new and better. So, our idea is that the W has a substructure. This substructure is a bit weird, because it is not made from additional elementary particles. It rather looks like a bubbling mess of quantum effects. Thus, we do not expect that we can isolate anything which resembles a physical particle within the W. And if we try to isolate something, we should not expect it to behave as a particle.

    Thus, this scenario gives two predictions. One: Substructure needs to have space somewhere. Thus, the W should have a size. Two: Anything isolated from it should not behave like a particle. To test both ideas in the same way, we decided to look at the same quantity: The radius. Hence, we simulated a part of the standard model. Then we measured the size of the W in this simulation. Also, we tried to isolate the most particle-like object from the substructure, and also measured its size. Both of these measurements are very expensive in terms of computing time. Thus, our results are rather exploratory. Hence, we cannot yet regard what we found as final. But at least it gives us some idea of what is going on.

    The first thing is the size of the W. Indeed, we find that it has a size, and one which is not too small either. The number itself, however, is far less certain. The reason for this is twofold. On the one hand, we have only a part of the standard model in our simulations. On the other hand, we see artifacts. They come from the fact that our simulations can only describe some finite part of the world. The larger this part is, the more expensive the calculation. With what we had available, this part seems to be still so small that the W is big enough to 'bounce off the walls' fairly often. Thus, our results still show a dependence on the size of this part of the world. Though we try to account for this, it still leaves a sizable uncertainty in the final result. Nonetheless, the qualitative feature that the W has a significant size remains.

    The other thing is the would-be constituents. We can indeed identify some kind of lumps of quantum fluctuations inside. But they do not behave like a particle, not even remotely. Especially, when trying to measure their size, we find that the square of their radius is negative! Even though the final value is still uncertain, this is nothing a real particle should have, because taking the square root of such a negative quantity to get the actual radius yields an imaginary number. That is an abstract quantity which, while not identifiable with anything in everyday life, has a well-defined mathematical meaning. In the present case, it means this lump is nonphysical, as if you were trying to upend a hole. Thus, this mess is really not a particle at all, in any conventional sense of the word. Still, what we could get from this is that such lumps - even though they are not really lumps - 'live' only in areas of our W much smaller than the W's size. So, at least they are contained, and let the W be the well-behaved particle it is.
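That last step is just complex arithmetic. A quick sketch, with an invented number standing in for the actual fit result:

```python
import cmath

# A hypothetical fit result for the mean-square radius of the would-be
# constituent: negative, which no genuine particle should have.
r_squared = -0.25                  # illustrative value, arbitrary units

radius = cmath.sqrt(r_squared)     # a purely imaginary "radius", 0.5j here

# An imaginary radius has no interpretation as a physical length, which is
# the sense in which the lump is not a particle.
print(radius)
```

The value -0.25 is made up purely for illustration; only the sign matters for the argument.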

    So, the bottom line is, our simulations agreed with our ideas. That is good. But it is not enough. After all, who can tell if what we simulate is actually the thing happening in nature? So, we will need an experimental test of this result. This is surprisingly complicated. After all, you cannot really get a measure stick to get the size of a particle. Rather, what you do is, you throw other particles at them, and then see how much they are deflected. At least in principle.

    Can this be done for the W? Yes, but only very indirectly. Essentially, it could work as follows: take the LHC, at which two protons are smashed into each other. In this smashing, it is possible that a Z boson is produced, which scatters off a W. So, you 'just' need to look at the W before and after. In practice, this is more complicated. Since we cannot send a W in there to hit the Z, we use the fact that mathematically this process is related to another one: if we get one, we get the other for free. This other process is that the produced Z, together with a lot of kinetic energy, decays into two W particles. These are then detected, and their directions measured.

    As nice as this sounds, this is still horrendously complicated. The problem is that the Ws themselves decay into some leptons and neutrinos before they reach the actual detector. And because neutrinos escape essentially always undetected, one can only indirectly infer what has been going on. Especially the directions of the Ws cannot easily be reconstructed. Still, in principle it should be possible, and we discuss this in our paper. So we can actually measure this size in principle. It will be now up to the experimental experts if it can - and will - be done in practice.

    by Axel Maas at December 13, 2018 04:15 PM

    December 07, 2018

    Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

    The last day of term

    There is always a great sense of satisfaction on the last day of the teaching semester. That great moment on a Friday afternoon when the last lecture is over, the last presentation is marked, and the term’s teaching materials can be transferred from briefcase to office shelf. I’m always tempted to jump in the car, and drive around the college carpark beeping madly. Of course there is the small matter of marking, from practicals, assignments and assessments to the end-of-semester exams, but that’s a very different activity!

    Image result for waterford institute of technology

    The last day of term at WIT

    For me, the semesterisation of teaching is one of the best aspects of life as an academic. I suppose it’s the sense of closure, of things finished – so different from research, where one paper just leads to another in a never-ending cycle. There never seems to be a good moment for a pause in the world of research, just a ton of papers I would like to write if I had the time.

    In recent years, I’ve started doing a brief tally of research outputs at the end of each semester. Today, the tally is 1 book chapter, 1 journal article, 2 conference presentations and 1 magazine article (plus 2 newspaper columns). All seems ok until I remember that most of this material was in fact written over the summer. On reflection, the semester’s ‘research’ consisted of carrying out corrections to the articles above and preparing slides for conferences.

    The reason for this is quite simple – teaching. On top of my usual lecturing duties, I had to prepare and deliver a module in 4th-year particle physics this term. It was a very interesting experience and I learnt a lot, but preparing the module took up almost every spare moment of my time, nuking any chances of doing any meaningful research during the teaching term. And now I hear that I will be involved in the delivery of yet another new module next semester, oh joy.

    This has long been my problem with the Institutes of Technology. With contact hours set at a minimum of 16 hours/week, there is simply far too much teaching (a situation that harks back to a time when lecturers taught to Diploma level only). While the high-ups in education in our capital city make noises about the importance of research and research-led teaching, they refuse to countenance any change in this for research-active staff in the IoTs. If anything, one has the distinct impression everyone would much rather we didn’t bother.  I don’t expect this situation to change anytime soon  – in all the talk about technological universities, I have yet to hear a single mention of new lecturer contracts.


    by cormac at December 07, 2018 06:09 PM

    Clifford V. Johnson - Asymptotia

    Physics Plans

    After a conversation over coffee with one of the event planners over at the Natural History Museum, I had an idea and wandered over to talk to Angella Johnson (no relation), our head of demo labs. Within seconds we were looking at some possible props I might use in an event at the NHM in February. Will tell you more about it later!

    -cvj Click to continue reading this post

    The post Physics Plans appeared first on Asymptotia.

    by Clifford at December 07, 2018 05:01 AM

    November 30, 2018

    Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

    Is science influenced by politics?

    “Most scientists and historians would agree that Einstein’s quest was driven by scientific curiosity.” Photograph: Getty Images

    “Science is always political,” asserted a young delegate at an international conference on the history of physics earlier this month. It was a very enjoyable meeting, but I noticed the remark caused a stir among many of the physicists in the audience.

    In truth, the belief that the practice of science is never entirely free of politics has been a steady theme of historical scholarship for some years now, as can be confirmed by a glance at any scholarly journal on the history of science. At a conference specifically designed to encourage interaction between scientists, historians and sociologists of science, it was interesting to see a central tenet of modern scholarship openly questioned.

    Famous debate

    Where does the idea come from? A classic example of the hypothesis can be found in the book Leviathan and the Air-Pump by Steven Shapin and Simon Schaffer. In this highly influential work, the authors considered the influence of the politics of the English civil war and the restoration on the famous debate between scientist Robert Boyle and philosopher Thomas Hobbes concerning the role of experimentation in science. More recently, many American historians of science have suggested that much of the success of 20th century American science, from aeronautics to particle physics, was driven by the politics of the cold war.

    Similarly, there is little question that CERN, the famous inter-European particle physics laboratory at Geneva, was constructed to stem the brain-drain of European physicists to the United States after the second World War. CERN has proved itself many times over as an outstanding example of successful international scientific collaboration, although Ireland has yet to join.

    But do such examples imply that science is always influenced by politics? Some scientists and historians doubt this assertion. While one can see how a certain field or technology might be driven by national or international political concerns, the thesis seems less tenable when one considers basic research. In what way is the study of the expanding universe influenced by politics? Surely the study of the elementary particles is driven by scientific curiosity?


    In addition, it is difficult to definitively prove a link between politics and a given scientific advance – such assertions involve a certain amount of speculation. For example, it is interesting to note that many of the arguments in Leviathan have been seriously questioned, although these criticisms have not received the same attention as the book itself.

    That said, few would deny that research into climate science in the United States suffered many setbacks during the presidency of George W Bush, and a similar situation pertains now. But the findings of American climate science are no less valid than they were at any other time, and the international character of scientific enquiry ensures a certain objectivity and continuity of research. Put bluntly, there is no question that resistance to the findings of climate science is often politically motivated, but there is little evidence that climate science itself is political.

    Another factor concerns the difference between the development of a given field and the dawning of an entirely new field of scientific inquiry. In a recent New York Times article titled “How politics shaped general relativity”, the American historian of science David Kaiser argued convincingly for the role played by national politics in the development of Einstein’s general theory of relativity in the United States. However, he did not argue that politics played a role in the original gestation of the theory – most scientists and historians would agree that Einstein’s quest was driven by scientific curiosity.

    All in all, I think there is a danger of overstating the influence of politics on science. While national and international politics have an impact on every aspect of our lives, the innate drive of scientific progress should not be overlooked. Advances in science are generally propelled by the engine of internal logic, by observation, hypothesis and theory-testing. No one is immune from political upheaval, but science has a way of weeding out incorrect hypotheses over time.

    Cormac O’Raifeartaigh lectures in physics at Waterford Institute of Technology and is a visiting associate professor at University College Dublin

    by cormac at November 30, 2018 04:43 PM

    November 22, 2018

    Sean Carroll - Preposterous Universe


    This year we give thanks for an historically influential set of celestial bodies, the moons of Jupiter. (We’ve previously given thanks for the Standard Model Lagrangian, Hubble’s Law, the Spin-Statistics Theorem, conservation of momentum, effective field theory, the error bar, gauge symmetry, Landauer’s Principle, the Fourier Transform, Riemannian Geometry, the speed of light, and the Jarzynski equality.)

    For a change of pace this year, I went to Twitter and asked for suggestions for what to give thanks for in this annual post. There were a number of good suggestions, but two stood out above the rest: @etandel suggested Noether’s Theorem, and @OscarDelDiablo suggested the moons of Jupiter. Noether’s Theorem, according to which symmetries imply conserved quantities, would be a great choice, but in order to actually explain it I should probably first explain the principle of least action. Maybe some other year.

    And to be precise, I’m not going to bother to give thanks for all of Jupiter’s moons. 78 Jovian satellites have been discovered thus far, and most of them are just lucky pieces of space debris that wandered into Jupiter’s gravity well and never escaped. It’s the heavy hitters — the four Galilean satellites — that we’ll be concerned with here. They deserve our thanks, for at least three different reasons!

    Reason One: Displacing Earth from the center of the Solar System

    Galileo discovered the four largest moons of Jupiter — Io, Europa, Ganymede, and Callisto — back in 1610, and wrote about his findings in Sidereus Nuncius (The Starry Messenger). They were the first celestial bodies to be discovered using that new technological advance, the telescope. But more importantly for our present purposes, it was immediately obvious that these new objects were orbiting around Jupiter, not around the Earth.

    All this was happening not long after Copernicus had published his heliocentric model of the Solar System in 1543, offering an alternative to the prevailing Ptolemaic geocentric model. Both models were pretty good at fitting the known observations of planetary motions, and both required an elaborate system of circular orbits and epicycles — the realization that planetary orbits should be thought of as ellipses didn’t come along until Kepler published Astronomia Nova in 1609. As everyone knows, the debate over whether the Earth or the Sun should be thought of as the center of the universe was a heated one, with the Roman Catholic Church prohibiting Copernicus’s book in 1616, and the Inquisition putting Galileo on trial in 1633.

    Strictly speaking, the existence of moons orbiting Jupiter is equally compatible with a heliocentric or geocentric model. After all, there’s nothing wrong with thinking that the Earth is the center of the Solar System, but that other objects can have satellites. However, the discovery brought about an important psychological shift. Sure, you can put the Earth at the center and still allow for satellites around other planets. But a big part of the motivation for putting Earth at the center was that the Earth wasn’t “just another planet.” It was supposed to be the thing around which everything else moved. (Remember that we didn’t have Newtonian mechanics at the time; physics was still largely an Aristotelian story of natures and purposes, not a bunch of objects obeying mindless differential equations.)

    The Galilean moons changed that. If other objects have satellites, then Earth isn’t that special. And if it’s not that special, why have it at the center of the universe? Galileo offered up other arguments against the prevailing picture, from the phases of Venus to mountains on the Moon, and of course once Kepler’s ellipses came along the whole thing made much more mathematical sense than Ptolemy’s epicycles. Thus began one of the great revolutions in our understanding of our place in the cosmos.

    Reason Two: Measuring the speed of light

    Time is what clocks measure. And a clock, when you come right down to it, is something that does the same thing over and over again in a predictable fashion with respect to other clocks. That sounds circular, but it’s a nontrivial fact about our universe that it is filled with clocks. And some of the best natural clocks are the motions of heavenly bodies. As soon as we knew about the moons of Jupiter, scientists realized that they had a new clock to play with: by accurately observing the positions of all four moons, you could work out what time it must be. Galileo himself proposed that such observations could be used by sailors to determine their longitude, a notoriously difficult problem.

    Danish astronomer Ole Rømer noted a puzzle when trying to use eclipses of Io to measure time: despite the fact that the orbit should be an accurate clock, the actual timings seemed to change with the time of year. Being a careful observational scientist, he deduced that the period between eclipses was longer when the Earth was moving away from Jupiter, and shorter when the two planets were drawing closer together. An obvious explanation presented itself: the light wasn’t traveling instantaneously from Jupiter and Io to us here on Earth, but rather took some time. By figuring out exactly how the period between eclipses varied, we could then deduce what the speed of light must be.

    Rømer’s answer was that light traveled at about 220,000 kilometers per second. That’s pretty good! The right answer is 299,792 km/sec, about 36% greater than Rømer’s value. For comparison purposes, when Edwin Hubble first calculated the Hubble constant, he derived a value of about 500 km/sec/Mpc, whereas now we know the right answer is about 70 km/sec/Mpc. Using astronomical observations to determine fundamental parameters of the universe isn’t easy, especially if you’re the first one to do it.
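    Rømer’s inference is simple enough to redo in a few lines. Here is a minimal sketch, assuming idealized circular orbits and the often-quoted cumulative delay of about 22 minutes (round illustrative numbers, not Rømer’s actual data tables):

```python
# Sketch of Roemer's reasoning with idealized circular orbits and round
# numbers. Over roughly half a year the Earth-Jupiter distance grows by
# about the diameter of Earth's orbit, so Io's eclipses arrive late by
# the light-travel time across that extra distance.

AU_KM = 1.496e8                   # astronomical unit in km
extra_path_km = 2 * AU_KM         # diameter of Earth's orbit

delay_roemer_s = 22 * 60          # ~22-minute cumulative delay (historical estimate)
c_roemer = extra_path_km / delay_roemer_s

delay_modern_s = 16.7 * 60        # the modern value of the delay
c_modern = extra_path_km / delay_modern_s

print(f"Roemer-style estimate: {c_roemer:,.0f} km/s")   # roughly 227,000 km/s
print(f"with the modern delay: {c_modern:,.0f} km/s")   # roughly 299,000 km/s
```

    The entire gap between Rømer’s number and the modern one comes from the measured delay; the geometry of the argument is unchanged.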

    Reason Three: Looking for life

    Here in the present day, Jupiter’s moons have not lost their fascination or importance. As we’ve been able to study them in greater detail, we’ve learned a lot about the history and nature of the Solar System more generally. And one of the most exciting prospects is that one or more of these moons might harbor life.

    It used to be common to think about the possibilities for life outside Earth in terms of a “habitable zone,” the region around a star where temperatures allowed planets to have liquid water. (Many scientists think that liquid water is a necessity for life to exist — but maybe we’re just being parochial about that.) In our Solar System, Earth is smack-dab in the middle of the habitable zone, and Mars just sneaks in. Both Venus and Jupiter are outside, on opposite ends.

    But there’s more than one way to have liquid water. It turns out that both Europa and Ganymede, as well as Saturn’s moons Titan and Enceladus, are plausible homes for large liquid oceans. Europa, in particular, is thought to possess a considerable volume of liquid water underneath an icy crust — approximately two or three times as much water as in all the oceans on Earth. The point is that solar radiation isn’t the only way to heat up water and keep it at liquid temperatures. On Europa, it’s likely that heat is generated by the tidal pull from Jupiter, which stretches and distorts the moon’s crust as it orbits.

    Does that mean there could be life there? Maybe! Nobody really knows. Smart money says that we’re more likely to find life on a wet environment like Europa than a dry one like Mars. And we’re going to look — the Europa Clipper mission is scheduled for launch by 2025.

    If you can’t wait for then, go back and watch the movie Europa Report. And while you do, give thanks to Galileo and his discovery of these fascinating celestial bodies.

    by Sean Carroll at November 22, 2018 10:59 PM

    November 04, 2018

    Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

    A welcome mid-term break

    Today marks the end of the mid-term break for many of us in the third level sector in Ireland. While a non-teaching week in the middle of term has been a stalwart of secondary schools for many years, the mid-term break only really came to the fore in the Irish third level sector when our universities, Institutes of Technology (IoTs) and other colleges adopted the modern model of 12-week teaching semesters.

    Also known as ‘reading week’ in some colleges, the break marks a precious respite in the autumn/winter term. A chance to catch one’s breath, a chance to prepare teaching notes for the rest of term and a chance to catch up on research. Indeed, it is the easiest thing in the world to let the latter slide during the teaching term – only to find that deadlines for funding, book chapters and conference abstracts quietly slipped past while one was trying to keep up with teaching and administration duties.


    A quiet walk in Foxrock on the last day of the mid-term break

    Which brings me to a pet peeve. All those years later, teaching loads in the IoT sector remain far too high. Lecturers are typically assigned four teaching modules per semester, a load that may have been reasonable in the early days of teaching to Certificate and Diploma level, but makes little sense in the context of today’s IoT lecturer who may teach several modules at 3rd and 4th year degree level, with typically at least one brand new module each year – all of this whilst simultaneously attempting to keep up the research. It’s a false economy if ever there was one, as many a new staff member, freshly graduated from a top research group, will simply abandon research after a few busy years.

    Of course, one might have expected to hear a great deal about this issue in the government’s plan to ‘upgrade’ IoTs to technological university status. Actually, I have yet to see any public discussion of a prospective change in the teaching contracts of IoT lecturers – a question of money, no doubt. But this is surely another indication that we are talking about a change in name, rather than substance…

    by cormac at November 04, 2018 05:15 PM

    October 27, 2018

    Robert Helling - atdotde

    Interfere and it didn't happen
    I am a bit late for the party, but also wanted to share my two cents on the paper "Quantum theory cannot consistently describe the use of itself" by Frauchiger and Renner. After sitting down and working out the math for myself, I found that the analysis in this paper and the blogpost by Scott (including many of the 160+ comments, some by Renner) share a lot with what I am about to say, but maybe I can still contribute a slight twist.

    Coleman on GHZS

    My background is the talk "Quantum Mechanics In Your Face" by Sidney Coleman, which I consider the best argument why quantum mechanics cannot be described by a local and realistic theory (from which I would conclude it is not realistic). In a nutshell, the argument goes like this: Consider the three-qubit state

    $$\Psi=\frac 1{\sqrt 2}(\uparrow\uparrow\uparrow-\downarrow\downarrow\downarrow)$$

    which is an eigenstate with eigenvalue -1 of $\sigma_x\otimes\sigma_x\otimes\sigma_x$ and an eigenstate with eigenvalue +1 of $\sigma_y\otimes\sigma_y\otimes\sigma_x$ or any permutation. This means that, given that the individual outcome of measuring a $\sigma$-matrix on a qubit is $\pm 1$, when measuring all three spins in the x-direction there will be an odd number of -1 results, but if two spins are measured in the y-direction and one in the x-direction there is an even number of -1's.

    The latter tells us that the outcome of one x-measurement is the product of the two y-measurements on the other two spins. But multiplying this for all three spins we get, in shorthand, $XXX=(YYY)^2=+1$, in contradiction to the -1 eigenvalue when all three spins are measured in the x-direction.

    The conclusion is (unless you assume some non-local conspiracy between the spins) that one has to take seriously the fact that on a given spin I cannot measure both $\sigma_x$ and $\sigma_y$, and thus when actually measuring one of them I must not even assume that the other has some (although unknown) value $\pm 1$, as that assumption leads to the contradiction. Stuff that I cannot measure does not have a value (that is also my understanding of what "not realistic" means).
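    These eigenvalue statements can be checked numerically. The sketch below (my own illustration, not from Coleman's talk) uses the standard form of the GHZ argument, in which the state is a -1 eigenstate of $XXX$ and a +1 eigenstate of the two-$Y$-one-$X$ operators:

```python
import numpy as np

# GHZ state Psi = (|uuu> - |ddd>)/sqrt(2) and Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

up = np.array([1, 0], dtype=complex)
dn = np.array([0, 1], dtype=complex)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

psi = (kron3(up, up, up) - kron3(dn, dn, dn)) / np.sqrt(2)

# measuring all three in x: an odd number of -1 outcomes
assert np.allclose(kron3(X, X, X) @ psi, -psi)

# two y's and one x (any permutation): an even number of -1 outcomes
for op in (kron3(Y, Y, X), kron3(Y, X, Y), kron3(X, Y, Y)):
    assert np.allclose(op @ psi, psi)

# Classically, the three +1 operators would force
# x1*x2*x3 = (y1*y2*y3)**2 = +1, contradicting the -1 eigenvalue of XXX.
print("GHZ eigenvalue checks passed")
```

    No assignment of pre-existing $\pm 1$ values to all six quantities can satisfy all four constraints at once, which is the whole point.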

    Frauchiger and Renner

    Now to the recent Nature paper. In short, they are dealing with two qubits (by which I only mean two-state systems). The first is in a box L' (I will try to use the somewhat unfortunate nomenclature from the paper) and the second is in a box L (L stands for lab). For L, we use the usual z-basis of $\uparrow$ and $\downarrow$ as well as the x-basis $\leftarrow = \frac 1{\sqrt 2}(\downarrow - \uparrow)$  and $\rightarrow  = \frac 1{\sqrt 2}(\downarrow + \uparrow)$. Similarly, for L' we use the basis $h$ and $t$ (heads and tails, as it refers to a coin) as well as $o = \frac 1{\sqrt 2}(h - t)$ and $f  = \frac 1{\sqrt 2}(h+t)$.  The two qubits are prepared in the state

    $$\Phi = \frac{h\otimes\downarrow + \sqrt 2 t\otimes \rightarrow}{\sqrt 3}$$.

    Clearly, a measurement of $t$ in box L' implies that box L has to contain the state $\rightarrow$. Call this observation A.

    Let's re-express $\rightarrow$ in the z-basis:

    $$\Phi =\frac {h\otimes \downarrow + t\otimes \downarrow + t\otimes\uparrow}{\sqrt 3}$$

    From which one concludes that an observer inside box L that measures $\uparrow$ concludes that the qubit in box L' is in state $t$. Call this observation B.

    Similarly, we can express the same state in the x-basis for L':

    $$\Phi = \frac{2 f\otimes \downarrow+ f\otimes \uparrow - o\otimes \uparrow}{\sqrt 6}$$

    From this one can conclude that measuring $o$ for the state of L' implies that L is in the state $\uparrow$. Call this observation C.

    Using now C, B and A one is tempted to conclude that observing L' to be in state $o$ implies that L is in state $\rightarrow$. When we express the state in the $of\leftarrow\rightarrow$-basis, however, we get

    $$\Phi = \frac{f\otimes\leftarrow+ 3f\otimes \rightarrow + o\otimes\leftarrow - o\otimes \rightarrow}{\sqrt{12}}.$$

    so with probability 1/12 we find both $o$  and $\leftarrow$. Again, we hit a contradiction.
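    These bookkeeping steps are easy to check numerically. The following sketch (my own illustration, not code from the paper) builds $\Phi$ with explicit vectors and verifies observations A, B and C together with the stubborn 1/12 probability:

```python
import numpy as np

# z-basis (up, dn) for the lab L; coin basis (h, t) for L'
up, dn = np.eye(2)
h, t = np.eye(2)

rt = (dn + up) / np.sqrt(2)   # "right"
lt = (dn - up) / np.sqrt(2)   # "left"
o = (h - t) / np.sqrt(2)      # the o-state of the coin

# the Frauchiger-Renner initial state
phi = (np.kron(h, dn) + np.sqrt(2) * np.kron(t, rt)) / np.sqrt(3)

def amp(a, b):
    # amplitude <a (x) b | Phi>
    return np.kron(a, b) @ phi

# Observation A: outcome t forces the lab into "right" (no t-and-left term)
assert np.isclose(amp(t, lt), 0)
# Observation B: outcome "up" forces the coin into t (no h-and-up term)
assert np.isclose(amp(h, up), 0)
# Observation C: outcome o forces the lab into "up" (no o-and-down term)
assert np.isclose(amp(o, dn), 0)

# Chaining A, B, C would forbid o together with "left", and yet:
print(round(amp(o, lt) ** 2, 4))   # 1/12, i.e. 0.0833
```

    Each of the three implications holds for the measurement it refers to, but the chain of all three fails, exactly as the state expansion in the $of\leftarrow\rightarrow$-basis shows.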

    One is tempted to use the same way out as above in the three qubit case and say one should not argue about contrafactual measurements that are incompatible with measurements that were actually performed. But Frauchiger and Renner found a set-up which seems to avoid that.

    They have observers F and F' ("friends") inside the boxes that do the measurements in the $ht$ and $\uparrow\downarrow$ basis whereas later observers W and W' measure the state of the boxes including the observer F and F' in the $of$ and $\leftarrow\rightarrow$ basis.  So, at each stage of A,B,C the corresponding measurement has actually taken place and is not contrafactual!

    Interference and it did not happen

    I believe the way out is to realise that at least from a retrospective perspective, this analysis stretches the language and in particular the word "measurement" to the extreme. In order for W' to measure the state of L' in the $of$-basis, he has to interfere the contents including F' coherently such that there is no leftover of information from F''s measurement of $ht$ remaining. Thus, when W''s measurement is performed one should not really say that F''s measurement has in any real sense happened as no possible information is left over. So it is in any practical sense contrafactual.

    To see the alternative, consider a variant of the experiment where a tiny bit of information (maybe the position of one air molecule or the excitation of one of F''s neurons) escapes the interference. Let's call the two possible states of that qubit of information $H$ and $T$ (not necessarily orthogonal) and consider instead the state where that neuron is also entangled with the first qubit:

    $$\tilde \Phi =  \frac{h\otimes\downarrow\otimes H + \sqrt 2 t\otimes \rightarrow\otimes T}{\sqrt 3}$$.

    Then, the result of step C becomes

    $$\tilde\Phi = \frac{f\otimes \downarrow\otimes H+ o\otimes \downarrow\otimes H+f\otimes \downarrow\otimes T-o\otimes\downarrow\otimes T + f\otimes \uparrow\otimes T-o \otimes\uparrow\otimes T}{\sqrt 6}.$$

    We see that now there is a term containing $o\otimes\downarrow\otimes(H-T)$. Thus, as long as the two possible states of the air molecule/neuron are actually different, observation C is no longer valid and the whole contradiction goes away.
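    The role of the leaked record can also be made concrete. In this sketch (again my own illustration), an extra environment qubit is entangled with the coin; when its two states coincide there is no record and observation C holds, while a perfect record destroys it:

```python
import numpy as np

# Same conventions as before: z-basis (up, dn) for the lab L, coin basis
# (h, t) for L'. An extra environment qubit carries state H on the
# h-branch and T on the t-branch.
up, dn = np.eye(2)
h, t = np.eye(2)
rt = (dn + up) / np.sqrt(2)
o = (h - t) / np.sqrt(2)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

def phi_tilde(H, T):
    # the modified state with the environment qubit entangled in
    return (kron3(h, dn, H) + np.sqrt(2) * kron3(t, rt, T)) / np.sqrt(3)

def prob_o_dn(H, T):
    # probability of coin o together with lab "down",
    # summed over an orthonormal environment basis
    psi = phi_tilde(H, T)
    return sum((kron3(o, dn, e) @ psi) ** 2 for e in np.eye(2))

same = np.array([1.0, 0.0])
diff = np.array([0.0, 1.0])

print(round(prob_o_dn(same, same), 4))  # 0.0    -> no record, observation C holds
print(round(prob_o_dn(same, diff), 4))  # 0.3333 -> perfect record, C fails
```

    The $o\otimes\downarrow$ probability interpolates continuously between these extremes as the overlap of $H$ and $T$ varies, which is the quantitative version of "observation C is only valid if the record is fully erased".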

    This makes it clear that the whole argument relies on the fact that when W' is doing his measurement any remnant of the measurement by his friend F' is eliminated, and thus one should view the measurement of F' as if it never happened. Measuring L' in the $of$-basis really erases the measurement of F' in the complementary $ht$-basis.

    by Robert Helling at October 27, 2018 08:39 AM

    October 24, 2018

    Axel Maas - Looking Inside the Standard Model

    Looking for something when no one knows how much is there
    This time, I want to continue the discussion from some months ago. Back then, I was rather general on how we could test our most dramatic idea. This idea is connected to what we regard as elementary particles. So far, the idea is that those you have heard about, the electrons, the Higgs, and so on, are truly the basic building blocks of nature. However, we have found a lot of evidence indicating that what we see in experiment, and call by these names, is actually not the same as the elementary particles themselves. Rather, they are a kind of bound state of the elementary ones, which only at first sight look as if they themselves were the elementary ones. Sounds pretty weird, huh? And if it sounds weird, it means it needs to be tested. We did so with numerical simulations. They all agreed perfectly with the ideas. But, of course, it's physics, and thus we also need an experiment. The only question is which one.

    We had some ideas already a while back. One of them will be ready soon, and I will talk again about it in due time. But this will be rather indirect, and somewhat qualitative. The other, however, required a new experiment, which may need two more decades to build. Thus, both cannot be the answer alone, and we need something more.

    And this "more" is what we are currently closing in on. Because one needs this kind of weird bound-state structure to make the standard model consistent, not only exotic particles are more complicated than usually assumed. Ordinary ones are too. And the most ordinary are protons, the nuclei of hydrogen atoms. More importantly, protons are what is smashed together at the LHC at CERN. So we already have a machine which may be able to test this. But it is involved, as protons are very messy. Already in the conventional picture they are bound states of quarks and gluons. Our results just say there are more components. Thus, we somehow have to disentangle old and new components. So we have to be very careful in what we do.

    Fortunately, there is a trick. All of this revolves around the Higgs. The Higgs has the property that it interacts more strongly with particles the heavier they are. The heaviest particles we know are the top quark, followed by the W and Z bosons. And the CMS experiment (and other experiments) at CERN has a measurement campaign to look at the production of these particles together! That is exactly where we expect something interesting can happen. However, our ideas are not the only ones leading to top quarks and Z bosons. There are many known processes which produce them as well. So we cannot just check whether they are there. Rather, we need to understand whether they are there as expected, e.g. whether they fly away from the interaction in the expected directions and with the expected speeds.

    So what a master student and I do is the following. We use a program called HERWIG, which simulates such events. One of the people who created this program helped us modify it so that we can test our ideas with it. What we now do is rather simple. An input to such simulations is what the structure of the proton looks like. Based on this, the program simulates how the top quarks and Z bosons produced in a collision are distributed. We now just add our conjectured additional contributions to the proton, essentially a little bit of Higgs. We then check how the distributions change. By comparing the changes to what we get in experiment, we can then deduce how large the Higgs contribution in the proton is. Moreover, we can even indirectly deduce its shape, i.e. how the Higgs is distributed inside the proton.

    And this we now study. We iterate modifications of the proton structure with comparisons to experimental results and to predictions without this Higgs contribution. Thereby, we constrain the Higgs contribution in the proton bit by bit. At the current time, we know that the data is only sufficient to provide an upper bound on this amount inside the proton. Our first estimates already show that this bound is actually not that strong, and quite a lot of Higgs could be inside the proton. But on the other hand, this is good, because it means that the data expected from the experiments in the next couple of years will be able to either constrain the contribution further or even detect it, if it is large enough. At any rate, we now know that we have sensitive leverage to understand this new contribution.
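    The statistical logic of extracting such an upper bound can be sketched with a toy chi-square scan. Everything below is invented for illustration (the binning, the template shapes, the admixture parameter eps); it is emphatically not the actual HERWIG-based analysis:

```python
import numpy as np

# Toy version of the fit logic only: a measured distribution (say, some
# kinematic variable of the produced top quarks and Z bosons) is compared
# to a mix of the conventional prediction and a hypothetical "extra
# component" template, weighted by an admixture eps. All numbers invented.

standard = np.array([100.0, 80.0, 50.0, 25.0, 10.0])   # conventional template
extra = np.array([60.0, 70.0, 60.0, 45.0, 30.0])       # invented new-component shape

rng = np.random.default_rng(1)
data = rng.poisson(standard)            # pseudo-data drawn from the standard template
sigma = np.sqrt(np.maximum(data, 1))    # rough Poisson uncertainties

def chi2(eps):
    model = (1 - eps) * standard + eps * extra
    return np.sum(((data - model) / sigma) ** 2)

# Scan the admixture; chi2_min + 2.71 marks a one-sided 95% upper limit
# for a single parameter.
eps_grid = np.linspace(0, 1, 1001)
chi2_vals = np.array([chi2(e) for e in eps_grid])
upper = eps_grid[chi2_vals <= chi2_vals.min() + 2.71].max()
print(f"95% upper bound on the admixture: eps < {upper:.2f}")
```

    More data shrinks the uncertainties, steepens the chi-square parabola, and pushes the bound down, which is exactly why the next years of LHC data matter here.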

    by Axel Maas at October 24, 2018 07:26 AM

    October 17, 2018

    Robert Helling - atdotde

    Bavarian electoral system
    Last Sunday, we had the election for the federal state of Bavaria. Since the electoral system is kind of odd (but not as odd as first past the post), I would like to analyse how some variations of the rules (assuming the actual distribution of votes) would have worked out. So, first, here is how the seats are actually distributed: Each voter gets two ballots: On the first ballot, each party lists one candidate from the local constituency and you can select one. On the second ballot, you can vote for a party list (it's even more complicated because there, too, you can select individual candidates to determine the position on the list, but let's ignore that for today).

    Then in each constituency, the votes on ballot one are counted. The candidate with the most votes (as in first past the post) gets elected to parliament directly (and is called a "direct candidate"). Then, overall, the votes for each party on both ballots (this is where the system differs from the federal elections) are summed up. All votes for parties with less than 5% of the grand total of all votes are discarded (actually including their direct candidates, but this is not of practical concern here). Let's call the rest the "reduced total". According to the fraction of each party in this reduced total, the seats are distributed.

    Of course the first problem is that you can only distribute seats in integer multiples of 1. This is solved using the Hare-Niemeyer-method: You first distribute the integer parts. This clearly leaves fewer seats open than the number of parties. Those you then give to the parties where the rounding error to the integer below was greatest. Check out the wikipedia page explaining how this can lead to a party losing seats when the total number of seats available is increased.
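    For concreteness, here is a minimal Python sketch of the Hare-Niemeyer (largest remainder) allocation (Helling's own analysis used a Perl script). The vote counts are made up, chosen so that the seat-loss paradox mentioned above actually shows up:

```python
from math import floor

def hare_niemeyer(votes, seats):
    """Largest-remainder apportionment with the Hare quota."""
    total = sum(votes.values())
    quotas = {p: v * seats / total for p, v in votes.items()}
    alloc = {p: floor(q) for p, q in quotas.items()}
    leftover = seats - sum(alloc.values())
    # hand the leftover seats to the largest fractional remainders
    for p in sorted(quotas, key=lambda p: quotas[p] - alloc[p], reverse=True)[:leftover]:
        alloc[p] += 1
    return alloc

# Made-up vote counts (NOT the Bavarian results), chosen to exhibit
# the paradox: party D loses a seat when the house grows by one.
votes = {"A": 1500, "B": 1500, "C": 900, "D": 500, "E": 500, "F": 200}

print(hare_niemeyer(votes, 25))  # D gets 3 seats
print(hare_niemeyer(votes, 26))  # D drops to 2 seats
```

    Enlarging the house shifts every quota upward, which can reorder the fractional remainders, and that reordering is all it takes for a party to lose a seat despite unchanged votes.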

    Because this is what happens in the next step: Remember that we already allocated a number of seats to constituency winners in the first round. Those count towards the number of seats that each party is supposed to get in step two according to its fraction of votes. Now it can happen that a party has won more direct candidates than the seats allocated to it in step two. If that happens, more seats are added to the total number of seats and distributed according to the rules of step two until each party has been allocated at least as many seats as it has direct candidates. This happens in particular if one party is stronger than all the others, leading to that party winning almost all direct candidates (in Bavaria this happened to the CSU, which won all direct candidates except five in Munich and one in Würzburg, which were won by the Greens).

    A final complication is that Bavaria is split into seven electoral districts, and the above procedure is carried out for each district separately. So the rounding and seat-adding procedure happens seven times.

    Sunday's election resulted in the following distribution of seats:

    After the whole procedure, there are 205 seats distributed as follows

    • CSU 85 (41.5% of seats)
    • SPD 22 (10.7% of seats)
    • FW 27 (13.2% of seats)
    • GREENS 38 (18.5% of seats)
    • FDP 11 (5.4% of seats)
    • AFD 22 (10.7% of seats)
    You can find all the vote totals on this page.

    Now, for example one can calculate the distribution without districts throwing just everything in a single super-district. Then there are 208 seats distributed as

    • CSU 85 (40.8%)
    • SPD 22 (10.6%)
    • FW 26 (12.5%)
    • GREENS 40 (19.2%)
    • FDP 12 (5.8%)
    • AFD 23 (11.1%)
    You can see that in particular the CSU, the party with the biggest number of votes, profits from doing the rounding seven times rather than just once, and the last three parties would benefit from giving up districts.

    But then there is actually an issue of negative weight of votes: The Greens are particularly strong in Munich, where they managed to win 5 direct seats. If instead those seats had gone to the CSU (as elsewhere), the number of seats for Oberbayern, the district Munich belongs to, would have had to be increased to accommodate those additional direct candidates for the CSU, increasing the weight of Oberbayern compared to the other districts. That would then have been beneficial for the Greens, as they are particularly strong in Oberbayern. So if I give all the direct candidates to the CSU (without modifying the total numbers of votes), I get the following distribution:
    221 seats
    • CSU 91 (41.2%)
    • SPD 24 (10.9%)
    • FW 28 (12.6%)
    • GREENS 42 (19.0%)
    • FDP 12 (5.4%)
    • AFD 24 (10.9%)
    That is, the Greens would have gotten a higher fraction of seats if they had won fewer constituencies. Voting for Green candidates in Munich actually hurt the party as a whole!

    The effect is not so big that it actually changes majorities (CSU and FW are likely to form a coalition) but still, the constitutional court does not like (predictable) negative weight of votes. Let's see if somebody challenges this election and what that would lead to.

    The perl script I used to do this analysis is here.

    The above analysis in the last point is not entirely fair, as not winning a constituency means getting fewer votes, which are then missing from the grand total. Taking this into account makes the effect smaller. In fact, subtracting from the Greens the votes by which they were leading in the constituencies they won leads to an almost zero effect:

    Seats: 220
    • CSU  91 41.4%
    • SPD  24 10.9%
    • FW  28 12.7%
    • GREENS  41 18.6%
    • FDP  12 5.4%
    • AFD  24 10.9%
    Letting the Greens win München Mitte (a newly created constituency that was supposed to act like a bad bank for the CSU, taking up all of central Munich's more left-leaning voters; do I hear somebody say "Gerrymandering"?) yields

    Seats: 217
    • CSU  90 41.5%
    • SPD  23 10.6%
    • FW  28 12.9%
    • GREENS  41 18.9%
    • FDP  12 5.5%
    • AFD  23 10.6%
    Or letting them win all but Moosach and Würzburg-Stadt, where their leads were smallest:

    Seats: 210

    • CSU  87 41.4%
    • SPD  22 10.5%
    • FW  27 12.9%
    • GREENS  40 19.0%
    • FDP  11 5.2%
    • AFD  23 11.0%

    by Robert Helling at October 17, 2018 06:55 PM

    September 27, 2018

    Axel Maas - Looking Inside the Standard Model

    Unexpected connections
    The history of physics is full of stuff developed for one purpose ending up being useful for an entirely different purpose. Quite often they also failed their original purpose miserably, but are paramount for the new one. Newer examples are the first attempts to describe the weak interactions, which ended up describing the strong one. Also, string theory was originally invented for the strong interactions, and failed for this purpose. Now, well, it is the popular science star, and a serious candidate for quantum gravity.

    But failing is not required for finding a second use. And we are just starting to discover a second use for our investigations of grand-unified theories. There, our research used a toy model. We did this because we wanted to understand a mechanism, and because working out the full story would have been much too complicated before we knew whether the mechanism works at all. But it turns out this toy theory may be an interesting theory in its own right.

    And it may be interesting for a very different topic: dark matter. This is a hypothetical type of matter for which we see a lot of indirect evidence in the universe, but we are still mystified about what it is (and whether it is matter at all). Of course, such mysteries draw our interest like a flame draws moths. Hence, our group in Graz has started to push in this direction as well, curious about what is going on. For now, we follow the most probable explanation: that there are additional particles making up dark matter. Then there are two questions: What are they? And do they interact with the rest of the world, and if so, how? Aside from gravity, of course.

    Next week I will go to a workshop where new ideas on dark matter will be explored, to get a better understanding of what is known. And in the course of preparing for this workshop I noticed this connection. I will actually present the idea at the workshop, as it forms a new class of possible explanations of dark matter. Perhaps not the right one, but at the current time one as plausible as many others.

    And here is how it works. Theories of the grand-unified type were long expected to contain a lot of massless particles. This was not bad for their original purpose, as we know quite a few massless particles, like the photon and the gluons. However, our results showed that, with an improved treatment and a shift in paradigm, this is not always true: at least some of these theories do not have massless particles.

    But dark matter needs to be massive to influence stars and galaxies gravitationally. And, except under very special circumstances, there should not be additional massless dark particles, because otherwise the massive ones could decay into the massless ones, the mass would be gone, and the scenario would not work. That is why such theories had been excluded. With our new results, they become feasible. Even better, we have a lot of indirect evidence that dark matter is not just a single massive particle. Rather, it needs to interact with itself, and there could indeed be many different dark matter particles. After all, if dark matter exists, it makes up four times more stuff in the universe than everything we can see. And what we see consists of many particles, so why should dark matter not do so as well? This is also realized in our model.

    And here are the details. The scenario I will describe (you can already download my talk, if you want to look for yourself, though it is somewhat technical) features two different types of stable dark matter particles. Furthermore, they interact. And the great thing about our approach is that we can calculate this quite precisely, giving us a chance to make predictions. Still, we need to do these calculations to make sure everything is consistent with what astrophysics tells us. Moreover, this setup gives us two additional particles, which we can couple to the Higgs through a so-called portal. Again, we can calculate this, and how everything fits together. This allows the model to be tested not only by astronomical observations but also at CERN. That is the basic idea. Now we need to do all the detailed calculations. I am quite excited to try this out :) - so stay tuned to see whether it actually makes sense, or whether the model will have to wait for another opportunity.
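    For readers unfamiliar with the term, the generic form of a scalar Higgs portal (the specific interaction in the model presented in the talk may differ; this is just the textbook version) couples a dark scalar to the Standard Model only through the Higgs:

```latex
% Generic scalar Higgs portal: a dark scalar \phi couples to the
% Standard-Model Higgs doublet H only through a quartic term
\mathcal{L} \supset -\lambda_{\mathrm{p}}\,(\phi^{\dagger}\phi)(H^{\dagger}H)
% After electroweak symmetry breaking, H^{\dagger}H acquires a vacuum
% expectation value, inducing an h\,\phi^{\dagger}\phi vertex through
% which the dark sector can annihilate, scatter off nuclei, or be
% produced at colliders via Higgs exchange.
```

    Couplings of this kind are what make such models testable both astrophysically and at CERN: the same parameter controls the dark matter interaction rate and the collider production rate.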

    by Axel Maas ( at September 27, 2018 11:53 AM