Particle Physics Planet

December 17, 2018

Emily Lakdawalla - The Planetary Society Blog

Planetary Radio celebrates 16 years of PB&J
Planetary Radio host Mat Kaplan has spent sixteen years sharing the “passion, beauty, and joy” of space exploration with the world. We picked our sixteen favorite episodes to share with you.

December 17, 2018 01:07 AM

December 16, 2018

Christian P. Robert - xi'an's og

let the evidence speak [book review]

This book by Alan Jessop, professor at the Durham University Business School, aims at presenting Bayesian ideas and methods towards decision making “without formula because they are not necessary; the ability to add and multiply is all that is needed.” The trick is in using a Bayes grid, in other words a two-by-two table. (A few formulas survived the slaughter, see e.g. the formula for the entropy on p. 91, contained in the chapter on information, which I find decidedly unclear.) When leaving the 2×2 world, things become more complicated and the construction of a prior belief as a probability density gets heroic without the availability of maths formulas. The first part of the book is about Likelihood, albeit not the likelihood function, despite stating the general rule that (p.73)

belief is proportional to base rate x likelihood

which is the book’s version of Bayes’ (base?!) theorem. It then goes on to discuss the less structured nature of the prior (or of prior beliefs) as opposed to the likelihood, by describing Tony O’Hagan’s way of scaling experts’ beliefs in terms of a Beta distribution. And mentioning Jaynes’ maximum entropy prior without a single formula. What is hard to fathom from the text is how one can derive the likelihood outside surveys. (Using the illustration of the 1963 murder of Oswald by Ruby in the likelihood chapter does not particularly help!) A bit of nitpicking at this stage: the sentence

“The ancient Greeks, and before them the Chinese and the Aztecs…”

is historically incorrect since, while the Chinese empire dates back before the Greek dark ages, the Aztecs only ruled Mexico from the 14th century (AD) until the Spanish invasion. While most of the book sticks with unidimensional parameters, it also discusses more complex structures, for which it relies on Monte Carlo, although the description is rather cryptic (use your spreadsheet!, p.133). The book at this stage turns to a more story-telling mode, by considering for instance the Federalist Papers analysis by Mosteller and Wallace. The reader can only follow the process of assessing a document’s authorship for a single word, as multidimensional cases (for either data or parameters) are out of reach. The same comment applies to the ecology, archeology, and psychology chapters that follow. The intermediary chapter on the “grossly misleading” [the Court’s wording] statistical evidence in the Sally Clark prosecution is more accessible in that (again) it relies on a single number. Returning to the ban of Bayes’ rule in British courts:

In the light of the strong criticism by this court in the 1990s of using Bayes theorem before the jury in cases where there was no reliable statistical evidence, the practice of using a Bayesian approach and likelihood ratios to formulate opinions placed before a jury without that process being disclosed and debated in court is contrary to principles of open justice.

the discussion found in the book is quite moderate and inclusive, in that a Bayesian analysis helps in gathering evidence about a case, but may be misunderstood or misused at the [non-Bayesian] decision level.
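The Bayes grid mechanics described earlier can be sketched in a few lines. Here is a minimal Python illustration of the belief ∝ base rate × likelihood rule for a hypothetical diagnostic test; all the numbers are invented for illustration and do not come from the book.

```python
# A minimal Bayes grid (a 2x2 table): belief is proportional to
# base rate x likelihood, then normalised to sum to one.
# All figures below are made up (a hypothetical diagnostic test).
base_rate = {"disease": 0.01, "healthy": 0.99}    # prior beliefs
likelihood = {"disease": 0.95, "healthy": 0.05}   # P(positive test | state)

unnormalised = {s: base_rate[s] * likelihood[s] for s in base_rate}
total = sum(unnormalised.values())
belief = {s: u / total for s, u in unnormalised.items()}

# Even a positive result from a 95%-accurate test leaves the belief
# in "disease" at only about 16%, because the base rate is so low.
print(belief)
```

This is exactly the kind of calculation the book carries out on its grids: one multiplication per cell, followed by a renormalisation.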

In conclusion, Let the Evidence Speak is an interesting introduction to Bayesian thinking, through a simplifying device, the Bayes grid, which seems to come from management, with a large number of examples, if not necessarily all realistic, and some side-stories. I doubt this exposure can produce expert practitioners, but it makes for a worthwhile awakening for someone “likely to have read this book because [one] had heard of Bayes but were uncertain what it was” (p.222). With commendable caution and warnings along the way.

by xi'an at December 16, 2018 11:18 PM

Peter Coles - In the Dark


Everyman

Having finished the Everyman crossword in this morning’s Observer, I was reading a review of some books about Pieter Bruegel in the Times Literary Supplement where I found mention of a piece by that artist also called Everyman.

Here is the work, an ink drawing on paper measuring 20.9 cm by 29.2 cm, made in Antwerp in 1558 and currently in the British Museum.

According to the catalogue, the work is called Elck in Dutch, which means ‘each’ or ‘everyone’, but is usually known in English as ‘Everyman’.

The scenes in the drawing illustrate proverbs or sayings. The central proverb concerns Elck who vainly seeks himself in the objects of this world as he stands over a broken globe. With a lantern he searches through a pile of barrels and bales, a game board, cards and objects which signify the distractions of life.

To the right, two more Elck figures play tug of war with a rope, illustrating the saying, ‘each tugs for the longest end’.

In the background on a wall hangs a picture which continues the moral theme. It shows a fool sitting among a pile of broken household objects gazing at himself in a mirror. He is Nemo or Nobody, as the inscription below him informs us: ‘Nobody knows himself.’

To me it seems that Elck is searching (no doubt in vain) for something worth keeping in the junkyard of human existence. Perhaps he should have a go at a crossword to cheer himself up?

by telescoper at December 16, 2018 03:07 PM

December 15, 2018

Christian P. Robert - xi'an's og

about French inequalities [graphs]

A first graph in Le Monde about the impact of the recent tax changes on French households as a percentage of net income with negative values at both ends, except for a small spike at about 10% and another one for the upper 1%, presumably linked with the end of the fortune tax (ISF).

A second one showing incompressible expenses by income category, with the poorest households facing a large constraint on lodging, missing the fraction due to taxes. Unless the percentage is computed after tax.

A last and amazing one detailing the median monthly income per socio-professional category, not because of the obvious typo on the blue collar median 1994!, but more fundamentally because retirees have a median income in the upper part of the range. (This may be true in most developed countries, I was just unaware of this imbalance.)

by xi'an at December 15, 2018 11:18 PM

Peter Coles - In the Dark

A Sign of Ireland

Following my post earlier this week about Irish orthography and related matters, I thought I’d share a couple of random thoughts inspired by the above road sign.

First, notice the font used for the Irish names, which is a variant of the UK Transport typeface, but is notable for the absence of any tittles (a ‘tittle’ being one of those little dots above the i and j in standard type).

The other thing, which I only found out a few days ago, is that ‘Leixlip’ is a name of Norse origin – it means ‘Salmon’s Leap’. Apparently there was a Viking settlement there, positioned because of the abundance of salmon in the River Liffey, which flows through on its way to Dublin. ‘Leix’ is similar to, e.g., the Danish ‘laks’, meaning salmon, and ‘leap’ is similar to many words in modern European languages derived from proto-Germanic sources.

There is a Salmon Leap Inn in Leixlip. I have heard very good things about the food but have not yet dined there. Nowadays, however, Leixlip is best known for the presence of a huge Intel ‘campus’, which is home to a large semiconductor fabrication facility, among other things.

by telescoper at December 15, 2018 03:29 PM

December 14, 2018

Christian P. Robert - xi'an's og

military records of two great-grandfathers


Here are the military records [recovered by my brother] of two of my great-grandfathers, who both came from Western Normandy (Manche) and both died from diseases contracted in the Army during the First World War. My grandfather’s father, Médéric Eude, was raising horses before the war and hence ended up looking after horses in the Army, from which he contracted the disease that eventually killed him (and granted one of my great-aunts the status of “pupille de la Nation”). Very little is known of my other great-grandfather. A sad aspect shared by both records is that both men were retired from service for unfitness before being redrafted when the war broke out in August 1914…

by xi'an at December 14, 2018 11:18 PM

John Baez - Azimuth

Applied Category Theory Seminar

We’re going to have a seminar on applied category theory here at U. C. Riverside! My students have been thinking hard about category theory for a few years, but they’ve decided it’s time to get deeper into applications. Christian Williams, in particular, seems to have caught my zeal for trying to develop new math to help save the planet.

We’ll try to videotape the talks to make it easier for you to follow along. I’ll also start discussions here and/or on the Azimuth Forum. It’ll work best if you read the papers we’re talking about and then join these discussions. Ask questions, and answer any questions you can!

Here’s how the schedule of talks is shaping up so far. I’ll add more information as it becomes available, either here or on a webpage devoted to the task.

January 8, 2019: John Baez – Mathematics in the 21st century

I’ll give an updated synthesized version of these earlier talks of mine, so check out these slides and the links:

The mathematics of planet Earth.

What is climate change?

Props in network theory.

January 15, 2019: Jonathan Lorand – Problems in symplectic linear algebra

Lorand is visiting U. C. Riverside to work with me on applications of symplectic geometry to chemistry. Here is the abstract of his talk:

In this talk we will look at various examples of classification problems in symplectic linear algebra: conjugacy classes in the symplectic group and its Lie algebra, linear lagrangian relations up to conjugation, tuples of (co)isotropic subspaces. I will explain how many such problems can be encoded using the theory of symplectic poset representations, and will discuss some general results of this theory. Finally, I will recast this discussion from a broader category-theoretic perspective.

January 22, 2019: Christina Vasilakopoulou – Wiring diagrams

Vasilakopoulou, a visiting professor here, previously worked with David Spivak. So, we really want to figure out how two frameworks for dealing with networks relate: Brendan Fong’s ‘decorated cospans’, and Spivak’s ‘monoidal category of wiring diagrams’. Since Fong is now working with Spivak they’ve probably figured it out already! But anyway, Vasilakopoulou will give a talk on systems as algebras for the wiring diagram monoidal category. It will be based on this paper:

• Patrick Schultz, David I. Spivak and Christina Vasilakopoulou, Dynamical systems and sheaves.

but she will focus more on the algebraic description (and conditions for deterministic/total systems) rather than the sheaf theoretic aspect of the input types. This work builds on earlier papers such as these:

• David I. Spivak, The operad of wiring diagrams: formalizing a graphical language for databases, recursion, and plug-and-play circuits.

• Dmitry Vagner, David I. Spivak and Eugene Lerman, Algebras of open dynamical systems on the operad of wiring diagrams.

January 29, 2019: Daniel Cicala – Dynamical systems on networks

Cicala will discuss a topic from this paper:

• Mason A. Porter and James P. Gleeson, Dynamical systems on networks: a tutorial.

His leading choice is a model for social contagion (e.g. opinions) which is discussed in more detail here:

• Duncan J. Watts, A simple model of global cascades on random networks.

by John Baez at December 14, 2018 07:05 PM

Peter Coles - In the Dark

Messiah in Dublin

On 10th December last year I posted a review of a performance of Handel’s Messiah in Cardiff. At the end of that item I wondered where I would be listening to Messiah in 2018. Well, the answer to that question turned out to be at the National Concert Hall in Dublin, the city where Messiah received its premiere way back in 1742.

Messiah was initially performed at Easter (on 13th April 1742) and it’s by no means clear (to me) why it ended up almost universally regarded as a Christmas work. The work actually spans the entire biblical story of the Messiah, from Old Testament prophecy and the Nativity (Part I), through the Passion of Christ (Part II), culminating in the Hallelujah Chorus, to the Resurrection of the Dead (Part III). The Nativity only features (briefly) in Part I, which is why it’s a little curious that Messiah is so strongly associated with Christmas.

The printed programme for last night (cover shown above) included the first advertisement for the first performance of Messiah:

For the relief of the prisoners in the several Gaols and for the Support of Mercer’s Hospital in Stephen’s Street and of the Charitable Infirmary on the Inn’s Quay, on Monday 12th April will be performed at the Musick Hall in Fishamble Street, Mr Handel’s new Grand Oratorio MESSIAH…

The venue was designed to hold 600 people (less than half the capacity of the National Concert Hall) but 700 people crammed in. Ladies had been asked not to wear hoops in their dresses and gentlemen were asked not to bring their swords to help squeeze in the extra hundred. The concert raised the huge sum of £400 and Messiah was an immediate hit in Ireland.

It wasn’t the same story when Messiah was first performed in England the following year. It failed again in England when performed in 1745 but after some rewriting Handel put it on again in 1749 and it proved an enormous success. It has remained popular ever since. But it is still exceptionally popular in Dublin. There are umpteen performances of Messiah at this time of year, and the one I attended last night was one of three in the same week at the same venue, all more-or-less sold out. The Dubliners I chatted to in the bar before the concert were extremely proud that their city is so strongly associated with this remarkable work.

I don’t mind admitting that Messiah is a piece that’s redolent with nostalgia for me – some of the texts remind me a lot of Sunday School and singing in a church choir when I was little and then, a bit later, listening to the whole thing at Christmas time at the City Hall in Newcastle. I loved it then, and still do now, well over 40 years later. I know it’s possible to take nostalgia too far – nobody can afford to spend too much time living in the past – but I think it’s good to stay in contact with your memories and the things that shaped you when you were young.

Last night’s performance was by Our Lady’s Choral Society with the RTÉ Concert Orchestra. Soloists were Sarah Brady (soprano), Patricia Bardon (mezzo), Andrew Gavin (tenor) and Padraic Rowan (bass), the latter really coming into his own in the second half with a wonderfully woody sonority to his voice, especially in No. 40:

Why do the nations so furiously rage together, and why do the people imagine a vain thing?

Topical, or what?

Our Lady’s Choral Society is an amateur outfit and, while it might not sound as slick and polished as some professional choirs, there was an honesty about its performance last night that I found very engaging. It actually sounded like people singing, which professional choirs sometimes do not. The orchestra played very well too, and weren’t forced to use the dreaded ‘period instruments’. There was a harpsichord, but fortunately it was barely audible. Anyway, I enjoyed the concert very much and so did the packed house. I couldn’t stay for all the applause as I had to dash off to get the last train back to Maynooth, but that doesn’t mean I didn’t appreciate the music.

Incidentally, among the bass section of Our Lady’s Choral Society last night was my colleague Brian Dolan. On Monday next I’m going to another Concert at the National Concert Hall, Bach’s Christmas Oratorio. Among the choir for that performance is another of my colleagues, Jonivar Skullerud. Obviously, choral singing is the in-thing for theoretical physicists in this part of the world!

by telescoper at December 14, 2018 04:31 PM

Christian P. Robert - xi'an's og

running plot [and simulated annealing]

Last weekend, I found out a way to run updated plots within a loop in R, as plots produced by calling plot() inside the loop were never updated in real time. The suggestion of including a Sys.sleep(0.25) call worked perfectly on a simulated annealing example for determining the most dispersed points in a unit disc.
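The post does not include the code itself, so here is a hedged sketch of the same idea in Python rather than R (the function name anneal_disc and all its parameters are my own invention): simulated annealing that spreads points over the unit disc by maximising the smallest pairwise distance, with a comment marking where the R plot() plus Sys.sleep(0.25) redraw would go.

```python
import math
import random

def min_pairwise_dist(pts):
    """Smallest distance between any two of the points."""
    return min(math.dist(p, q)
               for i, p in enumerate(pts) for q in pts[i + 1:])

def anneal_disc(n=8, iters=2000, seed=0):
    """Spread n points over the unit disc by simulated annealing,
    maximising the smallest pairwise distance."""
    rng = random.Random(seed)

    def rand_point():
        # rejection sampling: uniform point in the unit disc
        while True:
            x, y = rng.uniform(-1, 1), rng.uniform(-1, 1)
            if x * x + y * y <= 1:
                return (x, y)

    pts = [rand_point() for _ in range(n)]
    cur = min_pairwise_dist(pts)
    temp = 1.0
    for _ in range(iters):
        i = rng.randrange(n)          # propose moving one point
        old = pts[i]
        pts[i] = rand_point()
        new = min_pairwise_dist(pts)
        # always accept improvements; accept worsenings with a
        # temperature-dependent probability, cooling geometrically
        if new >= cur or rng.random() < math.exp((new - cur) / temp):
            cur = new
        else:
            pts[i] = old
        temp *= 0.995
        # In R, this is where plot(...) followed by Sys.sleep(0.25)
        # forces the graphics device to redraw on each iteration.
    return pts, cur

points, spread = anneal_disc()
```

The Sys.sleep() call matters in R because the graphics device buffers drawing commands inside a tight loop; pausing briefly gives it a chance to flush and repaint.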

by xi'an at December 14, 2018 06:26 AM

December 13, 2018

Emily Lakdawalla - The Planetary Society Blog

Mars Reconnaissance Orbiter Spots InSight Hardware on Mars
Mars Reconnaissance Orbiter has finally spotted the InSight lander, its parachute, and its heat shield resting on the Martian surface. The images confirm the location of InSight's landing site, a little to the north and west of the center of the landing ellipse. The lander is located at 4.499897° N, 135.616000° E.

December 13, 2018 06:37 PM

Axel Maas - Looking Inside the Standard Model

The size of the W
As discussed in an earlier entry, we set out to measure the size of a particle: the W boson. We have now finished this and published a paper about our results. I would like to discuss these results in a bit more detail.

This project was motivated because we think that the W (and its sibling, the Z boson) is actually more complicated than usually assumed. We think that it may have a self-similar structure. The bits and pieces of this are quite technical. But the outline is the following: what we see and measure as a W at, say, the LHC or earlier experiments is actually not a point-like particle, although that is currently the most common view. But science has always been about changing the common ideas and replacing them with something new and better. So, our idea is that the W has a substructure. This substructure is a bit weird, because it is not made from additional elementary particles. It rather looks like a bubbling mess of quantum effects. Thus, we do not expect that we can isolate anything which resembles a physical particle within the W. And if we try to isolate something, we should not expect it to behave as a particle.

Thus, this scenario gives two predictions. One: Substructure needs to have space somewhere. Thus, the W should have a size. Two: Anything isolated from it should not behave like a particle. To test both ideas in the same way, we decided to look at the same quantity: The radius. Hence, we simulated a part of the standard model. Then we measured the size of the W in this simulation. Also, we tried to isolate the most particle-like object from the substructure, and also measured its size. Both of these measurements are very expensive in terms of computing time. Thus, our results are rather exploratory. Hence, we cannot yet regard what we found as final. But at least it gives us some idea of what is going on.

The first thing is the size of the W. Indeed, we find that it has a size, and one which is not too small either. The number itself, however, is far less accurate. The reason for this is twofold. On the one hand, we have only a part of the standard model in our simulations. On the other hand, we see artifacts. They come from the fact that our simulations can only describe some finite part of the world. The larger this part is, the more expensive the calculation. With what we had available, the part seems to be still so small that the W is big enough to 'bounce off the walls' fairly often. Thus, our results still show a dependence on the size of this part of the world. Though we try to compensate for this, it still leaves a sizable uncertainty in the final result. Nonetheless, the qualitative feature that the W has a significant size remains.

The other thing is the would-be constituents. We can indeed identify some kind of lumps of quantum fluctuations inside. But they do not behave like a particle, not even remotely. Especially, when trying to measure their size, we find that the square of their radius is negative! Even though the final value is still uncertain, this is nothing a real particle should have, because trying to take the square root of such a negative quantity to get the actual number yields an imaginary number. That is an abstract quantity which, while not identifiable with anything in everyday life, has a well-defined mathematical meaning. In the present case, it means this lump is nonphysical, as if you were trying to upend a hole. Thus, this mess is really not a particle at all, in any conventional sense of the word. Still, what we could get from this is that such lumps - even though they are not really lumps - 'live' only in areas of our W much smaller than the W's size. So, at least they are contained. And they let the W be the well-behaved particle it is.
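As a one-line numeric illustration of that last point (the value −0.25 is invented here purely for illustration, not the paper's measurement), a negative squared radius has only an imaginary square root:

```python
import cmath

r_squared = -0.25              # illustrative value only, not a measured result
radius = cmath.sqrt(r_squared)
print(radius)                  # -> 0.5j: purely imaginary, so no physical size
```

An imaginary radius has no interpretation as a spatial extent, which is exactly why the lumps cannot be regarded as physical particles.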

So, the bottom line is: our simulations agreed with our ideas. That is good. But it is not enough. After all, who can tell whether what we simulate is actually the thing happening in nature? So, we will need an experimental test of this result. This is surprisingly complicated. After all, you cannot really get a measuring stick to find the size of a particle. Rather, what you do is throw other particles at it, and then see how much they are deflected. At least in principle.

Can this be done for the W? Yes, it can be done, but it is very indirect. Essentially, it could work as follows: take the LHC, at which two protons are smashed into each other. In this smashing, it is possible that a Z boson is produced, which scatters off a W. So, you 'just' need to look at the W before and after. In practice, this is more complicated. Since we cannot send the W in there to hit the Z, we use the fact that mathematically this process is related to another one: if we get one, we get the other for free. This other process is that the produced Z, together with a lot of kinetic energy, decays into two W particles. These are then detected, and their directions measured.

As nice as this sounds, it is still horrendously complicated. The problem is that the Ws themselves decay into leptons and neutrinos before they reach the actual detector. And because the neutrinos essentially always escape undetected, one can only indirectly infer what has been going on. In particular, the directions of the Ws cannot easily be reconstructed. Still, in principle it should be possible, and we discuss this in our paper. So we can actually measure this size, in principle. It will now be up to the experimental experts whether it can - and will - be done in practice.

by Axel Maas at December 13, 2018 04:15 PM

Emily Lakdawalla - The Planetary Society Blog

Go Out and See the Geminid Meteor Shower
The Geminid meteor shower, which peaks tonight, is usually the best meteor shower of the year.

December 13, 2018 03:37 PM

Peter Coles - In the Dark

Voting for Beard of the Year 2018

Having made it onto the shortlist, I seem to be ahead in the polling for this year’s Beard of the Year (for the time being at least – it’s very early days).

I know some people would consider it inappropriate for me to use the medium of this blog to tout for votes. All I can say to such people is VOTE FOR ME!

Kmflett's Blog

Beard Liberation Front

Media Release 11th December

Contact Keith Flett 07803 16726

Voting for the Beard of the Year 2018

The Beard Liberation Front, the informal network of beard wearers, has said that voting is open for the Beard of the Year 2018. The vote closes at midnight on 24th December, with the winner declared on 28th December.

Names can be added by write-ins at the bottom of the poll. 1% of the overall vote is needed for someone to join the poll.

The BLF say that the shortlist comprises those whose beard has had a positive impact in the public eye during the year rather than the style or the length of the beard or the views of the beard wearer.

BLF Organiser Keith Flett said: “Competition for Beard of the Year is bristling.”

For the first time this year the result will be determined by a…

View original post 128 more words

by telescoper at December 13, 2018 10:23 AM

December 12, 2018

Emily Lakdawalla - The Planetary Society Blog

Chang’e-4 Successfully Enters Lunar Orbit
Following a 4.6-day cruise, on 12 December at 8:45 Beijing time (16:45 UTC), the spacecraft arrived in lunar orbit, preparing for a landing in early January.

December 12, 2018 04:23 PM

Emily Lakdawalla - The Planetary Society Blog

Mastcam-Z Flight Hardware!
After a more-than-four-year adventure, the flight Mars 2020 rover Mastcam-Z cameras have been fully assembled!

December 12, 2018 12:00 PM

Peter Coles - In the Dark

On Probability and Cosmology

I just noticed a potentially interesting paper by Martin Sahlén on the arXiv. I haven’t actually read it yet, so don’t know if I agree with it, but thought I’d point it out here for those interested in cosmology and things Bayesian.

Here is the abstract:

Modern scientific cosmology pushes the boundaries of knowledge and the knowable. This is prompting questions on the nature of scientific knowledge. A central issue is what defines a ‘good’ model. When addressing global properties of the Universe or its initial state this becomes a particularly pressing issue. How to assess the probability of the Universe as a whole is empirically ambiguous, since we can examine only part of a single realisation of the system under investigation: at some point, data will run out. We review the basics of applying Bayesian statistical explanation to the Universe as a whole. We argue that a conventional Bayesian approach to model inference generally fails in such circumstances, and cannot resolve, e.g., the so-called ‘measure problem’ in inflationary cosmology. Implicit and non-empirical valuations inevitably enter model assessment in these cases. This undermines the possibility to perform Bayesian model comparison. One must therefore either stay silent, or pursue a more general form of systematic and rational model assessment. We outline a generalised axiological Bayesian model inference framework, based on mathematical lattices. This extends inference based on empirical data (evidence) to additionally consider the properties of model structure (elegance) and model possibility space (beneficence). We propose this as a natural and theoretically well-motivated framework for introducing an explicit, rational approach to theoretical model prejudice and inference beyond data.

You can download a PDF of the paper here.

As usual, comments are welcome below. I’ll add my thoughts later, after I’ve had the chance to read the article!


by telescoper at December 12, 2018 11:52 AM

CERN Bulletin


Dear Colleagues,

We would like to share the following information about the creation of new TPG (Geneva public transport) lines that will replace the existing Y line in December.

We would also like to flag a scheme that could be of interest to residents in France.

The scheme is called “Parcours Court”. It allows travelling on specific lines, or sections of lines, at discount rates.

For CERN personnel in particular, this scheme can be used with:

  • Y line (future 68 line) between France and GVA airport
  • O line (future 64 line) between France and Meyrin Gravière.

Note that lines O and Y no longer exist as of Sunday, December 9.

The cost savings are summarised below:

For more information:

December 12, 2018 10:12 AM

December 11, 2018

CERN Bulletin

Offer for our members

Our partner FNAC is offering all our members a 10% discount on all books and a 15% discount on all games and toys.

This offer is valid between 1 and 31 December 2018 upon presentation of your Staff Association membership card.


December 11, 2018 11:12 AM

CERN Bulletin

CERN Photowalk 2018

From 10 to 21 December 2018

CERN Meyrin, Main Building


On 1 June, twenty photographers from around the world came to CERN for the CERN Photowalk 2018 to go behind the scenes of the Laboratory, camera in hand. The photographers discovered four emblematic CERN sites: the SMI2 hall, where the future High-Luminosity LHC magnets are being prepared; one of the magnet fabrication and testing workshops; the CERN Control Centre; and the antimatter hall, also known as the “Antimatter Factory”.

This competition is part of a worldwide photography contest, the “Global Physics Photowalk 2018”, organised by Interactions, the communications network of the major physics laboratories.

The CERN jury selected the 20 best photos as well as 2 of the 3 winners, chosen not only for their artistic and aesthetic value, but also for the aspects of the Laboratory they highlight. The third winning photo was chosen by a public vote.

You can see this selection in the Main Building from 10 to 21 December.

For more information and access requests:  |  +41 22 767 28 19



December 11, 2018 11:12 AM

CERN Bulletin


A cooperative open to international civil servants. We invite you to discover the advantages and discounts negotiated with our suppliers, either on our website or at our information office located at CERN, on the ground floor of bldg. 504, open Monday through Friday from 12.30 to 15.30.

December 11, 2018 10:12 AM

CERN Bulletin

What has the Staff Association done for me?

The Staff Association’s mission is fourfold:

Serve and defend the economic, social, professional and moral interests of all CERN staff.

Safeguard the rights and the interests of the families of staff and beneficiaries of the Pension Fund.

Promote good relations between staff, other employees working on site, and the communities in which staff members and their families live.

Work with the Council and the Director-General to propose and implement ways to further the mission of the Organization.

Created in 1955, the Staff Association entered the Staff Rules as the intermediary for relations between staff and CERN Management, with the creation of a Standing Advisory Committee in 1962.

Twenty years later, this committee became the ‘Standing Concertation Committee’ (SCC). 1994 saw the creation of the Tripartite Employment Conditions Forum (TREF).

All well and good, but what has the Staff Association ever done for me?

Cast your mind back to 2012: the Long-Term Saved Leave Scheme (LTSLS) came into existence and CERN was awarded the third HR Innovation Award for the scheme’s novelty and innovation.

However, the seeds for the LTSLS were sown many years before, in 1996, thanks to proposals from the Staff Association.

On several occasions, the Staff Council discussed flexible working hours, a wide access to part-time work, and decreasing hours worked as a pre-retirement measure.

This proposal was led by former Staff Association President Michel Vitasse, who defined flexible working time arrangements as a voluntary adjustment of working time, even with a reduction in salary, and advocated a collective organization of time worked to allow flexibility in individuals’ working time arrangements.

In September 1986, in collaboration with the Applied Psychology group of Neuchâtel University, a survey was sent to all members of staff to gather their views on various aspects of working time arrangements.

Although the 1282 replies received were analysed and the results made available to the SCC and the DG at the time, there was no follow-up.

The time to introduce such flexible working hours was not ripe; nevertheless, the Staff Association did not forget this topic and consulted other international organizations and national staff union representatives to learn about their initiatives on this subject.

In 1996, crisis struck the Organization: the CERN Council decided to reduce the Organization’s budget by 7.5%, with a cut in the staff budget of 2%.

Out of this crisis, the Staff Association proposed two programs to allow additional recruitment and facilitate the transfer of knowledge between generations: the RSL scheme (precursor of the SLS) and the PRP scheme, a progressive retirement program.

This innovative idea, proposed by the Staff Association, was praised in several articles published in national newspapers, for instance “Staff take holes so CERN can hire young” (The Times Higher Education Supplement, 16 January 1998). A great social advance!

In December 2007, Management considered that the programs were too costly, so the long-term component was withdrawn and replaced with the Short-Term Saved Leave Scheme (STSLS) in 2008. In January 2012, the STSLS was extended with a long-term component, the LTSLS.

The Staff Association played an instrumental role in co-sponsoring this proposal with a much-wanted long-term component.

Now the LTSLS can be used flexibly, not only at the end of the contract but also for specific needs throughout the career (e.g. care for a close relative suffering from a serious illness or for professional development).

The Staff Association exists because different categories of CERN staff come together to defend their rights and material and moral interests, as a group of persons or as individuals.

The Staff Association is also a source of proposals, and it gains respect in its role as a negotiating partner the more staff members it has behind it.

Why not join us?

December 11, 2018 10:12 AM

Dmitry Podolsky - NEQNET: Non-equilibrium Phenomena

Top Ten Open Problems in Physics

What is the ultimate purpose of my work as a theoretical physicist and, if you want, of my existence itself? Is it serving the community of other physicists by organizing and participating in conferences? Nope. Then maybe teaching future physicists at the University, encouraging young people to enter the exciting field of physics? Not quite. Writing good papers? Ei. Maybe blogging? Sorry, but nein. I think… the ultimate purpose of my work is solving unsolved mysteries in physics. I am afraid this and only this makes my work enjoyable for me, makes it fun. For the sake of future reference, let me list here the most important (from my point of view), hardest and most interesting unsolved problems in physics.

1. The nature of time


This problem is so fundamental that I ultimately have no idea how to even approach it. I mean, it’s like answering the question of what we are born for and why we have to die. Why does the 4-dimensional spacetime we live in have three spacelike coordinates and one timelike? Well, it is rather clear why it does not have 2 timelike coordinates (otherwise, time machines would be possible). But why is there even 1 timelike coordinate, and not zero of them?

What is the nature of the arrow of time? What happened to the Universe 14 billion years ago – what was at the beginning of time, and what was before the beginning? Was there anything at all? And is there any meaning in even asking these kinds of questions? How do we reconcile classical chaos and the loss of information with quantum mechanics and the S-matrix of quantum field theory – both unitary and therefore forbidding any loss of information? Does CP violation (and hence T violation, since CPT is conserved) have anything to do with the emergence of the arrow of time? Does the existence of horizons, black holes and any non-trivial causal structure of spacetime play any role?

Asking all these questions, I feel ridiculously stupid – that’s why I put the problem of time first in my list.

2. Cosmological constant. Inflation


It seems that the Universe is currently expanding with nearly constant acceleration, driven by the cosmological constant (the very same one that Einstein once called “the biggest blunder” of his life) or by something that approximately behaves like one. A very surprising fact is that although the associated energy density makes up about 70% of the total energy density of the Universe, it is still vastly smaller – by roughly 120 orders of magnitude – than the natural energy density associated with gravitational effects, the Planckian energy density.

The present accelerated expansion of the Universe seems even more interesting given that the very early Universe (nearly 13.8 billion years younger than it is now) also passed through a stage of accelerated expansion – although the associated energy density was much higher at that time, it was still not as high as the Planckian energy density.

To be honest, after years of research and observations (cosmology did become a precision science), we still do not have the slightest idea what is behind the present accelerated expansion of the Universe, or what the nature is of this huge (in fact, the largest) hierarchy of scales relevant for gravitational physics.
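The mismatch described above can be checked with rough numbers. A minimal sketch, using standard values for the constants (none of which appear in the post itself):

```python
import math

# Order-of-magnitude check of the cosmological-constant hierarchy.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
hbar = 1.055e-34     # reduced Planck constant, J s
H0 = 2.2e-18         # Hubble constant, s^-1 (~70 km/s/Mpc)

# Critical density of the Universe, rho_c = 3 H0^2 / (8 pi G);
# dark energy is ~70% of it.
rho_crit = 3 * H0**2 / (8 * math.pi * G)
rho_lambda = 0.7 * rho_crit

# Planck density, rho_P = c^5 / (hbar G^2).
rho_planck = c**5 / (hbar * G**2)

print(f"dark energy density ~ {rho_lambda:.1e} kg/m^3")
print(f"Planck density      ~ {rho_planck:.1e} kg/m^3")
print(f"ratio ~ 10^{math.log10(rho_planck / rho_lambda):.0f}")
```

The ratio comes out around 10^123, which is the hierarchy the text alludes to.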

3. Turbulence


I have written about it many times, and it is worth writing about once more. We don’t know how to treat developed turbulence analytically. We mumble something about 2-dimensional scalar turbulence and weak turbulence, but ultimately we don’t even know whether a general solution of the Navier-Stokes equations exists, whether it is smooth, or whether finite-time singularities develop in the fluid velocity flow (note that the Clay Mathematics Institute currently offers 1 million USD for the solution of this problem).

Ultimately, I think, the very heart of the problem is this: out of the viscosity ν, the velocity of the flow v and a linear scale of the flow L, one can construct a single dimensionless combination called the Reynolds number:

Re = vL/ν. (1)

Essentially, it determines whether the flow of a fluid is turbulent or laminar (smooth) – the flow becomes turbulent when the Reynolds number gets larger than about 100.

Now, usually in physics all dimensionless combinations are of order 1. Why does the combination (1) instead have to be of order 100 for the physics of fluids to become interesting? If we knew the answer to this question, we would advance quite a bit in understanding developed turbulence.

Why is the latter so critically important for our health and wealth? I am Russian, so let me talk a bit about oil pipes. What would be the optimal velocity of the oil flow through a pipe to maximize the income of GAZPROM? If I pump the oil slowly, the flow through the pipe is slow, as is the flow of money into my pocket. If I pump the oil very fast, then turbulence and turbulent viscosity become important at some point, and I start to lose lots and lots of energy to pumping (viscosity is essentially the equivalent of friction).

As you see, the price of the question is much higher than the 1 million USD offered by the Clay Mathematics Institute.

4. Confinement. Quark-gluon plasma.


From hard inelastic scattering experiments, we have known since the end of the 1960s that hadrons (like protons and neutrons) have constituents called quarks. And yet we are unable to break a hadron into its constituents and get quarks in a free state. We say that quarks are confined within the hadron. Why? We have many different ideas, but so far we are never 100% sure that a given idea is correct. The theory of quarks and gluons (which mediate the interaction between quarks) has been known since the beginning of the 1970s – it is called quantum chromodynamics. Somehow, we are unable to take it and prove by hand that quarks get confined at large distances, although computer lattice simulations do seem to imply confinement.

What is the physics behind confinement of quarks? Is it dual Meissner effect? Is it specific behavior of instanton liquid? Is it something more exotic? We just don’t know – any single idea seems to have its own advantages and serious drawbacks.

Note that the confinement of quarks is closely related to the problem of the existence of a mass gap in Yang-Mills theories – another problem that the Clay Mathematics Institute is willing to pay 1 million USD for.

5. String theory. M-theory. Dualities.


String theory and its 11-dimensional generalization, M-theory, seem to be the ultimate Theory of Everything – very promising and very powerful. Indeed, the spectrum of strings contains the graviton (so the low-energy effective action of string theory is ultimately the Einstein-Hilbert action plus matter fields, which describes the large-scale structure of the Universe observable to us). Compactifications of heterotic string theories also seem to contain all the matter fields that we observe in Nature, describing the electroweak and strong interactions.

Although the amount of energy and time invested in the development of string theory by the physics community is enormous, many questions have remained unanswered for decades. How do we reconcile string theory with the accelerated expansion of the Universe that we currently observe? How do we technically analyze the perturbation theory of superstrings (beyond, say, four loops)? How do we deal with string theory on curved or time-dependent backgrounds?

The latter question is especially important. The reason is that Yang-Mills theories, and quantum chromodynamics in particular, behave as string theories in the regime of strong coupling (which is of special interest if we want to solve problem 4 above). However, these string theories should be defined on curved backgrounds – like the Anti-de Sitter background, for example (Anti-de Sitter space is the spacetime of constant negative curvature). We understand somewhat (qualitatively) how to deal with such string theories, but a full, detailed, technical understanding is yet to be achieved. And it may well take 20 or 30 or 100 years to achieve it.

6. Black holes, information loss paradox


If the amount of matter within a given 3-volume becomes large enough, the gravitational field it generates gets so strong (or, to put it better, the spacetime gets curved so much) that rays of light can no longer leave that 3-volume. We say that a black hole has formed. It swallows all objects trapped by its gravitational field, and nothing that falls in can escape. With the exception of Hawking radiation.

As Hawking showed long ago, since the gravitational field is very strong, it can produce pairs of particles and antiparticles – electrons and positrons, for example. If the positron of a produced pair falls inside the black hole, the electron can acquire sufficient energy to escape the black hole’s gravitational field, and we can later detect it. There is therefore a constant outgoing flux of particles emitted by the black hole – it evaporates.

Now, the tricky thing is that the spectrum of Hawking radiation is thermal – i.e., it is characterized by just a single number, its temperature. What if we drop many different objects into a black hole and let it completely evaporate into Hawking radiation? Will the information about the objects that fell in ultimately be lost? After 30 years of study we honestly still have no answer, although lots of ideas are on the market.

The question of information loss is actually a very important one, because all the quantum field theories (and string theory!) we deal with are unitary; that is, they preserve information.
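To make the “single number” concrete: the standard Hawking temperature formula is T = ħc³/(8πGMk_B). A quick sketch for a solar-mass black hole, using textbook constants (assumed here, not given in the post):

```python
import math

# Hawking temperature T = hbar c^3 / (8 pi G M k_B)
hbar = 1.055e-34   # J s
c = 2.998e8        # m/s
G = 6.674e-11      # m^3 kg^-1 s^-2
k_B = 1.381e-23    # J/K
M_sun = 1.989e30   # kg

def hawking_temperature(mass_kg):
    """Temperature of a Schwarzschild black hole of the given mass."""
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

T_sun = hawking_temperature(M_sun)
print(f"T ~ {T_sun:.1e} K")  # ~6e-8 K, far colder than the CMB
```

Note the temperature is inversely proportional to the mass: a black hole gets hotter as it evaporates.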

7. Thermonuclear fusion


But let us get somewhat more practical. Once we construct a thermonuclear reactor, we will get tons of energy almost for free. Indeed, the reactions we are interested in,

D + T → ⁴He + n (17.6 MeV),

D + D → T + p (4.0 MeV),

D + D → ³He + n (3.3 MeV),

run with a huge excess of energy – frankly, the Sun shines due to this kind of excess and will keep shining for another 6 billion years or so. Although we have been trying to solve the problem of thermonuclear fusion in practice for more than 50 years, we have not yet been able to run a self-sustaining thermonuclear reaction, not to mention one with a net energy output.
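The “huge excess of energy” of the D + T reaction follows directly from the mass defect, E = Δm·c². A minimal check, using standard tabulated atomic masses (not taken from the post):

```python
# Energy release of D + T -> He-4 + n from the mass defect, E = dm * c^2.
U_TO_MEV = 931.494          # energy equivalent of one atomic mass unit, MeV

masses_u = {                # atomic masses in u (standard tabulated values)
    "D":   2.014102,
    "T":   3.016049,
    "He4": 4.002602,
    "n":   1.008665,
}

dm = (masses_u["D"] + masses_u["T"]) - (masses_u["He4"] + masses_u["n"])
energy_mev = dm * U_TO_MEV
print(f"D + T releases ~{energy_mev:.1f} MeV")  # ~17.6 MeV
```

Per unit mass this is millions of times the energy of any chemical reaction, which is the whole appeal of fusion power.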

Honestly, I don’t think that the problem of thermonuclear fusion can get into the list, since its theoretical aspects are fairly understood by now. But the promises are so incredibly high… And Steven Chu is the Head of the Department of Energy.

Anyway, there are tons of very complicated technical problems that prevent us from starting and running a thermonuclear reaction today – for one, the plasma is unstable. One can try to control it with magnetic fields, but we need really powerful (superconducting) magnets for that – more powerful than the ones that broke at the LHC several months ago. And this brings us to the next problem:

8. High temperature superconductivity


Maybe even room-temperature superconductivity – we once thought this was impossible, but we are no longer so sure after the discoveries of the 1980s (some cuprate superconductors have critical temperatures as high as 138 K). It seems, though, that high-temperature superconductors are not described by the canonical Bardeen-Cooper-Schrieffer (BCS) theory; that is, the mechanism of superconductivity is possibly not related to the interaction between electrons and phonons (phonons are quanta of vibrations of the crystal lattice). Another possibility is that the electron-phonon interaction in high-Tc superconductors is so strong that we lose analytic control over the theory.

In any event, if we understand the nature of high-Tc superconductivity in cuprate-perovskite ceramics, room-temperature superconductivity may also become possible, and cheap electricity and flying cars (as well as other flying junk) will enter our everyday life.

9. Dark matter


Almost 26% of the energy content of the Universe is in matter that does not emit light. We can feel it gravitationally – we see how it binds stars to galaxies and how it affects the expansion of the Universe – but we are not able to detect it directly, which is quite frustrating. Does dark matter consist of some heavy particles (observations seem to suggest so) that were born in the early Universe and do not interact at all with the matter we consist of, except through gravity? Does dark matter consist of supersymmetric partners of the Standard Model fields? Is it an inflaton condensate? We don’t know.

10. Ultra high energy cosmic rays


Ultra-high-energy cosmic rays (UHECRs) are cosmic rays with energies higher than about 10^18 eV. They strike the atmosphere and produce spectacular showers of daughter particles, and that is how we detect them (about a hundred events per year or so).

The problem is that the source of UHECRs is unknown. They cannot be of cosmological origin (that is, created in the early Universe when energies were high and then travelling to us over very large distances) – the Greisen-Zatsepin-Kuzmin (GZK) cutoff forbids that. (The physics behind the GZK cutoff is basically that ultra-high-energy cosmic rays flying over cosmological distances should scatter on quanta of the cosmic microwave background radiation and inevitably lose energy.) On the other hand, if they are not of cosmological origin, then following their trajectories we don’t quite see what their source on the sky could be. Recently, the Pierre Auger collaboration has claimed that active galactic nuclei may be the source (see the picture above), but the community is not yet quite sure of that.
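To get a feel for how extreme these energies are, a popular back-of-the-envelope comparison (the baseball mass and the 10^20 eV benchmark are illustrative choices, not figures from the post) converts a single UHECR proton’s energy into macroscopic terms:

```python
import math

# A single cosmic-ray proton at 1e20 eV carries macroscopic energy.
EV_TO_JOULES = 1.602e-19
energy_joules = 1e20 * EV_TO_JOULES            # ~16 J

# Speed a 145 g baseball would need to carry the same kinetic energy:
ball_mass_kg = 0.145
ball_speed = math.sqrt(2 * energy_joules / ball_mass_kg)  # ~15 m/s
print(f"{energy_joules:.1f} J ~ a baseball thrown at {ball_speed:.0f} m/s")
```

All of that energy is packed into one subatomic particle, which is why these events produce such spectacular air showers.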

Bonus problem.* Metallic hydrogen and exotic matter

Source: University of Rochester

Okay, maybe some problems (like 7 or 10) do not quite deserve to be included in the list, so I have to compensate somehow. What you see in the picture above is an artist’s impression of an ocean on Jupiter. The ocean is of metallic hydrogen.

As you may remember, Jupiter is a gas giant (basically, the statement is that a planet cannot become too large unless it mostly consists of gas) – as are Saturn, Uranus and Neptune. However, near the core the planet should be solid, and the solid core is expected to be covered by liquid metallic hydrogen. It is metallic because under very strong pressure any material loses hold of its electrons and becomes a metal. It is liquid since the proton (that is, a hydrogen atom with its electron ripped off) is much lighter than ⁴He, and the latter remains liquid even at very low temperatures. Moreover, we expect that metallic hydrogen is superconducting, and its critical temperature may be rather high – 100-200 K (so the bonus problem is somewhat related to problem 8).

There are many technical problems related to the production of metallic hydrogen – the main one is, of course, pressure: we need it to be as high as hundreds of GPa. It seems that some progress has recently been achieved simultaneously by several groups (LNBL, for one). In particular, it seems to have been shown that metallic hydrogen is superconducting.

Instead of conclusion

I seriously doubt that science (and theoretical physics in particular!) is near completion, as some people like to suggest. I believe that even this extremely short list of 11 problems can keep us busy for decades (if not centuries, if we take problems 1, 5 and 6 really seriously). I am also sure my list is terribly incomplete, so please feel very welcome to rearrange the order of the problems as you like, or to point out other problems that I missed due to my ignorance.

The post Top Ten Open Problems in Physics appeared first on None Equilibrium.

by Daniel Kohler at December 11, 2018 09:59 AM

December 10, 2018

Dmitry Podolsky - NEQNET: Non-equilibrium Phenomena

Building Skyward: How Physics Made Possible the Vertical Rise of Modern Cities.

There is a new measure of the advancement and modernity of cities around the world. That measure does not belong to the group of economic or financial yardsticks. That degree of modernization is now seen on the skyline, in the upward trend of building the spaces that people dwell in. The expansion of towns and cities across the world has turned vertical in recent years. The era of skyscrapers has arrived: buildings that literally touch the clouds, tower over all other man-made structures and cast shadows several kilometers long.

The vertical growth might be due to innovation or to heavy competition between countries. It might be because of land scarcity and the skyrocketing prices of real estate. But one trend has emerged – the most advanced cities are building high instead of wide.


Skyscrapers have drawn in tourism, commerce, business and trade. The skyscraper has become the icon of a city. It has brought fame, functionality and profits. It has become the quintessential mark of an advanced and modern world.

These man-made wonders do not simply emerge from the ground. They took the most talented engineers, architects, contractors and businessmen. They took hard work, perseverance and will. They took countless working hours. They took blood, sweat and tears.

However, the most important element in the mix is science – engineering physics, to be exact.

Engineering Physics is the discipline devoted to creating and optimizing engineering solutions through enhanced understanding and integrated application of mathematical, scientific, statistical, and engineering principles. It is a branch of applied physics that bridges the gap between theoretical science and practical engineering with emphasis in research and development, design, and analysis.

In building skyward, it takes the expertise on materials science, applied mechanics, mechanical physics, electrical engineering, aerodynamics, energy and solid state physics.

We’ll look into two main innovations in applied physics that allowed buildings to grow from a few storeys to the height of the modern skyscraper.

Innovation 1: Elevators


Before buildings could go taller, engineers had to solve how to bring people higher. The first dilemma that struck physicists and engineers was the challenge of lifting people to higher floors efficiently and safely. Engineers knew that elevators existed, but they could not guarantee the safety of passengers riding one. They found inspiration in the demonstration of Elisha Graves Otis in 1854 at the World’s Fair in New York, where he rode a cab connected to an elevator rope secured with a powerful wagon spring mounted on top of the cab. The spring connects to a set of metal prongs on each side of the elevator, and the prongs run along guide rails fitted with a row of teeth. When the rope breaks, the spring relaxes and forces the metal prongs into the teeth, locking the cab in place.

The solution used physics as a way to counter the natural force of physics – gravity.

An elevator is subject to three kinds of force: gravity acting on the car and its passengers, the normal forces between the passengers and the floor, and the upward tension in the cable holding the elevator.

This application traces its roots to Newton’s laws. When an elevator is stationary (or moving at constant speed), its acceleration is 0 and a scale inside reads the passenger’s normal weight. When the elevator accelerates upward, the passengers press harder on the scale, which adds force and increases the apparent weight. When the elevator accelerates downward, the acceleration is negative, subtracting force from the scale and decreasing the apparent weight.

This understanding of Newton’s laws has allowed engineers to calculate safe operating parameters for elevators carrying passengers up and down the tallest structures on earth.
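The apparent-weight argument above reduces to one line of Newton’s second law, N − mg = ma, so N = m(g + a). A minimal sketch (the 70 kg passenger and ±1.5 m/s² accelerations are illustrative values):

```python
G_ACCEL = 9.81  # gravitational acceleration, m/s^2

def apparent_weight(mass_kg, accel_up_m_s2):
    """Scale reading N = m (g + a), with a > 0 for upward acceleration."""
    return mass_kg * (G_ACCEL + accel_up_m_s2)

print(apparent_weight(70, 0.0))   # stationary: normal weight, ~687 N
print(apparent_weight(70, 1.5))   # accelerating upward: feels heavier
print(apparent_weight(70, -1.5))  # accelerating downward: feels lighter
```

The same relation bounds how quickly a cab may accelerate before the ride becomes uncomfortable for passengers.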

Innovation 2: Aerodynamics

As engineers wanted to build taller buildings, they faced another problem – the wind. At a glance, wind hitting a concrete building with reinforced steel frames seems a normal occurrence. However, as buildings go higher, the forces of the wind get dangerously wilder. Simply building a stronger, more durable building is not a practical way to counter them. So engineers turned to aerodynamics.

Aerodynamics is the study of the properties of moving air, and especially of the interaction between the air and solid bodies moving through it. Engineers have to marry the design of the building to the natural forces of the wind.

This innovation is best exemplified by the Burj Dubai tower (now known as the Burj Khalifa), currently the tallest man-made structure in the world. It stands at 2,717 feet and houses 30,000 residences spread out over 19 residential towers, an artificial lake, nine hotels and a shopping mall. The engineers who designed and built this mammoth structure took aerodynamics into unprecedented territory.


Instead of making the building sleeker, thinner and smoother so that the wind could easily glide past it, they designed its angles to “break down” the wind that smashes into it at 2,000 feet in the air. The building sheds the forces of the wind by deflecting them and disrupting the powerful vortices. It has separate stalks, which top out unevenly around the central spire. The unusual, odd-looking design deflects the wind around the structure and prevents it from forming organized whirlpools of air current, or vortices.

One of the wind-engineering specialists for the skyscraper, Jason Garber, said: “the amount of motion you’d expect is on the order of 1/200 to 1/500 times its height.” For the Burj Khalifa, this translates into about two to four meters. “It’s not much, but certainly enough to make residents queasy if they can sense this motion. That’s why one of the chief concerns of architects and engineers is acceleration, which can result in perceptible forces on the human body.”
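The quoted rule of thumb is easy to check against the tower’s stated height (the 2,717 ft figure comes from the article; the foot-to-metre conversion is standard):

```python
# Sway estimate: 1/500 to 1/200 of the building's height.
height_m = 2717 * 0.3048          # 2,717 ft -> ~828 m
low_sway = height_m / 500         # ~1.7 m
high_sway = height_m / 200        # ~4.1 m
print(f"height ~ {height_m:.0f} m, sway ~ {low_sway:.1f}-{high_sway:.1f} m")
```

The result, roughly 1.7 to 4.1 metres, matches the “about two to four meters” quoted in the text.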

These advanced techniques employed at the Burj Dubai relate back to the law of conservation of energy: energy may neither be created nor destroyed. Air has no force or power, except pressure, unless it is in motion; when it is moving, however, its force becomes apparent. Redirecting the wind makes it possible to break its force into small, manageable and safe pieces.

Most of us have been inside a tall building and have worked or lived in a high-rise structure. It is easy to miss the science behind the technology that brought us to these new heights. But as much as we are in awe of a skyscraper’s physical structure and functionality, we should be in awe of the basic principles that allow it to exist.


The post Building Skyward: How Physics Made Possible the Vertical Rise of Modern Cities. appeared first on None Equilibrium.

by Bertram Mortensen at December 10, 2018 03:34 AM

Dmitry Podolsky - NEQNET: Non-equilibrium Phenomena

In Space, After Chaos There is Birth: Mysterious Formation of Star Clusters

The stars that we see in a clear night sky are conceived through a pandemonium of atoms: light elements squeezed under enough pressure for their nuclei to undergo fusion. These little twinkling bits of light adorning the infinite night sky are beautiful to gaze at. Their beauty, however, is the result of mayhem – atomic behavior that is patternless and random. Small changes in the conditions of those atoms can bring a myriad of possibilities to life.

Stars are born from a balance of forces. Gravity squeezes atoms together so tightly that fusion ignites, producing an outward, expanding pressure. Once the outward and inward pressures become stable, the newborn star can live for millions – if not billions – of years.

This sounds as simple as lighting a match when read on paper. However, the upheaval involved in the fusion is beyond imagination: the biggest explosion known to human beings is just a tiny spark in the process of giving birth to a star.

Such awe-inspiring events have left physical cosmologists and astrophysicists craving more details. They have found some indicators of, and explanations for, the different outcomes of a star’s birth.

Star clusters have received renewed attention in the realm of physical cosmology and astrophysics, thanks to the technology and discoveries accumulated throughout the years.

The Understanding of How Star Clusters are Born

Source: NASA

Stars start from large clouds of molecular gas, and so they form in groups or clusters. The influence of varying levels of gravity makes possible the exchange of energy between the stars. Some are born as runaway stars that go astray and escape the tug of gravity. The rest fall into a bind and exist as a collection of stars orbiting one another for an indefinite period of time.

A single star is born from a giant, chilling cloud of molecular gas and dust, and may have dozens or even hundreds of stellar siblings in a cluster.

When a cluster is young, the brightest members are O, B and A stars. Young clusters in our Galaxy are called open clusters due to their loose appearance. They usually contain between 100 and 1,000 members.

Traditional models claim that the force of gravity may be solely responsible for the formation of stars and star clusters. More recent observations suggest that magnetic fields, turbulence, or both are also involved and may even dominate the creation process. But just what triggers the events that lead to the formation of star clusters?

Gravity has a Hand, But Not Most of the Time.

While gravity plays a very important role in keeping clusters together, scientists have recently discovered other triggers of star birth. Aside from gravity, magnetic fields and turbulence, it has been found that collisions among giant molecular clouds spark the formation of star clusters.

At the forefront of the discovery are the National Aeronautics and Space Administration (NASA) and the German Aerospace Center’s SOFIA, the Stratospheric Observatory for Infrared Astronomy. It is a modified Boeing 747SP aircraft that houses a 2.7-meter (106-inch) reflecting telescope (with an effective diameter of 2.5 meters, or 100 inches). Flying in the stratosphere at 38,000-45,000 feet allows SOFIA to escape 99 percent of Earth’s infrared-blocking atmosphere, letting astronomers study the solar system and beyond in ways that are not possible with ground-based telescopes.

Source: NASA

The astronomers leveraging SOFIA’s instruments have observed the amount of motion needed for the ionized carbon around a molecular cloud to form stars. They found two distinct components of molecular gas colliding with each other at the unimaginable speed of 20,000 miles per hour.

The distribution and velocity of the molecular and ionized gases are found to be consistent with model simulations of cloud collisions. Thus the cluster forms from a huge mass of gas that is compressed upon collision – creating a shock wave as the clouds hit one another.

Thomas Bisbas, a postdoctoral researcher at the University of Virginia, Charlottesville, Virginia, and the lead author on the paper describing these new results said that “Stars are powered by nuclear reactions that create new chemical elements.”

“The very existence of life on earth is the product of a star that exploded billions of years ago, but we still don’t know how these stars — including our own sun — form,” he added.

Universe’s Own Way of Giving Birth

The universe and all the mysteries that shroud its infinite length and width have its own unique ways of giving birth. Astronomers are delving deeply into their simulations and theories that would match their findings. In the case of SOFIA, the infrared-based observation on near-, mid- and far-infrared wavelengths has brought new ideas on how the universe can bring to life a star.

“These star formation models are difficult to assess observationally,” said Jonathan Tan, a professor at Chalmers University of Technology in Gothenburg, Sweden, and the University of Virginia, and a lead researcher on the paper. “We’re at a fascinating point in the project, where the data we are getting with SOFIA can really test the simulations.”

The next step for astronomers is to gather a larger amount of data and draw a scientific consensus on the mechanism responsible for driving the creation of star clusters.

The breakthrough sparked by SOFIA suggests that the universe, with the limitless chaos that erupts in it from time to time, has its own ways of giving birth. It was a spark in a huge explosion that – according to the Big Bang theory – gave birth to our universe.

Unravelling the actual, scientific cause of the birth of a star is a step towards determining the possibility of life – maybe even civilizations – thriving around one of those stars in a cluster.


The post In Space, After Chaos There is Birth: Mysterious Formation of Star Clusters appeared first on None Equilibrium.

by Daniel Kohler at December 10, 2018 03:24 AM

Dmitry Podolsky - NEQNET: Non-equilibrium Phenomena

To Study Small, You Must Build Big: Why are Scientists Building Big Colliders for the Smallest Collisions?

We were once taught – maybe back in grade school – that the smallest unit of matter is the atom. The word atom came from the Greek “atomos,” which means indivisible, and atoms were said to be made of protons, neutrons and electrons. An atom can be illustrated by drawing a spherical core enveloped by overlapping oval lines that serve as orbits of the electrons – the iconic diagram of the atom that has become the inspiration for many a scientific organization’s logo, and the fascination of the brightest minds in science.

Scientists have proven that atoms were just the starting point – just the beginning. The atom is another universe worth digging into.

While it is exciting to look deeper into atoms, dissecting one is not as easy as using a scalpel to cut open a frog in a high-school lab. It requires knowledge, effort and funding on a scale whose closest comparison is space exploration.

It is an effort in the grandest scale to test the predictions of different theories of particle physics, including measuring the properties of the Higgs boson.

Building Big for Particles


The mysteries stored inside the atom are being unearthed in a facility that is 27 kilometres long, straddles two countries and requires thousands of people just to operate.

Its size is hard to fathom. Let’s simply describe it as the largest machine on earth.

The Large Hadron Collider (LHC) is the most powerful particle accelerator ever built and the biggest machine on earth. The accelerator is situated in a tunnel 100 meters underground at CERN, the European Organization for Nuclear Research, on the Franco-Swiss border near Geneva, Switzerland.

Its purpose is to propel charged particles to enormous speeds and energies, then store them in beams. The beams are composed of charged particles moving at near the speed of light. Through this process, scientists can store high-energy particles that are useful for fundamental and applied research in the sciences.

The LHC consists of a 27-kilometre ring of superconducting magnets with a number of accelerating structures boosting the energy of the particles along the way.

It happens almost in a snap, in an invisible environment clad in tubes and magnets. Two high-energy particle beams travel at close to the speed of light before they are made to collide. The particles – protons, and also heavy ions such as lead – are guided around the accelerator ring by a strong magnetic field maintained by superconducting electromagnets. The electromagnets are made from coils of specialized electric cable kept in a superconducting state.

This allows electricity to be conducted efficiently, with little to no resistance or loss of energy.
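To put "close to the speed of light" into numbers, here is a minimal sketch. It assumes the illustrative figure of 6.5 TeV per proton beam (the LHC's Run-2 energy); the point is just how tiny the gap to light speed becomes.

```python
import math

# How close to c does an LHC proton get at 6.5 TeV?
PROTON_MASS_GEV = 0.938272   # proton rest energy in GeV
BEAM_ENERGY_GEV = 6500.0     # 6.5 TeV per beam (Run-2 figure, illustrative)
C = 299_792_458.0            # speed of light, m/s

gamma = BEAM_ENERGY_GEV / PROTON_MASS_GEV   # Lorentz factor, E = gamma * m * c^2
beta = math.sqrt(1.0 - 1.0 / gamma**2)      # v/c
deficit = C * (1.0 - beta)                  # how far below c, in m/s

print(f"gamma ≈ {gamma:.0f}")
print(f"v/c   ≈ {beta:.9f}")
print(f"slower than light by ≈ {deficit:.1f} m/s")
```

The proton ends up only a few metres per second slower than light itself.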

Placing ions on a collision course at such energies is by no means possible in a small tube. To probe the unexplored constituents of the atom, magnetic coils spanning kilometres are needed: the particles must gain enough momentum to 'break apart', or react, upon collision.

The machine's electricity demand is enormous, and the heat generated in the process is far above normal levels. That is why, at the LHC, the magnets are chilled to ‑271.3°C – a temperature colder than outer space – using liquid helium as the main cooling component.
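The "colder than outer space" claim is easy to check: converted to kelvin, the magnet temperature sits below the roughly 2.7 K of empty space (the cosmic microwave background).

```python
# Quick check on the "colder than outer space" claim.
def c_to_k(celsius):
    """Convert degrees Celsius to kelvin."""
    return celsius + 273.15

magnet_k = c_to_k(-271.3)   # LHC magnet operating temperature
cmb_k = 2.725               # temperature of empty space (cosmic microwave background)

print(f"magnets: {magnet_k:.2f} K, deep space: {cmb_k:.3f} K")
print(magnet_k < cmb_k)     # the magnets really are colder than space
```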

Building to Add Meaning to the Standard Model


The ultimate goal of this gigantic machine is to provide an avenue for scientists to contribute to the Standard Model of particle physics.

The Standard Model details how the building blocks of matter interact under three of the four fundamental forces (gravity is not included). All findings in particle physics have been summed up and boiled down to this model.

It describes the electromagnetic, weak and strong forces in the universe, and classifies the elementary particles.

It classifies and details the following elementary particles:

  • Quarks. Massive, half-integer-spin particles that combine to constitute all hadrons (baryons and mesons) – i.e., all particles that interact by means of the strong force, the force that binds the components of the nucleus. Physicists group the quarks into three pairs: up/down, charm/strange and top/bottom.
  • Leptons. Particles of half-integer spin that are not subject to the strong interaction. Two main classes of leptons exist: charged leptons and neutral leptons (neutrinos).
  • Bosons. Particles that exhibit zero or integer spin and follow the statistical description given by S. N. Bose and Einstein. They include fundamental particles such as photons, gluons, and the W and Z bosons.
  • Higgs boson. The elementary particle of the Standard Model produced by the quantum excitation of the Higgs field, popularly nicknamed the "God particle." Long a purely theoretical particle that no experiment had seen, it finally surfaced when, on July 4, 2012, the LHC experiments announced that they had found evidence of it. The associated Higgs field is believed to give elementary particles their mass.
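The grouping above can be sketched as a small data structure. This is just an illustrative catalogue – the particle names and spins are standard, but the dictionary layout itself is hypothetical:

```python
# Minimal catalogue of Standard Model particle classes, mirroring the
# grouping described above (an illustration, not a physics library).
standard_model = {
    "quarks": {"spin": 0.5, "members": ["up", "down", "charm", "strange", "top", "bottom"]},
    "leptons": {"spin": 0.5, "members": ["electron", "muon", "tau",
                                         "electron neutrino", "muon neutrino", "tau neutrino"]},
    "gauge bosons": {"spin": 1, "members": ["photon", "gluon", "W", "Z"]},
    "scalar bosons": {"spin": 0, "members": ["Higgs"]},
}

# The matter particles (half-integer spin) are the quarks and leptons.
fermions = [m for group in ("quarks", "leptons")
            for m in standard_model[group]["members"]]
print(len(fermions))   # 12 fermions: 6 quarks + 6 leptons
```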


Since the discovery of the Higgs boson, the questions have been piling up. What scientists know, for the moment, is that the particle exists and can be summoned using their gigantic magnetic coils and kilometres of tubes.

Building Even Bigger

The biggest questions require an even bigger collider. The discovery of the "God particle" has spurred more interest in the field of particle physics, but the best findings can only be produced with the most capable collider.

Circular Electron Positron Collider (image source: Vision Times)

CERN's counterpart in the Far East, China's Institute of High Energy Physics, announced a few weeks ago that the conceptual design report for its Circular Electron Positron Collider (CEPC) has been released.

The two-volume report, containing the sophisticated technical details, shows that it will be built to exceed the capabilities of the LHC. According to the report, a ten-year run would yield one million Higgs bosons, one hundred million W bosons, and close to one trillion Z bosons. Billions of bottom quarks, charm quarks and tau leptons would also be produced in the decays of the Z bosons.
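A quick back-of-envelope sketch of what those yields imply as average production rates. The totals and the ten-year figure are taken from the report as quoted; the per-second rates are my own arithmetic:

```python
# Average production rates implied by the CEPC design-report yields.
SECONDS_PER_YEAR = 3.156e7
RUN_YEARS = 10

yields = {"Higgs": 1e6, "W": 1e8, "Z": 1e12}   # totals over the ten-year run

rates = {p: n / (RUN_YEARS * SECONDS_PER_YEAR) for p, n in yields.items()}
for particle, rate in rates.items():
    print(f"{particle}: ~{rate:.2g} per second on average")
```

The trillion Z bosons, for instance, work out to a few thousand Z bosons every second, around the clock, for a decade.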


The post To Study Small, You Must Build Big: Why are Scientists Building Big Colliders for the Smallest Collisions? appeared first on None Equilibrium.

by Frieda Nielsen at December 10, 2018 03:13 AM

Dmitry Podolsky - NEQNET: Non-equilibrium Phenomena

A Nonequilibrium Energy and Environment Special Part 2: The Key to Green Energy is Putting it into People’s Smartphones.

"This is the second of a two-part series detailing the possibility of small-scale energy production helping to curb the use of fossil fuels and, ultimately, global warming. Explored here are two promising technologies that – even in their infancy stage – can make everyone a contributor to the goal of reducing our carbon dioxide footprint."

The rise of smartphones has disrupted the lives of billions of people. Just as its functions can be summoned at the touch of a button, it has crept into the human system like the flick of a switch. Few can imagine life without the bright, wide and intuitive screens of smartphones. Fewer can live without one.

Smartphones haven't been around for long, but they have risen to become a basic necessity for people around the world, and it didn't take long for them to dominate the telecommunications sector.

The innovation and technology being poured into improving the smartphone have satiated the appetite of the tech-savvy generation. The rise of the smartphone is inevitable and nobody is seemingly stopping it. Well, why would anyone?

It has brought functionality, convenience and a whole lot of cool stuff right into the palm of our hands. It has made communicating through a device easier and more accessible.

However, scientists have sounded the alarm about the negative impact of smartphones to our planet.

A study in the Journal of Cleaner Production by researchers at McMaster University, which analyzed the carbon impact of the entire Information and Communication Technology (ICT) industry, found that smartphones are slowly killing the planet. The blame is pinned on the smartphone's life cycle: according to the study, a smartphone is more or less disposable. An average smartphone is used for two years, then disposed of or replaced with a newer model.

Smartphone use, like energy consumption, is not harmful by itself. Only a small carbon footprint is hooked to the actual use of a smartphone, since its main energy requirement is charging its battery.

The harm is rooted in the manufacturing of smartphones, and it is amplified by their short life cycle. In 2017 alone, 175,780 smartphones were sold every hour. More than half of the 1.54 billion smartphones sold last year will be disposed of or replaced by a newer model; by 2020, 90% of these phones will have ended their life cycle.
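As a sanity check, the hourly figure quoted above is consistent with the annual total:

```python
# Sanity check: 175,780 smartphones sold per hour over a full year.
PER_HOUR = 175_780
HOURS_PER_YEAR = 24 * 365

annual = PER_HOUR * HOURS_PER_YEAR
print(f"{annual:,}")             # 1,539,832,800
print(round(annual / 1e9, 2))    # 1.54 -> matches "1.54 billion"
```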


The worrisome figure is that 85% to 95% of a device's carbon emissions come from the mined minerals that make up a smartphone. The big players here are the lithium-ion battery, the glass, and the semiconductors used inside its system.

As smartphone design trends toward bigger, brighter and better screens, the carbon footprint of producing these devices also shoots up. A testament to this is Apple's published report admitting that building an iPhone 7 Plus creates roughly 10% more CO2 than the iPhone 6s, while a standard iPhone 7 creates roughly 10% less than a 6s.

Manufacturing is just the first prong of the smartphone's two-pronged environmental impact. The second, more insidious impact comes from servers and data centres. These huge facilities are responsible for 45% of the ICT sector's carbon emissions, because every message, every Facebook like, every Instagram post and every tweet requires a server to execute the commands – and these servers, this cloud, are not cheap to operate. Nor can their carbon emissions be neglected.

Do we take a step back and just stop?

If your answer is yes, you may bid farewell to instant messages, you may say goodbye to Facebook likes, you may stop tweeting and you may halt taking selfies.

Sounds difficult and unfair, right? Yes, we agree that humans will not stop using smartphones. But what can we do? What can you do?

What we can do has been done before; in fact, it is the reason for the immense popularity of smartphones: designers can innovate on the build and functionality of the smartphone we know today.

As avid students, researchers and followers of science, especially physics, the environmental team of Nonequilibrium has been trying to find the needle in the haystack and to think outside the box.

With almost every imaginable function already squeezed into a smartphone's rectangular frame, we will look into one additional function that can make it not just smarter, but greener.

Harvesting Energy from Glass Panels (image source: MSU Today)

Much of a smartphone is made of glass: the capacitive touchscreen, the camera lens and, on premium models, the back panel. It also comes in different varieties, such as Corning's Gorilla Glass.

A few bright minds not even associated with the smartphone industry have produced a breakthrough in glass: a glass that can harvest energy from the sun.

Researchers from Michigan State University (MSU) have floated the use of transparent solar panels: thin sheets of see-through glass that offer a solar-harvesting system based on organic molecules developed by Richard Lunt, the Johansen Crosby Endowed Associate Professor of Chemical Engineering and Materials Science at MSU, and his team, to absorb invisible wavelengths of sunlight. The team claims the glass can be tuned to absorb ultraviolet and near-infrared wavelengths and convert this energy into electricity.

The glass they have been testing fits the requirements of a smartphone well because, according to the researchers, it would be made of a transparent luminescent solar concentrator that could generate solar energy on any clear surface without affecting the view.

Once this new breed of solar panel has been refined, it could replace the glass used in a smartphone, and manufacturers could plug in a chipset to collect the energy from the glass.

This, in theory, is more convenient than the new buzz of wireless charging. If tech giants like Apple, Samsung, Huawei and LG add their fine touch, we could be seeing a smartphone that does not need any charger – a smartphone that is energy self-sufficient.

"We analyzed their potential and show that by harvesting only invisible light, these devices can provide a similar electricity-generation potential as rooftop solar while providing additional functionality to enhance the efficiency of buildings, automobiles and mobile electronics," Lunt wrote in the research.



Humans are great at adapting to new technologies and trends. We are great at clinging to our basic necessities and finding ways to acquire them. Clothing, as discussed in the first part of this report, is a basic necessity for a human being to survive – and be decent. The smartphone, as discussed here, has become a necessity in our technologically advancing world.

When we cannot live without these materials, incorporating our green-energy goals into them is a win for both sides: we continue to enjoy what we enjoy, while each and every one of us contributes to curbing our use of fossil fuels.

These two pieces of research, which have caught the attention of our writers and editors, are possible and actionable ways to lessen our carbon footprint – so we may continue our existence on this planet without compromising what we love.

The post A Nonequilibrium Energy and Environment Special Part 2: The Key to Green Energy is Putting it into People’s Smartphones. appeared first on None Equilibrium.

by Daniel Kohler at December 10, 2018 02:55 AM

December 09, 2018

Lubos Motl - string vacua and pheno

Interviews with Susskind and Veneziano
Two recent interviews with founders of string theory are out there. A recent CERN Courier published Matthew Chalmers' interview with the founder number zero, Gabriele Veneziano:
The roots and fruits of string theory
Veneziano was a really young boy in 1968, as a photograph demonstrates, and he – and others – were systematically studying the strong force and its apparent and hypothesized properties such as the DHS (world sheet) duality. Veneziano happened to be the guy who wrote down an amplitude, the Euler Beta function, that obeyed this cool property of the S-matrix, the DHS duality.
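As a sketch of what that "cool property" means concretely: with a linear Regge trajectory α(x) = α(0) + α′x, the Veneziano amplitude A(s,t) = B(−α(s), −α(t)) is manifestly symmetric under s ↔ t, because the Euler Beta function is symmetric in its arguments – which is the crossing symmetry at the heart of the DHS duality. The trajectory parameters below are illustrative, not fitted to data:

```python
import math

def euler_beta(a, b):
    """Euler Beta function B(a,b) = Gamma(a) Gamma(b) / Gamma(a+b)."""
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

def alpha(x, a0=0.5, slope=0.9):
    """Linear Regge trajectory (illustrative parameters)."""
    return a0 + slope * x

def veneziano(s, t):
    """Veneziano amplitude A(s,t) = B(-alpha(s), -alpha(t))."""
    return euler_beta(-alpha(s), -alpha(t))

# DHS (s-t crossing) symmetry: the amplitude is unchanged under s <-> t.
s, t = -1.3, -2.7      # sample points away from the poles of the Gamma functions
print(math.isclose(veneziano(s, t), veneziano(t, s)))   # True
```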

In Veneziano's presentation, it was all very systematic. The strong force was seen, its precise theory wasn't known, but some properties seemed experimentally known as well and researchers employed some reverse engineering. From that proper historical perspective, it isn't even true that string theory was discovered by an accident. They systematically looked at many properties of the strong force and the existence of the string-like fluxtubes is simply a fact. That's why they investigated a possible string-like description of the nuclear effects, amplitudes with lots of resonances, and why they had to try the amplitudes and Lagrangians describing the string itself.

In that enterprise, the Veneziano amplitude had to be found at some moment. At a later moment, the connection to the string had to be found. And when one analyzes perturbative string theory, one cannot avoid gravity and general relativity, of course.

In the interview, Veneziano also mentions his pre-big-bang string cosmology – which is far from the stringy picture of cosmology as believed by a majority of string theorists. He says that young people should study more ambitious questions and the huge hierarchy between the short Planck scale and the long Hubble scale is the greatest challenge.

Veneziano also reiterates the point that all the claims that string theory is untestable or unfalsifiable are just completely wrong. The difference between the 1960s and the present is that we don't have any major experimental anomaly, a crisis that requires some explanation.

Meanwhile, two days ago, Craig Cannon interviewed Leonard Susskind:

The transcript of this 66-minute interview is available at the Ycombinator server.

Leonard Susskind – an ex-plumber who noticed, along with Nielsen and perhaps Nambu, that strings could produce the Veneziano amplitude – starts by saying that he is not a contrarian at all and that claims to the contrary are a misconception.

I find these claims by Susskind about his being the "most mainstream physicist in the world" rather implausible. His very work on the Veneziano-like business placed him somewhat out of the mainstream in the early 1970s. But I think that Susskind's contribution was less systematic than e.g. Veneziano's at that time. It turned out to be correct and important – strings were behind formulae with these nice properties.

And indeed, string theory has grown to a super-mainstream framework, although it's questionable whether we should still say it is true in the environment flooded with assorted postmodern crackpots and non-physicists pretending to be physicists. But string theory's later mainstream status says very little about Susskind's alleged contrarian traits. Those should be judged according to the apparent "normality" and reception of the ideas at the very moment when they were presented, not in the subsequent 50 years, right? The interviewer understands this point well, Susskind doesn't. The holographic principle by Susskind and 't Hooft was also far from mainstream when published (and in fact, 't Hooft's thinking about the holographic principle remained extremely out-of-mainstream if not away-from-experts even by now, 2018). It did gain power in the next two decades but because of the chronology, that outcome cannot be used to quantify Susskind's contrarian traits.

Susskind has done lots of things that are quickly seen to be right and that work – that could be said to be mainstream – after a shorter or longer delay. But you know, he has also written an extremely above-average number of speculative papers and even an unusually high number of super-speculative papers that are not just wrong but stupidly so. Hundreds of TRF blog posts contain Susskind's name – it's not shocking given his status as a physics titan, and my natural top 5 role model at some important times.

But those texts may remind you of some weird and weirder papers by Susskind. Just to be sure, the BFSS Matrix Model is a mainstream thing – but because it doesn't encourage people to reuse their 4D QFT toolkit all the time, it's unfortunately less mainstream than the AdS/CFT. ER-EPR is also mainstream, a new duality relevant for quantum gravity, but I grew almost certain that Juan Maldacena gave it all the things that work and Lenny Susskind mainly gave it the tendency to expand in provocative directions. Maybe such an inspiring idea was needed at the beginning as well but Juan was essential to turn it into a paper that makes sense (almost the same is true in the case of the holographic principle).

Some subsequent ER-EPR-related papers by Susskind alone are inequivalent to the propositions of the first ER-EPR correspondence paper, and they promote crazy concepts such as the anti-quantum zeal, AQZs' nonlocality claims, classical simulations instead of the genuine quantum mechanical theory, and wrong things that are in no way related to the ER-EPR correspondence itself because they're just efforts to abuse the ER-EPR context to promote some standard misconceptions about quantum mechanics.

Similarly crazy was a Bousso-Susskind "hypermultiverse" paper promoting the many worlds interpretation of quantum mechanics along with some parallel worlds in cosmology and – insanely – claiming that those were the same thing. On its title page, that paper had famous enough authors who had been known for some very serious work but this classification notwithstanding, it was a full-blown crackpot paper.

Most recently, Susskind is being associated with the complexity in cosmology and quantum gravity – click to read the newest Quanta Magazine article about it. That's provocative and numerous people have joined this fad, numerous papers have acquired 100-200 citations but the Quanta Magazine rightfully quotes Aron Wall (Stanford, moving to Cambridge) who says that it's very speculative and may be wrong. I am convinced that most physicists from such elite productive corners would agree with Aron Wall. Complexity surely isn't mainstream within theoretical high-energy physics.

A spatial volume found inside the black hole is claimed to be "equal" to the computational complexity and those people seem excited because it's a more "algorithmic" cousin of the second law of thermodynamics – some complexity "grows" which they believe to be "good". But what complexity? In which programming language? What is the numerical coefficient? Which precise surface? And why should the law take one form or another? Susskind et al. don't even seem to ask these questions – especially the "why" questions – which is what I find very annoying, and I am convinced that my view is the majority view among the professionals.

If there is a link between spatial volumes inside black holes (or even cosmology) and some computational complexity, it must be a "derived law" that is waiting for a clarification or a proof. They don't even seem to accept this statement and deal with the "computational complexity" as if it were an elementary concept in theoretical physics – which it simply isn't. And people in this complexity realm also add lots of things that look like pure hype – e.g. that the expansion of the Universe is due to some growing complexity. Well, given the identification of complexity and volumes, this statement would be self-evident. What is unclear is any added value, any evidence for that statement, or any useful implications. It sounds like pure hype that just "sounds" interesting.

Susskind continued to be extremely productive, even in very recent years. But much of this production is an industry of provocations. Especially in these recent papers where he combines many worlds, simulations of quantum mechanics, computational complexity, and lots of other things, it seems clear that his "comparative advantage" is that without too much evidence, he is not afraid of combining lots of somewhat random things that other physicists wouldn't combine so casually. And Susskind also says lots of big things that seem completely unbacked and that even seem easy to disprove, e.g. that "gravity and quantum mechanics are the same thing and make each other unavoidable." For this reason, I think that Susskind is a classic example of a contrarian – who enjoys saying provocative things for the provocation's sake. That doesn't mean that his papers end up being as overwhelmingly wrong as e.g. Lee Smolin's. But Lee Smolin isn't a contrarian, he is just a hopeless crank. That's a different level.

An incredibly strong extra piece of evidence to debunk Susskind's claims that "he is not a contrarian" is his bizarre monologue about the validity of string theory. He says that it is a consistent theory of quantum mechanics and gravity and a valuable playground to test ideas. But it doesn't describe "particles" or "the real world" – string theory needs to be modified or deformed to describe the real Universe. WTF? Would you please provide us with a paper to support your extraordinary statements? How can string theory fail to describe "particles"? They're there – e.g. perturbatively, they are the eigenstates of vibrating string, right?

A very important, more or less established property of string theory is that it cannot be deformed. Every truly "mainstream" string theorist who has gone through the material that is believed to be standard and important knows it (it's included in the first chapters of some rudimentary textbooks) – and can reconstruct a partial proof of a sort to show that the non-deformability isn't just some faith. Perhaps one may ignore some knowledge or lore – but that's surely only how contrarians approach it. The statement that the real world theory should be just "inspired by the mathematically precise" theory known as string theory, but that theory needs to be "deformed", seems like a crazy statement not backed by any persuasive scientific work. Susskind starts by describing that statement as "his guess" but a few sentences later, he even says that "we know it". Sorry, we surely don't.

The interview continues with an enjoyable but mixed bag of comments about GUT, cosmological constant, holography, simulations, Feynman (I agree with Susskind that Feynman was actually a deep philosopher and a deep mathematician – he just hated the formal crap and unnecessary complexity in the presentation that is spread in both of these disciplines), the bomb (Susskind says that physicists should naturally try to discover everything and politicians – someone else – should regulate them if there are immoral or dangerous implications), quantum computers (Susskind says that they may be built but have no applications – which just shows that he isn't an expert), teaching (to teach is a great way to learn; Susskind believes that if he could go back in time and teach something to his father, the father would stop being a crackpot – because of some experience, I greatly doubt it, I even greatly doubt that Lenny would be listened to at all), and many other things. It's all very interesting but it's also a mixture of scientific results, scientific beliefs that are supported by some good reasons, beliefs that aren't supported by good reasons, and some plain incorrect assertions.

Susskind is and has been a brilliant and prolific fountain of ideas but physics has absolutely depended on the existence of other people who can deal with the "half-baked stuff" coming out of Susskind's mind and turn it into physical insights that make sense.

by Luboš Motl at December 09, 2018 08:07 AM

December 07, 2018

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

The last day of term

There is always a great sense of satisfaction on the last day of the teaching semester. That great moment on a Friday afternoon when the last lecture is over, the last presentation is marked, and the term’s teaching materials can be transferred from briefcase to office shelf. I’m always tempted to jump in the car, and drive around the college carpark beeping madly. Of course there is the small matter of marking, from practicals, assignments and assessments to the end-of-semester exams, but that’s a very different activity!


The last day of term at WIT

For me, the semesterisation of teaching is one of the best aspects of life as an academic. I suppose it’s the sense of closure, of things finished – so different from research, where one paper just leads to another in a never-ending cycle. There never seems to be a good moment for a pause in the world of research, just a ton of papers I would like to write if I had the time.

In recent years, I’ve started doing a brief tally of research outputs at the end of each semester. Today, the tally is 1 book chapter, 1 journal article, 2 conference presentations and 1 magazine article (plus 2 newspaper columns). All seems ok until I remember that most of this material was in fact written over the summer. On reflection, the semester’s ‘research’ consisted of carrying out corrections to the articles above and preparing slides for conferences.

The reason for this is quite simple – teaching. On top of my usual lecturing duties, I had to prepare and deliver a module in 4th-year particle physics this term. It was a very interesting experience and I learnt a lot, but preparing the module took up almost every spare moment of my time, nuking any chances of doing any meaningful research during the teaching term. And now I hear that I will be involved in the delivery of yet another new module next semester, oh joy.

This has long been my problem with the Institutes of Technology. With contact hours set at a minimum of 16 hours/week, there is simply far too much teaching (a situation that harks back to a time when lecturers taught to Diploma level only). While the high-ups in education in our capital city make noises about the importance of research and research-led teaching, they refuse to countenance any change in this for research-active staff in the IoTs. If anything, one has the distinct impression everyone would much rather we didn’t bother.  I don’t expect this situation to change anytime soon  – in all the talk about technological universities, I have yet to hear a single mention of new lecturer contracts.


by cormac at December 07, 2018 06:09 PM

Clifford V. Johnson - Asymptotia

Physics Plans

After a conversation over coffee with one of the event planners over at the Natural History Museum, I had an idea and wandered over to talk to Angella Johnson (no relation), our head of demo labs. Within seconds we were looking at some possible props I might use in an event at the NHM in February. Will tell you more about it later!

-cvj

The post Physics Plans appeared first on Asymptotia.

by Clifford at December 07, 2018 05:01 AM

John Baez - Azimuth

Second Symposium on Compositional Structures

I’ve been asleep at the switch; this announcement is probably too late for anyone outside the UK. But still, it’s great to see how applied category theory is taking off! And this conference is part of a series, so if you miss this one you can still go to the next.

Second Symposium on Compositional Structures (SYCO2), 17-18 December 2018, University of Strathclyde, Glasgow.

Accepted presentations


Please register asap so that catering can be arranged. Late registrants
might go hungry.

Invited speakers

• Corina Cirstea, University of Southampton – Quantitative Coalgebras for
Optimal Synthesis

• Martha Lewis, University of Amsterdam – Compositionality in Semantic Spaces


The Symposium on Compositional Structures (SYCO) is an interdisciplinary series of meetings aiming to support the growing community of researchers interested in the phenomenon of compositionality, from both applied and abstract perspectives, and in particular where category theory serves as a unifying common language. The first SYCO was held at the School of Computer Science, University of Birmingham, 20-21 September, 2018, attracting 70 participants.

We welcome submissions from researchers across computer science, mathematics, physics, philosophy, and beyond, with the aim of fostering friendly discussion, disseminating new ideas, and spreading knowledge between fields. Submission is encouraged for both mature research and work in progress, and by both established academics and junior researchers, including students.

Submission is easy, with no format requirements or page restrictions. The meeting does not have proceedings, so work can be submitted even if it has been submitted or published elsewhere.

While no list of topics could be exhaustive, SYCO welcomes submissions with a compositional focus related to any of the following areas, in particular from the perspective of category theory:

• logical methods in computer science, including classical and quantum programming, type theory, concurrency, natural language processing and machine learning;

• graphical calculi, including string diagrams, Petri nets and reaction networks;

• languages and frameworks, including process algebras, proof nets, type theory and game semantics;

• abstract algebra and pure category theory, including monoidal category
theory, higher category theory, operads, polygraphs, and relationships to homotopy theory;

• quantum algebra, including quantum computation and representation theory;

• tools and techniques, including rewriting, formal proofs and proof assistants, and game theory;

• industrial applications, including case studies and real-world problems.

This new series aims to bring together the communities behind many previous successful events which have taken place over the last decade, including “Categories, Logic and Physics”, “Categories, Logic and Physics (Scotland)”, “Higher-Dimensional Rewriting and Applications”, “String Diagrams in Computation, Logic and Physics”, “Applied Category Theory”, “Simons Workshop on Compositionality”, and the “Peripatetic Seminar in Sheaves and Logic”.

SYCO will be a regular fixture in the academic calendar, running regularly throughout the year, and becoming over time a recognized venue for presentation and discussion of results in an informal and friendly atmosphere. To help create this community, and to avoid the need to make difficult choices between strong submissions, in the event that more good-quality submissions are received than can be accommodated in the timetable, the programme committee may choose to defer some submissions to a future meeting, rather than reject them. This would be done based largely on submission order, giving an incentive for early submission, but would also take into account other requirements, such as ensuring a broad scientific programme. Deferred submissions would be accepted for presentation at any future SYCO meeting without the need for peer review. This will allow us to ensure that speakers have enough time to present their ideas, without creating an unnecessarily competitive reviewing process. Meetings would be held sufficiently frequently to avoid a backlog of deferred papers.


Ross Duncan, University of Strathclyde
Fabrizio Romano Genovese, Statebox and University of Oxford
Jules Hedges, University of Oxford
Chris Heunen, University of Edinburgh
Dominic Horsman, University of Grenoble
Aleks Kissinger, Radboud University Nijmegen
Eliana Lorch, University of Oxford
Guy McCusker, University of Bath
Samuel Mimram, École Polytechnique
Koko Muroya, RIMS, Kyoto University & University of Birmingham
Paulo Oliva, Queen Mary
Nina Otter, UCLA
Simona Paoli, University of Leicester
Robin Piedeleu, University of Oxford and UCL
Julian Rathke, University of Southampton
Bernhard Reus, University of Sussex
David Reutter, University of Oxford
Mehrnoosh Sadrzadeh, Queen Mary
Pawel Sobocinski, University of Southampton (chair)
Jamie Vicary, University of Birmingham and University of Oxford (co-chair)

by John Baez at December 07, 2018 12:28 AM

December 06, 2018

Lubos Motl - string vacua and pheno

Hossenfelder's pathetic attack against CERN's future collider
Sabine Hossenfelder became notorious for her obnoxiously demagogic and scientifically ludicrous diatribes against theoretical physics – she effectively became a New Age Castrated Peter Woit – but that doesn't mean that she doesn't hate the rest of particle physics.

Her latest target is CERN's new project for a collider after the LHC, the Future Circular Collider (FCC), an alternative to the Japanese linear ILC collider and the Chinese circular CEPC collider (the Nimatron).

This is just a 75-second-long FCC promotional video. It shows just some LHC-like pictures with several of the usual questions in fundamental physics that experiments such as this one are trying to help to answer. The video isn't excessively original but you can see some updated state-of-the-art fashions in computer graphics as well as the visual comparison of the FCC and its smaller but more real sister, the LHC.

But when you see an innocent standard video promoting particle accelerators, Ms Hossenfelder may be looking at the very same video and see something entirely different: a reason to write an angry rant, CERN produces marketing video for new collider and it’s full with [sic] lies.

What are the lies that this video is claimed to be full of? The first "lie" is that the video asks what 96% of the Universe is made of. Now, this question is listed with the implicit assertion that projects such as this one would help to answer it. It's what drives the people behind them. No one is really promising that the FCC will answer the question.

The figure 96% refers to the dark energy (70%) plus dark matter (26%) combined. Hossenfelder complains:
Particle colliders don’t probe dark energy.
Maybe they don't but maybe they do. This is really a difficult question. They don't test dark energy directly, but whenever we learn new things about particles that may be seen through particle accelerators, we are constraining the theories of fundamental physics. And because a theory's explanations for particular particles and for dark-energy-like effects are correlated in general, the discoveries at particle accelerators generally favor or disfavor our hypotheses about dark energy (or whatever replaces it), too.

My point is that at the level of fundamental physics, particle species and forces are interconnected and cannot quite be separated. So her idea that these things are strictly separated – so that the FCC only tests one part and not the other – reflects some misunderstanding of the "unity of physics" that has already been established to a certain extent. Also, she writes:
Dark energy is a low-energy, long-distance phenomenon, the entire opposite from high-energy physics.
This is surely not how particle physicists view dark energy. Dark energy is related to the energy density of the vacuum which is calculable in particle physics. In the effective field theory approximation, contributions include vacuum diagrams – Feynman diagrams with no external legs. According to the rules of effective field theories as we know them, loops including any known or unknown particle species influence the magnitude of the dark energy.

For this reason, the claim that dark energy belongs to the "entirely opposite" corner of physics than high-energy physics seems rather implausible from any competent theoretical high-energy physicist's viewpoint. The total vacuum energy ends up being extremely tiny relative to (any of) the individual contributions we seem to know – and this is known as the cosmological constant problem. But we don't know any totally convincing solution of that problem. The anthropic explanation, assuming the landscape and the multiverse where googols of possible values of dark energy are allowed, is the most complete known possibility – but it is so disappointing and/or unpredictive that many people refuse to agree that this is the established final word about the question.
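To see why this mismatch deserves the name "problem", here is a back-of-envelope sketch (my illustrative numbers, not from the post): a naive Planck-scale cutoff estimate of the vacuum energy density, compared with the observed dark energy density, gives the famous discrepancy of roughly 120 orders of magnitude.

```python
import math

# Physical constants (SI units, CODATA values)
hbar = 1.054571817e-34   # J*s
G    = 6.67430e-11       # m^3 kg^-1 s^-2
c    = 2.99792458e8      # m/s

# Naive cutoff estimate: vacuum energy density at the Planck scale,
# rho_Planck = c^7 / (hbar * G^2), in J/m^3
rho_planck = c**7 / (hbar * G**2)

# Observed dark energy density: roughly 69% of the critical density
# rho_crit = 3 H0^2 c^2 / (8 pi G), with H0 ~ 67.7 km/s/Mpc
H0 = 67.7 * 1000 / 3.0857e22   # Hubble constant in s^-1
rho_crit = 3 * H0**2 * c**2 / (8 * math.pi * G)
rho_obs = 0.69 * rho_crit

# The mismatch, in orders of magnitude (roughly 122-123)
mismatch = math.log10(rho_planck / rho_obs)
print(f"naive / observed ~ 10^{mismatch:.0f}")
```

Any known or unknown particle species running in the loops contributes at some large scale, so the puzzle is why the sum lands about 120 orders of magnitude below the naive estimate.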

The right solution could include some complete separation of dark energy from high-energy physics, as suggested by Hossenfelder. But this is just one possible category of proposals. It's surely not an established scientific fact. And there's no known convincing theory of the type she suggests.

The discovery or non-discovery of superpartners at a higher energy scale would certainly influence the search for promising explanations of the cosmological constant problem, and so would other discoveries. For example, the dark energy may be due to quintessence and quintessence models may predict additional phenomena that are testable by the particle colliders. None of the findings are guaranteed to emerge from the FCC but that's how experiments usually work. We don't really know the results in advance, otherwise the experiment would be a waste of time.
What the FCC will reliably probe are the other 4%, the same 4% that we have probed for the past 50 years.
Indeed, a collider may only be promised to test the visible matter "reliably". But science would get nowhere if it only tried to probe things that can be probed "reliably". That statement is very important, not just for science. When Christopher Columbus sailed to the West, he claimed he would reach India from the other side, or something like that. But he couldn't promise to reach India reliably. After all, he indeed found another continent, one that covers almost the whole span between the North Pole and the South Pole and prevents you from sailing from Europe to India in this direction.

But that didn't mean that Columbus' voyage was a waste of resources, did it? It is absolutely essential for the scientific spirit to try lots of things, both in theoretical and experimental science, whose successful outcome is not guaranteed, probes of things that are "unreliable". Scientists simply have to take the risk, to "experiment" in the colloquial sense.

It's truly ironic that Sabine Hossenfelder has been "created" as an appendix of Lee Smolin, her older fellow critic of science, who always claimed that science needed to fund much more risky directions and stuff like that (needless to say, the "most courageous directions" were meant to represent a euphemism for cowardly crackpots such as himself). Where does it end when Lee Smolin pulls a female clone from his dirty aß and she drags all the anti-scientific gibberish he used to emit through several more years of evolution? What does she do with all the "courage" that Smolin's mouth – not behavior – was full of? And with the value of risk-taking? She says that only experiments with a "reliable" outcome should be funded! Isn't it ironic?

The next collider, and even the LHC in the already collected data or in the new run starting in 2021, may learn something about dark matter. If the dark matter is a light enough neutralino, the LHC or the next collider is likely enough to see it. If the dark matter is an axion, the outcome may be different. But if we won't try anything, we won't learn anything, either. Indeed, her criticism of the tests of dark matter theories is identical:
What is dark matter? We have done dozens of experiments that search for dark matter particles, and none has seen anything. It is not impossible that we get lucky and the FCC will produce a particle that fits the bill, but there is no knowing it will be the case.
A malicious enough person could have made the exact same hostile and stupid comment before every important experiment in the history of science. There was no knowing that Galileo would see anything new with the telescopes. Faraday and Ampere and Hertz and others weren't guaranteed to see any electromagnetic inductions, forces, electromagnetic waves. The CERN colliders weren't certain to discover the heavy gauge bosons in the 1980s and the Tevatron didn't have to discover the top quark. The LHC didn't have to discover the Higgs boson, at least not by 2012, because its mass could have been less accessible. And so on.

Does it mean that experiments shouldn't ever be tried? Does it mean that there is a "lie" in the FCC video? No. With Hossenfelder's mindset, we would still be jumping from one palm to another and eating bananas only. Another "lie" is about the matter-antimatter asymmetry:
Why is there no more antimatter? Because if there was, you wouldn’t be here to ask the question. Presumably this item refers to the baryon asymmetry. This is a fine-tuning problem which simply may not have an answer. And even if it has, the FCC may not answer it.
The answer to the question "because you wouldn't be here" may be said to be an "anthropic" answer. And it's a possible answer and a true one. But it doesn't mean that it is the answer in the sense of the only answer or the most physically satisfying answer. In fact, it's almost certain that Hossenfelder's anthropic answer cannot be the deepest one.

Why? Simply because every deep enough theory of particle physics does predict some baryon asymmetry. So the very simple observed fact that the Solar System hasn't annihilated with some antimatter is capable of disproving a huge fraction of possible theories that people could propose and that people have actually proposed.

Her claim that it is a "fine-tuning problem" is implausible. What she has in mind is clearly a theory that predicts the same amount of baryons and antibaryons on average – while the excess of baryons in our Universe is a statistical upward fluctuation (she uses the word "fine-tuning", which isn't what physicists would use, but the context makes her point rather obvious). But one can calculate the probability of such a large enough fluctuation (seen all over the visible Universe!) for specific models, and the probability is usually insanely low. For that reason, the very theory that predicts no asymmetry on average becomes very unlikely, too. By simple Bayesian inference, a theory that actually explains the asymmetry – one that has a reason why the mean value of this asymmetry is nonzero and large enough – is almost guaranteed to win! Fundamental physicists still agree that a theory had better obey the Sakharov conditions (needed for an explanation of the asymmetry to exist).
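The Bayesian argument above can be sketched with a toy calculation (my illustrative numbers, not a real cosmological model): a Gaussian likelihood comparison between a symmetric theory, where the observed asymmetry would be a pure fluctuation, and a baryogenesis mechanism that predicts the right order of magnitude.

```python
import math

# Toy numbers: observed baryon-to-photon ratio and a rough photon count
# for the observable Universe (both order-of-magnitude placeholders)
eta_obs = 6e-10
N_photons = 1e89
# A statistical fluctuation in a symmetric theory has relative size ~ 1/sqrt(N)
sigma_fluct = 1 / math.sqrt(N_photons)

def log_gauss(x, mu, sigma):
    # Log of a normal density; enough for comparing the two hypotheses
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

# H0: symmetric theory, the asymmetry is a pure fluctuation around zero
logL0 = log_gauss(eta_obs, 0.0, sigma_fluct)
# H1: a baryogenesis mechanism predicting the observed order of magnitude,
# with a generous factor-of-a-few uncertainty
logL1 = log_gauss(eta_obs, eta_obs, 3 * eta_obs)

# The log Bayes factor is astronomically in favor of the explanatory theory
print(logL1 - logL0)
```

The exact numbers are irrelevant; the point is that the fluctuation hypothesis is penalized by a factor like exp(-10^70), so any theory with a mechanism for the asymmetry wins overwhelmingly.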

It is rather transparent that she doesn't understand any of these questions. She doesn't understand how scientists think. She misunderstands the baryon asymmetry and tons of other technical topics, but she also misunderstands something much more universal and fundamental – how scientists think and infer in the presence of uncertainty, which is almost omnipresent. Whenever there's some anomaly or anything that doesn't add up, but is plausible with a tiny probability, she just calls it "fine-tuning", throws "fine-tuning" at the problem, and concludes that there's nothing to explain. Sorry, this is not how a scientist thinks. If this attitude had been adopted by everyone for centuries, science wouldn't have made any progress at all. Visible enough anomalies simply do require genuine explanations, not just "it's fine-tuning and fine-tuning is always OK because naturalness is beauty and beauty is rubbish", which is Hossenfelder's totally flawed "methodology" in all of physics.

On top of that, she repeats her favorite "reliability" theme: "And even if it has, the FCC may not answer it." Right, the FCC may fail to answer one question or another, and it will almost certainly fail to answer most questions that people label as questions with a chance to be answered. But the other part of the story is that the FCC also may answer one of these questions or several of these questions.

Note that Hossenfelder only presents one possible scenario: science will fail to answer the questions. She never discusses the opposite possibility. Why? Because she is a hater of science who would love science to fail. Every time science learns something new, vitriolic science haters such as Ms Sabine Hossenfelder or Mr Peter Woit shrink. After every discovery, they realize that they're even more worthless than previously thought. While science makes progress, they can only produce hateful tirades addressed to brainwashed morons. While the gap is getting larger and deeper, and more obvious to everybody who watches the progress in science, the likes of Ms Hossenfelder escalate their hostility towards science because they believe that this escalation will be better to mask their own worthlessness.

The fact that the FCC has a chance to answer at least one of these questions is much more important than the possibility that it won't answer one of them or any of them.

Hossenfelder also claims that the FCC won't probe how the Universe began because the energy density at the FCC is "70 orders of magnitude lower". This is a randomly picked number – she probably compared some FCC-like energy with the Planck energy. But the statement about the beginning of the Universe doesn't necessarily talk about the "Planck time" after the Big Bang. It may talk about somewhat later epochs. But if the FCC has a higher energy than the LHC, it will be capable of emulating some processes that are closer to the true beginning than the processes repeated by the LHC.

She has also attacked the claims about the research of neutrinos:
On the accompanying website, I further learned that the FCC “is a bold leap into completely uncharted territory that would probe… the puzzling masses of neutrinos.”

The neutrino-masses are a problem in the Standard Model because either you need right-handed neutrinos which have never been seen, or because the neutrinos are different from the other fermions, by being “Majorana-particles” (I explained this here).
The FCC is relevant because new observations in neutrino physics are possible – whether of right-handed neutrinos, of the nature of the neutrino rest masses (whether they are Dirac or Majorana), of new species of neutrinos, etc. – and, on top of that, the very fact that the neutrino masses are nonzero may be viewed as physics beyond the Standard Model.

Why is it so? Because the neutrino masses, at least the Majorana ones, can't come from the renormalizable Yukawa interactions. The term \(y h \bar \nu \nu\) isn't an \(SU(2)\) singlet because it contains a product of three doublets (two lepton doublets and one Higgs doublet), an odd number. You need dimension-five operators. Those are non-renormalizable. A theory with them breaks down at some energy scale. At that scale, some new phenomena must kick in to restore the consistency of the theory.
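To make the dimension counting concrete, here is a sketch in the standard Weinberg-operator notation (my notation, with \(L\) the lepton doublet, \(H\) the Higgs doublet, and \(\Lambda\) the cutoff; the post itself doesn't spell this out):

```latex
\mathcal{L}_5 \;=\; \frac{c}{\Lambda}\,(L H)(L H) + \text{h.c.}
\qquad\Longrightarrow\qquad
m_\nu \;\sim\; \frac{c\, v^2}{\Lambda}
\quad\text{after } \langle H\rangle = v .
```

With \(m_\nu \sim 0.1\,\mathrm{eV}\), \(v \simeq 246\,\mathrm{GeV}\) and \(c \sim 1\), this points to \(\Lambda \sim 10^{14\text{--}15}\,\mathrm{GeV}\) – the scale at which, as the text says, some new phenomena must kick in.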

Alternatively, Dirac masses could come from renormalizable dimension-4 Yukawa operators, but the new right-handed neutrino components may be said to be beyond the Standard Model. Some new interactions could be measured, etc. Whatever is true in Nature, the FCC may clearly produce neutrinos and detect them in the form of missing energy, like the LHC. It's unreasonable to attack the statement that the new collider would allow us to test neutrinos in a new regime.
We presently have no reliable prediction for new physics at any energy below the Planck energy. A next larger collider may find nothing new. That may be depressing, but it’s true.
But the FCC video is simply not saying that we are guaranteed to get such answers. The big desert between the Standard Model and (nearly?) the Planck scale has always been a possibility. If we had the "duty" to have a reliable prediction of some new physical phenomenon at an intermediate energy scale, it would have to be found by theoretical particle physicists or similar folks.

But curiously enough, she's hysterically fighting against that (theoretical) part of the research, too. To summarize, she is fighting against particle, fundamental, or high-energy physics in any form. She hates it, she hates people who are asking questions, she hates people who are proposing possible answers, and she hates the people who do any work – theoretical or experimental work – that may pick the right answers or at least favor or disfavor some of them.

Nevertheless, due to the extreme political correctness, this absolute hater of science who doesn't do anything except for lame efforts to hurt the image of science is sometimes presented as a physicist by the popular media. She is nothing of the sort.

by Luboš Motl at December 06, 2018 05:06 PM

December 01, 2018

John Baez - Azimuth

Geometric Quantization (Part 1)

I can’t help thinking about geometric quantization. I feel it holds some lessons about the relation between classical and quantum mechanics that we haven’t fully absorbed yet. I want to play my cards fairly close to my chest, because there are some interesting ideas I haven’t fully explored yet… but still, there are also plenty of ‘well-known’ clues that I can afford to explain.

The first one is this. As beginners, we start by thinking of geometric quantization as a procedure for taking a symplectic manifold and constructing a Hilbert space: that is, taking a space of classical states and constructing the corresponding space of quantum states. We soon learn that this procedure requires additional data as its input: a symplectic manifold is not enough. We learn that it works much better to start with a Kähler manifold equipped with a holomorphic hermitian line bundle with a connection whose curvature is the imaginary part of the Kähler structure. Then the space of holomorphic sections of that line bundle gives the Hilbert space we seek.

That’s quite a mouthful—but it makes for such a nice story that I’d love to write a bunch of blog articles explaining it with lots of examples. Unfortunately I don’t have time, so try these:

• Matthias Blau, Symplectic geometry and geometric quantization.

• A. Echeverria-Enriquez, M.C. Munoz-Lecanda, N. Roman-Roy, C. Victoria-Monge, Mathematical foundations of geometric quantization.

But there’s a flip side to this story which indicates that something big and mysterious is going on. Geometric quantization is not just a procedure for converting a space of classical states into a space of quantum states. It also reveals that a space of quantum states can be seen as a space of classical states!

To reach this realization, we must admit that quantum states are not really vectors in a Hilbert space H; from a certain point of view they are really 1-dimensional subspaces of a Hilbert space, so the set of quantum states I’m talking about is the projective space PH. But this projective space, at least when it’s finite-dimensional, turns out to be the simplest example of that complicated thing I mentioned: a Kähler manifold equipped with a holomorphic hermitian line bundle whose curvature is the imaginary part of the Kähler structure!

So a space of quantum states is an example of a space of classical states—equipped with precisely all the complicated extra structure that lets us geometrically quantize it!

At this point, if you don’t already know the answer, you should be asking: and what do we get when we geometrically quantize it?

The answer is exciting only in that it’s surprisingly dull: when we geometrically quantize PH, we get back the Hilbert space H.
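In the finite-dimensional case this can be made completely explicit (a standard sketch in my notation, not taken from the post itself): for \(H = \mathbb{C}^n\), the space of states is \(PH = \mathbb{CP}^{n-1}\) with the Fubini–Study Kähler form, and the prequantum line bundle is the hyperplane bundle \(\mathcal{O}(1)\), the dual of the tautological bundle. Its holomorphic sections are exactly the linear functionals:

```latex
\Gamma_{\mathrm{hol}}\bigl(\mathbb{CP}^{n-1},\,\mathcal{O}(1)\bigr)
\;\cong\; (\mathbb{C}^n)^{*} \;\cong\; \mathbb{C}^n \;=\; H ,
```

where the last identification uses the hermitian inner product. So geometric quantization of \(PH\), with this choice of bundle, hands back a copy of \(H\).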

You may have heard of ‘second quantization’, where we take a quantum system, treat it as classical, and quantize it again. In the usual story of second quantization, the new quantum system we get is more complicated than the original one… and we can repeat this procedure again and again, and keep getting more interesting things:

• John Baez, Nth quantization.

The story I’m telling now is different. I’m saying that when we take a quantum system with Hilbert space H, we can think of it as a classical system whose symplectic manifold of states is PH, but then we can geometrically quantize this and get H back.

The two stories are not in contradiction, because they rely on two different notions of what it means to ‘think of a quantum system as classical’. In today’s story that means getting a symplectic manifold PH from a Hilbert space H. In the other story we use the fact that H itself is a symplectic manifold!

I should explain the relation of these two stories, but that would be a big digression from today’s intended blog article: indeed I’m already regretting having drifted off course. I only brought up this other story to heighten the mystery I’m talking about now: namely, that when we geometrically quantize the space PH, we get H back.

The math is not mysterious here; it’s the physical meaning of the math that’s mysterious. The math seems to be telling us that contrary to what they say in school, quantum systems are special classical systems, with the special property that when you quantize them nothing new happens!

This idea is not mine; it goes back at least to Kibble, the guy who with Higgs invented the method whereby the Higgs boson does its work:

• Tom W. B. Kibble, Geometrization of quantum mechanics, Comm. Math. Phys. 65 (1979), 189–201.

This led to a slow, quiet line of research that continues to this day. I find this particular paper especially clear and helpful:

• Abhay Ashtekar, Troy A. Schilling, Geometrical formulation of quantum mechanics, in On Einstein’s Path, Springer, Berlin, 1999, pp. 23–65.

so if you’re wondering what the hell I’m talking about, this is probably the best place to start. To whet your appetite, here’s the abstract:

Abstract. States of a quantum mechanical system are represented by rays in a complex Hilbert space. The space of rays has, naturally, the structure of a Kähler manifold. This leads to a geometrical formulation of the postulates of quantum mechanics which, although equivalent to the standard algebraic formulation, has a very different appearance. In particular, states are now represented by points of a symplectic manifold (which happens to have, in addition, a compatible Riemannian metric), observables are represented by certain real-valued functions on this space and the Schrödinger evolution is captured by the symplectic flow generated by a Hamiltonian function. There is thus a remarkable similarity with the standard symplectic formulation of classical mechanics. Features—such as uncertainties and state vector reductions—which are specific to quantum mechanics can also be formulated geometrically but now refer to the Riemannian metric—a structure which is absent in classical mechanics. The geometrical formulation sheds considerable light on a number of issues such as the second quantization procedure, the role of coherent states in semi-classical considerations and the WKB approximation. More importantly, it suggests generalizations of quantum mechanics. The simplest among these are equivalent to the dynamical generalizations that have appeared in the literature. The geometrical reformulation provides a unified framework to discuss these and to correct a misconception. Finally, it also suggests directions in which more radical generalizations may be found.

Personally I’m not interested in the generalizations of quantum mechanics: I’m more interested in what this circle of ideas means for quantum mechanics.

One rather cynical thought is this: when we start our studies with geometric quantization, we naively hope to extract a space of quantum states from a space of classical states, e.g. a symplectic manifold. But we then discover that to do this in a systematic way, we need to equip our symplectic manifold with lots of bells and whistles. Should it really be a surprise that when we’re done, the bells and whistles we need are exactly what a space of quantum states has?

I think this indeed dissolves some of the mystery. It’s a bit like the parable of ‘stone soup’: you can make a tasty soup out of just a stone… if you season it with some vegetables, some herbs, some salt and such.

However, perhaps because by nature I’m an optimist, I also think there are interesting things to be learned from the tight relation between quantum and classical mechanics that appears in geometric quantization. And I hope to talk more about those in future articles.

by John Baez at December 01, 2018 08:38 PM

November 30, 2018

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

Is science influenced by politics?

“Most scientists and historians would agree that Einstein’s quest was driven by scientific curiosity.” Photograph: Getty Images

“Science is always political,” asserted a young delegate at an international conference on the history of physics earlier this month. It was a very enjoyable meeting, but I noticed the remark caused a stir among many of the physicists in the audience.

In truth, the belief that the practice of science is never entirely free of politics has been a steady theme of historical scholarship for some years now, as can be confirmed by a glance at any scholarly journal on the history of science. At a conference specifically designed to encourage interaction between scientists, historians and sociologists of science, it was interesting to see a central tenet of modern scholarship openly questioned.

Famous debate

Where does the idea come from? A classic example of the hypothesis can be found in the book Leviathan and the Air-Pump by Steven Shapin and Simon Schaffer. In this highly influential work, the authors considered the influence of the politics of the English civil war and the restoration on the famous debate between scientist Robert Boyle and philosopher Thomas Hobbes concerning the role of experimentation in science. More recently, many American historians of science have suggested that much of the success of 20th century American science, from aeronautics to particle physics, was driven by the politics of the cold war.

Similarly, there is little question that CERN, the famous inter-European particle physics laboratory at Geneva, was constructed to stem the brain-drain of European physicists to the United States after the second World War. CERN has proved itself many times over as an outstanding example of successful international scientific collaboration, although Ireland has yet to join.

But do such examples imply that science is always influenced by politics? Some scientists and historians doubt this assertion. While one can see how a certain field or technology might be driven by national or international political concerns, the thesis seems less tenable when one considers basic research. In what way is the study of the expanding universe influenced by politics? Surely the study of the elementary particles is driven by scientific curiosity?


In addition, it is difficult to definitively prove a link between politics and a given scientific advance – such assertions involve a certain amount of speculation. For example, it is interesting to note that many of the arguments in Leviathan and the Air-Pump have been seriously questioned, although these criticisms have not received the same attention as the book itself.

That said, few could deny that research into climate science in the United States suffered many setbacks during the presidency of George W Bush, and a similar situation pertains now. But the findings of American climate science are no less valid than they were at any other time, and the international character of scientific enquiry ensures a certain objectivity and continuity of research. Put bluntly, there is no question that resistance to the findings of climate science is often politically motivated, but there is little evidence that climate science itself is political.

Another factor concerns the difference between the development of a given field and the dawning of an entirely new field of scientific inquiry. In a recent New York Times article titled “How politics shaped general relativity”, the American historian of science David Kaiser argued convincingly for the role played by national politics in the development of Einstein’s general theory of relativity in the United States. However, he did not argue that politics played a role in the original gestation of the theory – most scientists and historians would agree that Einstein’s quest was driven by scientific curiosity.

All in all, I think there is a danger of overstating the influence of politics on science. While national and international politics have an impact on every aspect of our lives, the innate drive of scientific progress should not be overlooked. Advances in science are generally propelled by the engine of internal logic, by observation, hypothesis and theory-testing. No one is immune from political upheaval, but science has a way of weeding out incorrect hypotheses over time.

Cormac O’Raifeartaigh lectures in physics at Waterford Institute of Technology and is a visiting associate professor at University College Dublin

by cormac at November 30, 2018 04:43 PM

ZapperZ - Physics and Physicists

Quantum Entanglement of 10 Billion Atoms!
Not only is the Schrodinger Cat getting fatter, but the EPR/Bell bulldog is also putting on mass.

A new report out of Delft University has demonstrated quantum entanglement of two strips of silicon resonators, each consisting of roughly 10 billion atoms!

They demonstrated quantum entanglement and violations of Bell’s inequality—a canonical test of the principle that all influences on a particle are local and that particle states exist independently of the observer. They used two mechanical resonators, each containing roughly 10 billion atoms.
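As a reminder of what a Bell-inequality violation looks like quantitatively, here is a small sketch (a textbook two-qubit CHSH calculation, standing in for the actual resonator measurements, whose details are in the paper):

```python
import numpy as np

# Bell state (|00> + |11>)/sqrt(2): a maximally entangled two-level pair,
# a toy stand-in for the entangled resonator modes in the experiment
phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

def spin_op(theta):
    """Spin measurement along an axis at angle theta in the x-z plane."""
    return np.array([[np.cos(theta),  np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])

def correlation(a, b):
    """E(a, b) = <phi| A(a) (x) B(b) |phi> for the state above."""
    return phi @ np.kron(spin_op(a), spin_op(b)) @ phi

# Standard CHSH measurement angles
a0, a1 = 0.0, np.pi / 2
b0, b1 = np.pi / 4, -np.pi / 4

S = (correlation(a0, b0) + correlation(a0, b1)
     + correlation(a1, b0) - correlation(a1, b1))
print(S)  # ~ 2.828 = 2*sqrt(2), above the classical CHSH bound of 2
```

Any local hidden-variable model obeys |S| ≤ 2; the quantum prediction of 2√2 is what the Delft team's resonator data must exceed the classical bound toward.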

If you do not have access to the PRL paper, you may read the arXiv version here.

This is quite a feat, and I think that things can only get bigger, literally and figuratively.


by ZapperZ at November 30, 2018 03:26 PM

November 29, 2018

ZapperZ - Physics and Physicists

It Does NOT Defy 156-Year-Old Law of Physics!
Oftentimes, popular accounts of physics and physics discoveries/advancements are dramatized and sensationalized to catch the eyes of the public. I'm all for catching their attention in this day and age, but really, many of these are highly misleading and tend to over-dramatize certain things.

This is one such example. It started off with an eye-catching title:

"Energy Efficiency Breakthrough Defies 156-Year-Old Law of Physics"

Really? Do we have a Nobel Prize already lined up for these people? After all, what could be more astounding and impactful than a discovery that "defies" an old and established law of physics?

Turns out, as I suspected, that it is a new solution to the well-known Maxwell equations that had never been discovered before. But even if you don't know anything about Maxwell's equations and what the discovery is all about, if you pay attention to what they wrote, you would have noticed something contradicting what the title claimed:

The first several efforts were unsuccessful until the team conceived of using an electrical conductor in movement. They proceeded to solve Maxwell’s equations analytically in order to demonstrate that not only could reciprocity be broken but that coupling could also be made maximally asymmetric.

Notice that they USED Maxwell's equations (i.e. the 156-year-old law of physics) and found new solutions that hadn't been thought to be possible. So how could they be defying it when they actually used it? They may have defied the previous notion that there are no solutions of that type, but they did not defy Maxwell's equations, not in the least bit!

Sussex University press office needs to get their act together and not go for such cheap thrills. And I'm surprised that the researchers involved in this actually let a title like that go through.

Edit 11/29/2018: THIS is how this discovery should have been reported, as done by Physics World. Notice that nowhere in there is any claim that a law of physics has been violated!


by ZapperZ ( at November 29, 2018 03:12 PM

November 28, 2018

John Baez - Azimuth

Stratospheric Controlled Perturbation Experiment

I have predicted for a while that as the issue of climate change becomes ever more urgent, the public attitude regarding geoengineering will at some point undergo a phase transition. For a long time it seems the general attitude has been that deliberately interfering with the Earth’s climate on a large scale is “unthinkable”: beyond the pale. I predict that at some point this will flip and the general attitude will become: “how soon can we do it?”

The danger then is that we rush headlong into something untested that we’ll regret.

For a while I’ve been advocating research in geoengineering, to prevent a big mistake like this. Those who consider it “unthinkable” often object to such research, but I think preventing research is not a good long-term policy. I think it actually makes it more likely that at some point, when enough people become really desperate about climate change, we will do something rash without enough information about the possible effects.

Anyway, one can argue about this all day: I can see the arguments for both sides. But here is some news: scientists will soon study how calcium carbonate disperses when you dump a little into the atmosphere:

First sun-dimming experiment will test a way to cool Earth, Nature, 27 November 2018.

It’s a good article—read it! Here’s the key idea:

If all goes as planned, the Harvard team will be the first in the world to move solar geoengineering out of the lab and into the stratosphere, with a project called the Stratospheric Controlled Perturbation Experiment (SCoPEx). The first phase — a US$3-million test involving two flights of a steerable balloon 20 kilometres above the southwest United States — could launch as early as the first half of 2019. Once in place, the experiment would release small plumes of calcium carbonate, each of around 100 grams, roughly equivalent to the amount found in an average bottle of off-the-shelf antacid. The balloon would then turn around to observe how the particles disperse.

The test itself is extremely modest. Dai, whose doctoral work over the past four years has involved building a tabletop device to simulate and measure chemical reactions in the stratosphere in advance of the experiment, does not stress about concerns over such research. “I’m studying a chemical substance,” she says. “It’s not like it’s a nuclear bomb.”

Nevertheless, the experiment will be the first to fly under the banner of solar geoengineering. And so it is under intense scrutiny, including from some environmental groups, who say such efforts are a dangerous distraction from addressing the only permanent solution to climate change: reducing greenhouse-gas emissions. The scientific outcome of SCoPEx doesn’t really matter, says Jim Thomas, co-executive director of the ETC Group, an environmental advocacy organization in Val-David, near Montreal, Canada, that opposes geoengineering: “This is as much an experiment in changing social norms and crossing a line as it is a science experiment.”

Aware of this attention, the team is moving slowly and is working to set up clear oversight for the experiment, in the form of an external advisory committee to review the project. Some say that such a framework, which could pave the way for future experiments, is even more important than the results of this one test. “SCoPEx is the first out of the gate, and it is triggering an important conversation about what independent guidance, advice and oversight should look like,” says Peter Frumhoff, chief climate scientist at the Union of Concerned Scientists in Cambridge, Massachusetts, and a member of an independent panel that has been charged with selecting the head of the advisory committee. “Getting it done right is far more important than getting it done quickly.”

For more on SCoPEx, including a FAQ, go here:

Stratospheric Controlled Perturbation Experiment (SCoPEx), Keutsch Group, Harvard.

by John Baez at November 28, 2018 04:23 PM

November 22, 2018

Sean Carroll - Preposterous Universe


This year we give thanks for an historically influential set of celestial bodies, the moons of Jupiter. (We’ve previously given thanks for the Standard Model Lagrangian, Hubble’s Law, the Spin-Statistics Theorem, conservation of momentum, effective field theory, the error bar, gauge symmetry, Landauer’s Principle, the Fourier Transform, Riemannian Geometry, the speed of light, and the Jarzynski equality.)

For a change of pace this year, I went to Twitter and asked for suggestions for what to give thanks for in this annual post. There were a number of good suggestions, but two stood out above the rest: @etandel suggested Noether’s Theorem, and @OscarDelDiablo suggested the moons of Jupiter. Noether’s Theorem, according to which symmetries imply conserved quantities, would be a great choice, but in order to actually explain it I should probably first explain the principle of least action. Maybe some other year.

And to be precise, I’m not going to bother to give thanks for all of Jupiter’s moons. 78 Jovian satellites have been discovered thus far, and most of them are just lucky pieces of space debris that wandered into Jupiter’s gravity well and never escaped. It’s the heavy hitters — the four Galilean satellites — that we’ll be concerned with here. They deserve our thanks, for at least three different reasons!

Reason One: Displacing Earth from the center of the Solar System

Galileo discovered the four largest moons of Jupiter — Io, Europa, Ganymede, and Callisto — back in 1610, and wrote about his findings in Sidereus Nuncius (The Starry Messenger). They were the first celestial bodies to be discovered using that new technological advance, the telescope. But more importantly for our present purposes, it was immediately obvious that these new objects were orbiting around Jupiter, not around the Earth.

All this was happening not long after Copernicus had published his heliocentric model of the Solar System in 1543, offering an alternative to the prevailing Ptolemaic geocentric model. Both models were pretty good at fitting the known observations of planetary motions, and both required an elaborate system of circular orbits and epicycles — the realization that planetary orbits should be thought of as ellipses didn’t come along until Kepler published Astronomia Nova in 1609. As everyone knows, the debate over whether the Earth or the Sun should be thought of as the center of the universe was a heated one, with the Roman Catholic Church prohibiting Copernicus’s book in 1616, and the Inquisition putting Galileo on trial in 1633.

Strictly speaking, the existence of moons orbiting Jupiter is equally compatible with a heliocentric or geocentric model. After all, there’s nothing wrong with thinking that the Earth is the center of the Solar System, but that other objects can have satellites. However, the discovery brought about an important psychological shift. Sure, you can put the Earth at the center and still allow for satellites around other planets. But a big part of the motivation for putting Earth at the center was that the Earth wasn’t “just another planet.” It was supposed to be the thing around which everything else moved. (Remember that we didn’t have Newtonian mechanics at the time; physics was still largely an Aristotelian story of natures and purposes, not a bunch of objects obeying mindless differential equations.)

The Galilean moons changed that. If other objects have satellites, then Earth isn’t that special. And if it’s not that special, why have it at the center of the universe? Galileo offered up other arguments against the prevailing picture, from the phases of Venus to mountains on the Moon, and of course once Kepler’s ellipses came along the whole thing made much more mathematical sense than Ptolemy’s epicycles. Thus began one of the great revolutions in our understanding of our place in the cosmos.

Reason Two: Measuring the speed of light

Time is what clocks measure. And a clock, when you come right down to it, is something that does the same thing over and over again in a predictable fashion with respect to other clocks. That sounds circular, but it’s a nontrivial fact about our universe that it is filled with clocks. And some of the best natural clocks are the motions of heavenly bodies. As soon as we knew about the moons of Jupiter, scientists realized that they had a new clock to play with: by accurately observing the positions of all four moons, you could work out what time it must be. Galileo himself proposed that such observations could be used by sailors to determine their longitude, a notoriously difficult problem.

Danish astronomer Ole Rømer noted a puzzle when trying to use eclipses of Io to measure time: despite the fact that the orbit should be an accurate clock, the actual timings seemed to change with the time of year. Being a careful observational scientist, he deduced that the period between eclipses was longer when the Earth was moving away from Jupiter, and shorter when the two planets were drawing closer together. An obvious explanation presented itself: the light wasn’t traveling instantaneously from Jupiter and Io to us here on Earth, but rather took some time. By figuring out exactly how the period between eclipses varied, we could then deduce what the speed of light must be.

Rømer’s answer was that light traveled at about 220,000 kilometers per second. That’s pretty good! The right answer is 299,792 km/sec, about 36% greater than Rømer’s value. For comparison purposes, when Edwin Hubble first calculated the Hubble constant, he derived a value of about 500 km/sec/Mpc, whereas now we know the right answer is about 70 km/sec/Mpc. Using astronomical observations to determine fundamental parameters of the universe isn’t easy, especially if you’re the first one to do it.
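Rømer's logic can be sketched with a few lines of arithmetic. The sketch below uses modern values (the astronomical unit, the exact speed of light), not Rømer's own data, and the 22-minute total delay is the figure traditionally attributed to him:

```python
# A rough illustration of Roemer's reasoning, using modern constants
# rather than his 17th-century data.
AU = 1.495978707e11   # astronomical unit in metres
c = 299_792_458.0     # modern speed of light, m/s

# As Earth moves from the near side to the far side of its orbit
# (relative to Jupiter), light from Io's eclipses must cross an
# extra distance of roughly one orbital diameter (2 AU).
extra_distance = 2 * AU
accumulated_delay = extra_distance / c   # seconds

print(f"Accumulated eclipse delay: {accumulated_delay:.0f} s "
      f"(~{accumulated_delay / 60:.1f} minutes)")

# Inverting the logic: a measured total delay gives an estimate of c.
measured_delay = 22 * 60   # seconds; the ~22 minutes attributed to Roemer
c_estimate = extra_distance / measured_delay
print(f"Inferred speed of light: {c_estimate / 1000:.0f} km/s")
```

The inversion gives roughly 230,000 km/s, in the same ballpark as the 220,000 km/s quoted above; the shortfall comes almost entirely from the imprecise timing of the eclipses.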

Reason Three: Looking for life

Here in the present day, Jupiter’s moons have not lost their fascination or importance. As we’ve been able to study them in greater detail, we’ve learned a lot about the history and nature of the Solar System more generally. And one of the most exciting prospects is that one or more of these moons might harbor life.

It used to be common to think about the possibilities for life outside Earth in terms of a “habitable zone,” the region around a star where temperatures allowed planets to have liquid water. (Many scientists think that liquid water is a necessity for life to exist — but maybe we’re just being parochial about that.) In our Solar System, Earth is smack-dab in the middle of the habitable zone, and Mars just sneaks in. Both Venus and Jupiter are outside, on opposite ends.

But there’s more than one way to have liquid water. It turns out that both Europa and Ganymede, as well as Saturn’s moons Titan and Enceladus, are plausible homes for large liquid oceans. Europa, in particular, is thought to possess a considerable volume of liquid water underneath an icy crust — approximately two or three times as much water as in all the oceans on Earth. The point is that solar radiation isn’t the only way to heat up water and keep it at liquid temperatures. On Europa, it’s likely that heat is generated by the tidal pull from Jupiter, which stretches and distorts the moon’s crust as it rotates.

Does that mean there could be life there? Maybe! Nobody really knows. Smart money says that we’re more likely to find life on a wet environment like Europa than a dry one like Mars. And we’re going to look — the Europa Clipper mission is scheduled for launch by 2025.

If you can’t wait for then, go back and watch the movie Europa Report. And while you do, give thanks to Galileo and his discovery of these fascinating celestial bodies.

by Sean Carroll at November 22, 2018 10:59 PM

November 21, 2018

Lubos Motl - string vacua and pheno

Swampland refinement of higher-spin no-go theorems
Dieter Lüst and two co-authors from Monkberg (Munich) managed to post the first hep-th paper today at 19:00:02 (a two-second lag is longer than usual, the timing contest wasn't too competitive):
A Spin-2 Conjecture on the Swampland
They articulate an interesting conjecture about the spin-two fields in quantum gravity – a conjecture of the Swampland type that is rather close to the Weak Gravity Conjecture and, in fact, may be derived from the Weak Gravity Conjecture under a mild additional assumption.

In particular, they claim that whenever there are particles whose spin is two or higher, they have to be massive and there has to be a whole tower of massive states. More precisely, if there is a self-interacting spin-two particle of mass \(m\) in quantum gravity, the strength of the interaction may be parameterized by a new mass scale \(M_W\), and the effective field theory has to break down at the mass scale \(\Lambda\) where\[

\frac{\Lambda}{M_{\rm Planck}} = \frac{m}{M_W}

\] You see that the Planck scale enters. The breakdown scale \(\Lambda\) of the effective theory is basically the lowest mass of the next-to-lightest state in the predicted massive tower.

So if the self-interaction scale of the massive field is \(M_W\approx M_{\rm Planck}\), then we get \(\Lambda\approx m\) and all the lighter states in the tower are parametrically "comparably light" to the lightest spin-two boson. However, you can try to make the self-interaction stronger, by making \(M_W\) smaller than the Planck scale, and then the tower may become more massive than its lightest representative.
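The scaling is simple enough to tabulate. A toy numerical sketch of the conjectured relation \(\Lambda = m\, M_{\rm Planck}/M_W\), with made-up input values (they are for illustration only, not taken from the paper):

```python
# Toy illustration of the conjectured cutoff relation
#   Lambda / M_Planck = m / M_W.
# All masses in GeV; the sample numbers are invented for illustration.
M_PLANCK = 1.22e19   # (reduced-ness conventions aside) Planck mass, GeV

def eft_cutoff(m, M_W):
    """Scale Lambda at which the effective theory of a massive
    spin-two field of mass m and interaction scale M_W is
    conjectured to break down."""
    return m * M_PLANCK / M_W

# Planck-strength self-interaction: the tower starts right at m.
print(eft_cutoff(m=1e3, M_W=M_PLANCK))   # Lambda ~ m

# Stronger self-interaction (smaller M_W) pushes the cutoff higher.
print(eft_cutoff(m=1e3, M_W=1e16))
```

Note that sending `M_PLANCK` to infinity sends the cutoff to infinity as well, which is the "conjecture becomes vacuous when gravity is turned off" point made below.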

They may derive the conjecture from the Weak Gravity Conjecture if they rewrite the self-interaction of the spin-two field through an interaction with a "gauge field" which is treated analogously to the electromagnetic gauge field in the Weak Gravity Conjecture – although it is the Stückelberg gauge field. It's not quite obvious to me that the Weak Gravity Conjecture must apply to gauge fields that are "unnecessary" or "auxiliary" in this sense but maybe there's a general rule saying that general principles such as the Weak Gravity Conjecture have to apply even in such "optional" cases.

I think that these conjectures – and evidence and partial proofs backing them – represent a clear progress of our knowledge beyond effective field theory. You know, in quantum field theory, we have theorems such as the Weinberg-Witten theorem. This particular one says that massless higher-spin particles can't be composite, and similar things. That's only true in full-blown quantum field theories. But quantum gravity isn't strictly a quantum field theory (in the bulk). When you add gravity, things get generalized in a certain way. And things that were possible or impossible without gravity may become impossible or possible with quantum gravity.

Some "impossible scenarios" from QFTs may be suddenly allowed – but one pays with the need to allow an infinite tower of states and similar things. Note that if you look at\[

\frac{\Lambda}{M_{\rm Planck}} = \frac{m}{M_W}

\] and send \(M_{\rm Planck}\to \infty\) i.e. if you turn the gravity off, the Bavarian conjecture says that \(\Lambda\to\infty\), too. So it becomes vacuous because it says that the effective theory "must break" at energy scales higher than infinity. Needless to say, the same positive power of the Planck mass appears in the original Weak Gravity Conjecture, too. That conjecture also becomes vacuous if you turn the gravity off.

When quantum gravity is turned on, there are new interactions, new states (surely the black hole microstates), and new mandatory interactions of these states. These new states and duties guarantee that theories to which you would only add some fields or particles "insensitively" would be inconsistent. People increasingly understand what the "new stuff" is that simply has to happen in quantum gravity. And this new mandatory stuff may be understood either by some general consistency-based considerations assuming quantum gravity; or by looking at much more specific situations in the stringy vacua. Like in most of the good Swampland papers, Lüst et al. try to do both.

So far these two lines of reasoning are consistent with one another. They are increasingly compatible and increasingly equivalent – after all, string theory seems to be the only consistent theory of quantum gravity although we don't have any "totally canonical and complete" proof of this uniqueness (yet). The Swampland conjectures may be interpreted as another major direction of research that makes this point – that string theory is the only game in town – increasingly certain.

by Luboš Motl ( at November 21, 2018 01:20 PM

November 18, 2018

The n-Category Cafe

Modal Types Revisited

We’ve discussed the prospects for adding modalities to type theory for many a year, e.g., here at the Café back at Modal Types, and frequently at the nLab. So now I’ve written up some thoughts on what philosophy might make of modal types in this preprint. My debt to the people who helped work out these ideas will be acknowledged when I publish the book.

This is to be the fourth chapter of a book which provides reasons for philosophy to embrace modal homotopy type theory. The book takes in order the components: types, dependency, homotopy, and finally modality.

The chapter ends all too briefly with mention of Mike Shulman et al.’s project, which he described in his post – What Is an n-Theory?. I’m convinced this is the way to go.

PS. I already know of the typo on line 8 of page 4.

by david ( at November 18, 2018 09:34 AM

November 16, 2018

Clifford V. Johnson - Asymptotia

Stan Lee’s Contributions to Science!!

I'm late to the party. Yes, I use the word party, because the outpouring of commentary noting the passing of Stan Lee has been, rightly, marked with a sense of celebration of his contributions to our culture. Celebration of a life full of activity. In the spirit of a few of the "what were you doing when you heard..." stories I've heard, involving nice coincidences and ironies, I've got one of my own. I'm not exactly sure when I heard the announcement on Monday, but I noticed today that it was also on Monday that I got an email giving me some news* about the piece I wrote about the Black Panther earlier this year for the publication The Conversation. The piece is about the (then) pending big splash the movie about the character (co-created by Stan Lee in the 60s) was about to make in the larger culture, the reasons for that, and why it was also a tremendous opportunity for science. For science? Yes, because, as I said there:

Vast audiences will see black heroes of both genders using their scientific ability to solve problems and make their way in the world, at an unrivaled level.


Improving science education for all is a core endeavor in a nation’s competitiveness and overall health, but outcomes are limited if people aren’t inspired to take an interest in science in the first place. There simply are not enough images of black scientists – male or female – in our media and entertainment to help inspire. Many people from underrepresented groups end up genuinely believing that scientific investigation is not a career path open to them.

Moreover, many people still see the dedication and study needed to excel in science as “nerdy.” A cultural injection of Black Panther heroics could help continue to erode the crumbling tropes that science is only for white men or reserved for people with a special “science gene.”

And here we are many months later, and I was delighted to see that people did get a massive dose of science inspiration from T'Challa and his sister Shuri, and the whole of the Wakanda nation, not just in Black Panther, but also in the Avengers: Infinity War movie a short while after.

But my larger point here is that so much of this goes back to Stan Lee's work with collaborators in not just making "relatable" superheroes, as you've heard said so many times -- showing their flawed human side so much more than the dominant superhero trope (represented by Superman, Wonder Woman, Batman, etc.) allowed for at the time -- but also in putting science and scientists at the forefront of much of it. So many of the characters either were scientists (Banner (Hulk), Richards (Mr. Fantastic), T'Challa (Black Panther), Pym (Ant-Man), Stark (Iron Man), etc.) or used science actively to solve problems (e.g. Parker/Spider-Man).

This was hugely influential on young minds, I have no doubt. This is not a small number of [...] Click to continue reading this post

The post Stan Lee’s Contributions to Science!! appeared first on Asymptotia.

by Clifford at November 16, 2018 07:05 PM

Lubos Motl - string vacua and pheno

AdS/CFT as the swampland/bootstrap duality
Last June, I discussed machine learning approaches to the search for realistic vacua.

Computers may do a lot of work, and many assumptions that some tasks are "impossibly hard" may be shown incorrect with the help of computers that think and look for patterns. Today, a new paper was published on that issue, Deep learning in the heterotic orbifold landscape. Mütter, Parr, and Vaudrevange use "autoencoder neural networks" as their brain supplements.

The basic idea of the bootstrap program in physics.

But I want to mention another preprint,
Putting the Boot into the Swampland
The authors, Conlon (Oxford) and Quevedo (Trieste), have arguably belonged to the Stanford camp in the Stanford-vs-Swampland polemics. But they decided to study Cumrun Vafa's conjectures seriously and extended them in an interesting way.

Cumrun's "swampland" reasoning feels like a search for new, simple enough, universal principles of Nature that are obeyed in every theory of quantum gravity – or in every realization of string theory. These two "in" are a priori unequivalent and they represent slightly different papers or parts of papers as we know them today. But Cumrun Vafa and others, including me, believe that ultimately, "consistent theory of quantum gravity" and "string/M-theory" describe the same entity – they're two ways to look at the same beast. Why? Because, most likely, string theory really is the only game in town.

Some of the inequalities and claims that discriminate the consistent quantum gravity vacua against the "swampland" sound almost like the uncertainty principle, like some rather simple inequalities or existence claims. In one of them, Cumrun claims that a tower of states must exist whenever the quantum gravity moduli space has some extreme regions.

Conlon and Quevedo assume that this quantum gravitational theory lives in the anti de Sitter space and study the limit \(R_{AdS}\to\infty\). The hypothesized tower on the bulk side gets translated to a tower of operators in the CFT, by the AdS/CFT correspondence. They argue that some higher-point interactions are fully determined on the AdS side and that the constraints they obey may be translated, via AdS/CFT, to known, older "bootstrap" constraints that have been known in CFT for a much longer time. Well, this is the more "conjectural" part of their paper – but it's the more interesting one and they have some evidence.

If that reasoning is correct, string theory is in some sense getting back to where it was 50 years ago. String theory partly arose from the "bootstrap program", the idea that mere general consistency conditions are enough to fully identify the S-matrix and similar things. That big assumption was basically ruled out – especially when "constructive quarks and gluons" were accepted as the correct description of the strong nuclear force. String theory has basically violated the "bootstrap wishful thinking" as well because it became analogously "constructive" as QCD and many other quantum field theories.

However, there has always been a difference. String theory generates low-energy effective field theories from different solutions of the same underlying theory. The string vacua may be mostly connected with each other on the moduli space or through some physical processes (topology changing transitions etc.). That's different from quantum field theories which are diverse and truly disconnected from each other. So string theory has always preserved the uniqueness and the potential to be fully derived from some general consistency condition(s). We don't really know what these conditions precisely are yet.

The bootstrap program was developed decades ago and became somewhat successful for conformal field theories – especially but not only the two-dimensional conformal field theories similar to those that live on the stringy world sheets. Cumrun's swampland conditions seem far more tied to gravity and the dynamical spacetime. But by the AdS/CFT, some of the swampland conditions may be mapped to the older bootstrap constraints. Conlon and Quevedo call the map "bootland", not that it matters. ;-)

The ultimate consistency-based definition of quantum gravity or "all of string/M-theory" could be some clever generalization of the conditions we need in CFTs – and the derived bootstrap conditions they obey. We need some generalization in the CFT approach, I guess. Because CFTs are local, we may always distinguish "several particles" from "one particle". That's related to our ability to "count the number of strings" in perturbative string theory i.e. to distinguish single-string and multi-string states, and to count loops in the loop diagrams (by the topology of the world sheet).

It seems clear to me that this reduction to the one-string "simplified theory" must be abandoned in the gravitational generalization of the CFT calculus. The full universal definition of string theory must work with one-object and multi-object states on the same footing from the very beginning. Even though it looks much more complicated, there could be some analogies of the state-operator correspondence, operator product expansions, and other things in the "master definition of string/M-theory". In the perturbative stringy limits, one should be able to derive the world sheet CFT axioms as a special example.

by Luboš Motl ( at November 16, 2018 12:01 PM

November 15, 2018

Jon Butterworth - Life and Physics

The Standard Model – TEDEd Lesson
I may have mentioned before, the Standard Model is about 50 years old now. It embodies a huge amount of human endeavour and understanding, and I try to explain it in my book, A Map of the Invisible (or Atom … Continue reading

by Jon Butterworth at November 15, 2018 04:50 PM

The n-Category Cafe

Magnitude: A Bibliography

I’ve just done something I’ve been meaning to do for ages: compiled a bibliography of all the publications on magnitude that I know about. More people have written about it than I’d realized!

This isn’t an exercise in citation-gathering; I’ve only included a paper if magnitude is the central subject or a major theme.

I’ve included works on magnitude of ordinary, un-enriched, categories, in which context magnitude is usually called Euler characteristic. But I haven’t included works on the diversity measures that are closely related to magnitude.

Enjoy! And let me know in the comments if I’ve missed anything.

by leinster ( at November 15, 2018 12:51 AM

November 13, 2018

ZapperZ - Physics and Physicists

Muons And Special Relativity
For those of us who studied physics or have taken a course involving Special Relativity, this is nothing new. The fact that so many muons are detected at the Earth's surface has long been used as an example of the direct result of SR's time dilation and length contraction.

Still, it bears repeating and presenting to those who are not aware of it, and that is what this MinutePhysics video does.
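The arithmetic behind the argument fits in a few lines. This sketch uses standard textbook numbers (production altitude, muon lifetime, a typical speed), not anything from the video:

```python
import math

# Textbook version of the cosmic-ray muon argument: without time
# dilation, essentially no muons produced ~15 km up should survive
# the trip to the surface.
C = 299_792_458.0      # speed of light, m/s
TAU = 2.2e-6           # muon mean lifetime at rest, s
ALTITUDE = 15_000.0    # typical production altitude, m
BETA = 0.995           # typical muon speed as a fraction of c

gamma = 1.0 / math.sqrt(1.0 - BETA**2)
flight_time = ALTITUDE / (BETA * C)   # lab-frame travel time, s

# Exponential decay exp(-t/tau), with and without time dilation:
naive = math.exp(-flight_time / TAU)
relativistic = math.exp(-flight_time / (gamma * TAU))

print(f"gamma = {gamma:.1f}")
print(f"survival fraction, no dilation:   {naive:.2e}")
print(f"survival fraction, with dilation: {relativistic:.2e}")
```

Without dilation the survival fraction is around one part in ten billion; with it, roughly one muon in ten makes it down, which is what detectors actually see. The same numbers, viewed from the muon's frame, are the length-contraction version of the story.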


by ZapperZ ( at November 13, 2018 09:51 PM

November 12, 2018

Jon Butterworth - Life and Physics

James Stirling
Today I got the terrible news of the untimely death of Professor James Stirling. A distinguished particle physicist and until August the Provost of Imperial College London, he will be remembered with fondness and admiration by many. Even astronomers – … Continue reading

by Jon Butterworth at November 12, 2018 05:13 PM

The n-Category Cafe

A Well Ordering Is A Consistent Choice Function

Well orderings have slightly perplexed me for a long time, so every now and then I have a go at seeing if I can understand them better. The insight I’m about to explain doesn’t resolve my perplexity, it’s pretty trivial, and I’m sure it’s well known to lots of people. But it does provide a fresh perspective on well orderings, and no one ever taught me it, so I thought I’d jot it down here.

In short: the axiom of choice allows you to choose one element from each nonempty subset of any given set. A well ordering on a set is a way of making such a choice in a consistent way.

Write <semantics>P(X)<annotation encoding="application/x-tex">P'(X)</annotation></semantics> for the set of nonempty subsets of a set <semantics>X<annotation encoding="application/x-tex">X</annotation></semantics>. One formulation of the axiom of choice is that for any set <semantics>X<annotation encoding="application/x-tex">X</annotation></semantics>, there is a function <semantics>h:P(X)X<annotation encoding="application/x-tex">h: P'(X) \to X</annotation></semantics> such that <semantics>h(A)A<annotation encoding="application/x-tex">h(A) \in A</annotation></semantics> for all <semantics>AP(X)<annotation encoding="application/x-tex">A \in P'(X)</annotation></semantics>.

But if we think of $h$ as a piece of algebraic structure on the set $X$, it’s natural to ask that $h$ behaves in a consistent way. For example, given two nonempty subsets $A, B \subseteq X$, how can we choose an element of $A \cup B$?

  • We could, quite simply, take $h(A \cup B) \in A \cup B$.

  • Alternatively, we could first take $h(A) \in A$ and $h(B) \in B$, then use $h$ to choose an element of $\{h(A), h(B)\}$. The result of this two-step process is $h(\{h(A), h(B)\})$.

A weak form of the “consistency” I’m talking about is that these two methods give the same outcome:

$$ h(A \cup B) = h(\{h(A), h(B)\}) $$

for all $A, B \in P'(X)$. The strong form is similar, but with arbitrary unions instead of just binary ones:

$$ h\Bigl( \bigcup \Omega \Bigr) = h\bigl( \{ h(A) : A \in \Omega \} \bigr) $$

for all $\Omega \in P'P'(X)$.

Let’s say that a function $h \colon P'(X) \to X$ satisfying the weak or strong consistency law is a weakly or strongly consistent choice function on $X$.

The central point is this:

A consistent choice function on a set $X$ is the same thing as a well ordering on $X$.

That’s true for consistent choice functions in both the weak and the strong sense — they turn out to be equivalent.

The proof is a pleasant little exercise. Given a well ordering $\leq$ on $X$, define $h \colon P'(X) \to X$ by taking $h(A)$ to be the least element of $A$. It’s easy to see that this is a consistent choice function. In the other direction, given a consistent choice function $h$ on $X$, define $\leq$ by

$$ x \leq y \iff h(\{x, y\}) = x. $$

You can convince yourself that $\leq$ is a well ordering and that $h(A)$ is the least element of $A$, for any nonempty $A \subseteq X$. The final task, also easy, is to show that the two constructions (of a consistent choice function from a well ordering and vice versa) are mutually inverse. And that’s that.

(For anyone following in enough detail to wonder about the difference between weak and strong: you only need to assume that $h$ is a weakly consistent choice function in order to prove that the resulting relation $\leq$ is a well ordering, but if you start with a well ordering $\leq$, it’s clear that the resulting function $h$ is strongly consistent. So weak is equivalent to strong.)
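On a finite set a well ordering is just a total order, and the consistent choice function it induces is “take the least element”, so the whole correspondence can be checked by brute force. Here is a minimal sketch in Python (the code and names are mine, not from the post):

```python
from itertools import combinations

def nonempty_subsets(xs):
    """All nonempty subsets of xs, as frozensets (this plays the role of P'(X))."""
    return [frozenset(c) for r in range(1, len(xs) + 1)
            for c in combinations(xs, r)]

X = [1, 3, 4, 5]
h = min  # the choice function induced by the usual well ordering on integers

# Weak consistency: h(A ∪ B) = h({h(A), h(B)}) for all nonempty A, B.
subsets = nonempty_subsets(X)
assert all(h(A | B) == h({h(A), h(B)}) for A in subsets for B in subsets)

# Strong consistency: h(⋃Ω) = h({h(A) : A ∈ Ω}) for a family Ω of subsets.
Omega = [frozenset({1, 3}), frozenset({4, 5}), frozenset({3, 5})]
assert h(frozenset().union(*Omega)) == h({h(A) for A in Omega})

# Recovering the order from h: x ≤ y iff h({x, y}) = x.
leq = lambda x, y: h({x, y}) == x
assert all(leq(x, y) == (x <= y) for x in X for y in X)
```

Swapping `min` for a function that merely picks *some* element of each subset, without reference to a single underlying order, makes the consistency assertions fail, which is the point of the theorem.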

For me, the moral of the story is as follows. As everyone who’s done some set theory knows, if we assume the axiom of choice then every set can be well ordered. Understanding well orderings as consistent choice functions, this says the following:

If we’re willing to assume that it’s possible to choose an element of each nonempty subset of a set, then in fact it’s possible to make the choice in a consistent way.

People like to joke that the axiom of choice is obviously true, and that the well orderability of every set is obviously false. (Or they used to, at least.) The theorem on well ordering is derived from the axiom of choice by an entirely uncontroversial chain of reasoning, so I’ve always taken that joke to be the equivalent of throwing one’s hands up in despair: isn’t math weird! Look how this highly plausible statement implies an implausible one!

So the joke expresses a breakdown in many people’s intuitions. And with well orderings understood in the way I’ve described, we can specify the point at which the breakdown occurs: it’s in the gap between making a choice and making a consistent choice.

by leinster at November 12, 2018 02:08 PM

Jon Butterworth - Life and Physics

Brief Answers to the Big Questions by Stephen Hawking – review
Back in the Guardian (well, the Observer actually) with a review of Stephen Hawking’s final book . A couple of paragraphs didn’t make the edit; no complaints from me about that, but I put them here mainly for the sake of … Continue reading

by Jon Butterworth at November 12, 2018 10:35 AM

November 11, 2018

Lubos Motl - string vacua and pheno

New veins of science can't be found by a decree
Edwin has pointed out that a terrifying anti-science article was published in The Japan Times yesterday:
Scientists spend too much time on the old.
The author, the Bloomberg opinion columnist Noah Smith (later I noticed that the rant was first published by Bloomberg), starts by attacking Ethan Siegel's text that had supported a new particle collider. Smith argues that because too many scientists are employed in projects that extend previous knowledge, which leads to diminishing returns, all projects extending old science should be defunded and the money should be distributed to completely new small projects that have far-reaching practical consequences.

What a pile of toxic garbage!

Let's discuss the content of Smith's diatribe in some detail:
In a recent Forbes article, astronomer and writer Ethan Siegel called for a big new particle collider. His reasoning was unusual. Typically, particle colliders are created to test theories [...] But particle physics is running out of theories to test. [...] But fortunately governments seem unlikely to shell out the tens of billions of dollars required, based on nothing more than blind hope that interesting things will appear.
First of all, Smith says that it's "unusual" to say that the new collider should search for deviations from the Standard Model even if we don't know which ones we should expect. But there is nothing unusual about it at all and by his anxiety, Smith only shows that he doesn't have the slightest clue what science is.

The falsification of existing theories is how science makes progress – pretty much the only way experimenters contribute to progress in science. This statement boils down to the fact that science can never prove theories to be completely right – after all, with the exception of a truly final theory, theories of physics are never quite right.

Instead, what an experiment can do reliably enough is to show that a theory is wrong. When the deviations from the old theoretical predictions are large enough so that we can calculate that it is extremely unlikely for such large deviations to occur by chance, we may claim with certainty that something that goes beyond the old theory has been found.

This is how the Higgs boson was found, too. The deviation of the measured data from the truncated Standard Model prediction that assumed "no Higgs boson exists" grew to 5 sigma, at which point the Higgs boson discovery was officially announced.
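For scale, "5 sigma" corresponds to a one-sided Gaussian tail probability of roughly 3 in 10 million. A quick sketch (mine, for illustration; real collider significances come from likelihood fits over many bins, not from this one-liner):

```python
import math

def p_value(z):
    """One-sided Gaussian tail: the chance that background alone
    fluctuates up by z standard deviations or more."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

print(f"{p_value(5):.2e}")  # ~2.87e-07, the conventional discovery threshold
```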

The only true dichotomy boils down to the question whether the new theories and phenomena are first given some particular shape by theorists or by experimenters. The history of physics is full of both examples. Sometimes theorists have reasons to become sufficiently certain that a new phenomenon should exist because of theoretical reasons, and that phenomenon is later found by an experiment. Sometimes an experiment sees a new and surprising phenomenon and theorists only develop a good theory that explains the phenomenon later.

Theorists are surely not running out of theories to test. There are thousands of models – often resulting from very deep and highly motivated frameworks such as string theory or at least grand unification – with tens of thousands of predictions, and all of them may be tested. The recent frequency of discoveries just means that we shouldn't expect a new phenomenon that goes beyond the Standard Model to be discovered every other day. This is how Nature works.

In this lovely video promoting a location for the ILC project (another one has won), I think that the English subtitles were only added recently. The girl is a bored positron waiting for an electron.

Smith says that the expectation that new interesting things may be seen by a new collider is a "blind hope". But it is not a hope, let alone a blind one. It is a genuine possibility. It is a fact of physics that we don't know whether the Standard Model works around the collision energy of 100 TeV. It either does or it does not. Indeed, because new physics is more interesting, physicists may "hope" that this is the answer that the collider will give. But the collider will give us some nontrivial information in either case.

Because the "new physics" answer is more interesting, one may say that the construction of colliders is partially a bet, a lottery ticket, too. But most progress just couldn't have emerged without experimenting, betting, taking risks. If you want to avoid all risks, if you insist on certainty, you will have to rely on welfare (or, if you are deciding how to invest your money, you will need to rely on cash holdings or savings accounts with very low interest rates). You are a coward. You are not an important person for the world and you shouldn't get away with attempts to pretend that you are one.

Also, Smith says that governments are "unlikely to shell out the tens of billions". That's rubbish. Just like in the past, governments are very likely to reserve these funds because those are negligible amounts of money relative to the overall budgets – and at least the symbolic implications of these modest expenses are far-reaching. When America was building its space program, a great fraction of the federal budget was being spent on it – the share rose above 4% in a peak year. Compared to that, the price of a big collider is negligible. All governments have some people who know enough to be sure that rants by anti-science activists similar to Smith are worth nothing. Smith lives in a social bubble where his delusions are probably widespread, but all the people in that bubble are largely disconnected from the most important things in the world and the society.

Japan is just now deciding whether to host the ILC.
Particle physicists have referred to this seeming dead end as a nightmare scenario. But it illustrates a deep problem with modern science. Too often, scientists expect to do bigger, more expensive versions of the research that worked before. Instead, what society often needs is for researchers to strike out in entirely new directions.
The non-discovery of new physics at the LHC has been described with disappointed phrases because people prefer when experiments stimulate their own thinking and curiosity – and that of other physicists. Of course scientists prefer to do things where the chance of discovering something really new is higher. However, in fundamental physics, building a collider with a higher energy is the best known way to do it. You may be ignorant about this fact, Mr Smith, but it's just because you are an idiot, not because of some hypothetical flaw of high energy physics, which is called high energy physics for a good reason. It's called that because increasing the energy is largely equivalent to making progress: higher energy is equivalent to shorter distance scales, where we increasingly understand what is going on with an improving resolution.

If it were possible and easy to "strike out in entirely new directions", scientists would do it for obvious reasons – it would surely be great for the career of the author who finds a new direction. But qualitatively new discoveries are rare and cannot be ordered by a decree. We don't know in what exact directions "something new and interesting is hiding" which is why people must sort of investigate all promising enough directions. And looking in all the similar directions of "various new phenomena that may be seen at even higher energies" is simply the most promising strategy in particle physics according to what we know.

Equally importantly, extending the research strategies "that have worked before" isn't a sin. It's really how science always works. Scientific discoveries are never quite disconnected from the previous ones. Isaac Newton has found quite a revolutionary new direction – the quantitative basis for physics as we know it. He's still known for the proposition
If I have seen further it is by standing on the shoulders of giants.
Newton was partly joking – he wanted to mock some smaller and competing minds, namely Gottfried Leibniz and especially Robert Hooke, who was short – but he was (and everyone was) aware of the fact that new discoveries don't take place in a vacuum. Newton still had to build on the mathematics that was developed before him. When showing that the laws of gravity worked, he found Kepler's laws of planetary motion to be a very helpful summary of what his theory should imply, and so on.

Every new scientific advance is a "twist" in some previous ideas. It just cannot be otherwise. All the people who are claiming to make groundbreaking discoveries that are totally disconnected from the science of the recent century or so are full-blown crackpots.
During the past few decades, a disturbing trend has emerged in many scientific fields: The number of researchers required to generate new discoveries has steadily risen.
Yup. In some cases, the numbers may be reduced but in others, they cannot. For example, and this example is still rather typical for modern theoretical physics, M-theory was still largely found by one person, Edward Witten. It's unquestionable that most of the theoretical physicists have contributed much less science than Witten, even much less "science output per dollar". On the other hand, it's obvious that Witten has only discovered a small minority of the physics breakthroughs. If the number of theoretical physicists were one or comparable to one, the progress would be almost non-existent.

Experimental particle physics requires many more people for a single paper – like the 3,000 members of the ATLAS Collaboration (and an extra 3,000 in CMS). But there are rather good reasons for that. ATLAS and CMS don't really differ from a company that produces something. For example, the legendary soft drink maker Kofola Czechoslovakia also has close to 3,000 employees. In Kofola, ATLAS, as well as CMS, the people do different kinds of work, and if there's an obvious way to fire some of them while keeping all the vital processes going, it gets done.

You may compare Kofola, ATLAS, and CMS and decide which of them is doing a better job for the society. People in Czechoslovakia and Yugoslavia drink lots of Kofola products. People across the world are inspired to think about the collisions at the Large Hadron Collider. From a global perspective, Kofola, ATLAS, and CMS are negligible groups of employees. Each of them employs less than one-half of one millionth of the world population.

Think about the millions of people in the world who are employed in tax authorities although most of them could be fired and the tax collection could be done much more effectively with relatively modest improvements. Why does Mr Smith attack the teams working for the most important particle accelerator and not the tax officials? Because he is actually not motivated by any efficiency. He is driven by his hatred towards science.
In the 1800s, a Catholic monk named Gregor Mendel was able to discover some of the most fundamental concepts of genetic inheritance by growing pea plants.
Mendel was partly lucky – like many others. But his work cannot be extracted from the context. Mendel was one employee in the abbey in Brno, University of Olomouc, and perhaps other institutions in Czechia whose existence was at least partly justified by the efforts to deepen the human knowledge (or by efforts to breed better plants for economic reasons). At any rate, fundamental discoveries such as Newton's or Mendel's were waiting – they were the low-hanging fruits.

Indeed, one faces diminishing returns after the greatest discoveries are made, and this is true in every line of research and other activities. But this is a neutral and obvious fact, not something that can be rationally used against the whole fields. It's really a tautology – returns are diminishing after the greatest discoveries, otherwise they wouldn't be greatest. ;-) Particle physics didn't become meaningless after some event – any event, let's say the theoretical discovery of quantum field theory or the experimental discovery of W and Z bosons – just like genetics didn't become meaningless after Mendel discovered his fundamental laws. On the contrary, these important events were the beginnings when things actually started to be fun.

Smith complains that biotech companies have grown into multi-billion enterprises while Mendel was just playing in his modest garden. Why are billions spent on particle physics or genetics? Because they can be. Mankind produces almost $100 trillion in GDP every year. Of course some fraction of it simply has to go to genetics and particle physics because they're important, relatively speaking. It is ludicrous to compare the spending on human genome projects or the new colliders with Mendel's garden because no one actually has the choice of funding either Mendel's research or the International Particle Collider. These are not true competitors of one another because they're separated by 150 years! People across the epochs can't compete for funds. On top of that, the world GDP was smaller than today's by orders of magnitude 150 years ago.

Instead, we must compare whether we pay more money for a collider and less money e.g. for soldiers in Afghanistan (the campaign has cost over $1 trillion; or anything else, I don't want this text to be focused on interventionism) or vice versa. These are actually competing options. Of course particle physics and genetics deserve tens of billions every decade, to say the least. Ten billion dollars is just 0.01% of the world GDP, an incredibly tiny fraction. Even if there were almost no results, studying science is a part of what makes us human. Nations that don't do such things are human to a lesser degree and animals to a higher degree and they can be more legitimately treated as animals by others – e.g. eradicated. For this reason, paying something for science (even pure science) also follows from the survival instincts.
The universe of scientific fields isn’t fixed. Today, artificial intelligence is an enormously promising and rapidly progressing area, but back in 1956... It wasn’t until computers became sufficiently powerful, and data sets sufficiently big, that AI really took off.
Here we see one thing that might seem to support Smith's case instead. But I don't think that most people who work on artificial intelligence should be called scientists. They're really engineers – or even further from science. Their goal isn't to describe how Nature works. Their task is to invent and build new things that can do certain new things but that exploit the known pieces that work according to known laws.
To keep rapid progress going, it makes sense to look for new veins of scientific discovery. Of course, there’s a limit to how fast that process can be forced...
The main problem isn't "how fast that process can be forced". The main problem with Smith's diatribe is that discovery itself cannot be forced or pre-programmed; and that the search for some things according to some strategy shouldn't be forced by laymen such as Mr Smith at all, because such enforced behavior reduces the freedom of the scientists, which slows down progress. And the rate of progress is whatever it is. There aren't any trivial ways to make it much faster, and claims to the contrary are pure wishful thinking. No one should be allowed to harass other people just because the world disagrees with his wishful thinking.
The real point is that it just cannot be clear to everybody (or anybody!) from the beginning which research strategy or direction is likely to become interesting. But the scientists themselves are still more likely to make the right guess about the hot directions of future research than some ignorant laymen similar to Mr Smith who are obsessed with "forcing things" on everyone else.
But the way that scientists now are trained and hired seems to discourage them from striking off in bold new directions.
Mr Smith could clearly crawl into Mr Sm*lin's rectum and vice versa, to make it more obvious that allowing scum like that is a vicious circle.

What is actually discouraging scientists from striking off in bold new directions are anti-science rants such as this one by Mr Smith that clearly try to restrict what science can do (and maybe even think). If you think that you can make some groundbreaking discovery in a new direction, why don't you just do it yourself? Or together with thousands of similar inkspillers who are writing similar cr*p? And if you can't, why don't you exploit your rare opportunity to shut up? You don't have the slightest clue about science and the right way to do it and your influence over these matters is bound to be harmful.
This means that as projects like the Hadron Collider require ever-more particle physicists, ...
It is called the Large Hadron Collider, not just Hadron Collider, you Little Noam Smith aßhole.
With climate change a looming crisis, the need to discover sustainable energy technology...
Here we go. Only scientifically illiterate imbeciles like you believe that "climate change is a looming crisis". (I have already written several blog posts about dirty scumbags who would like to add physics funds to the climate hysteria.)

Just the would-be "research" into climate change has devoured over $100 billion – like ten Large Hadron Colliders – and the scientific fruits of this spending are non-existent. The only actual consequence of this "research" is that millions of stupid laymen such as Mr Smith have been fooled into believing that we face a problem with the climate. It wasn't really research, it has been a propaganda industry.

The money that has been wasted for the insane climate change hysteria is an excellent example of the crazy activities and funding that societies degenerate into if they start to be influenced by arrogant yet absolutely ignorant people similar to Mr Smith. That wasting (and the funds wasted for the actual policies are much higher, surely trillions) is an excellent example showing how harmful the politicization of science is.

The $10 billion Large Hadron Collider has still measured the mass of the Higgs boson – the only elementary spinless particle we know – as 125 GeV. The theoretically allowed interval was between 50 GeV and 800 GeV or so. What is a similar number that we have learned from the $100 billion climate change research in the recent 30 years?
So what science needs isn’t an even bigger particle collider; it needs something that scientists haven’t thought of yet.
The best way is to pick the most brilliant, motivated, and hard-working people as the scientists, allow them to do research as they see fit, and add extra funds to those that have made some significant achievements and who display an increased apparent likelihood of new breakthroughs or at least valuable advances to come – while making sure that aggressive yet stupid filth such as Mr Noah Smith doesn't intimidate them in any way.

Off-topic: It looks good for the Czech girls in the finals of the Fed Cup against the U.S. – 2-to-0 after Saturday matches. Both teams are without their top stars. On the Czech side, Plíšková is injured while Kvitová is ill. Incidentally, I noticed that the U.S. players and coach are super-excited whenever some Czechs play the most popular Czech fans' melody in any sports – which just happens to be When the Saints Go Marching In. ;-)

Sunday starts with the match of the "Russian" players from both teams – Kenin and Siniaková. Update: Siniaková looked much more playful and confident all the time, but it ended up being an incredibly tight and dramatic 4-hour match. But Siniaková became a new Czech heroine and my homeland has increased its number of Fed Cups from 10 to 11 – from superstring theory to M-theory.

As the video above suggests, Kvitová would be no good because she has lost to a Swiss retired painter (her fan) Mr Hubert Schmidt. You may see that she understands her occupation even as a theorist – she could rate him properly (if someone plays like that, he has to be famous, she correctly reasoned) and after the handshake, she was also able to identify the real name. ;-) The height and voice helped, too, she admitted. A touching prank.

by Luboš Motl at November 11, 2018 02:58 PM

November 09, 2018

ZapperZ - Physics and Physicists

Comparing Understanding of Graphs Between Physics and Psychology Students
I ran across this paper a while back, but didn't get to reading it carefully till now.

If you have followed this blog for any considerable period of time, you will have seen several posts where I emphasized the importance of physics education, NOT just for the physics knowledge, but also for the intangible skills that come along with it. Skills such as analytical ability and deciding on the validity of what causes what are skills that transcend the subject of physics. These are skills that are important no matter what the students end up doing in life.

While I had mentioned such things to my students during our first day of class each semester, it is always nice when there is EVIDENCE (remember that?) to back such a claim. In this particular study, the researchers compare how students handle and understand the information that they can acquire from graphs on topics outside of their area of study.

The students involved are physics and psychology students in Zagreb, Croatia. They were tested on their understanding of the concepts of slope and area under the graph, their qualitative and quantitative understanding of graphs, and their understanding of graphs in the context of physics and finance. For the latter area (finance), neither group of students had received any lessons in that subject, so both groups were presumably equally unfamiliar with it.

Before we proceed, I found that in Croatia, physics is a compulsory subject in pre-college education there, which is quite heartening.

Physics is taught as a compulsory subject in the last two grades of all elementary schools and throughout four years of most of high schools in Croatia. Pupils are taught kinematics graphs at the age 15 and 16 (last grade of elementary school and first year of high school). Psychology students were not exposed to the teaching on kinematics graphs after high school, while physics students learned about kinematics graphs also in several university courses. Physics and psychology students had not encountered graphs related to prices, money, etc., in their formal education.
So the psychology students in college are already familiar with basic kinematics and graphs, but did not go further into the subject once in college, unlike the physics students. I'd say that this is more than what most high school students in the US have gone through, since physics is typically not required in high schools here.

In any case, the first part of the study wasn't too surprising: physics students did better overall at physics questions related to the slope and area under the graph. But it was interesting that the understanding of "area under the graph" tends to be problematic for both groups. And when we get to the graphs related to finance, it seems clear that physics students were able to extract the necessary information better than psychology students. This is especially true when it comes to the quantitative aspect of it.

You should read the in-depth analysis and discussion of the result. I'll quote part of their conclusion here:

All students solved the questions about graph slope better than the questions about the area under a graph. Psychology students had rather low scores on the questions about area under a graph, and physics students spent more time than psychology students on questions about area under a graph. These results indicate that area under a graph is quite a difficult concept that is unlikely to be developed without formal teaching and learning, and that more attention should be given to this topic in physics courses.

Physics and psychology students had comparable scores on the qualitative questions on slope which indicates that the idea of slope is rather intuitive. However, many psychology students were not able to calculate the slope, thus indicating that their idea of slope was rather vague. This suggests that the intuitive idea of slope, probably held by most students, should be further developed in physics courses and strongly linked to the mathematical concept of slope that enables students to quantify slope.

Generally, physics students solved the qualitative and the quantitative questions equally well, whereas psychology students solved qualitative questions much better than the quantitative questions. This is further evidence that learning physics helps students to develop deeper understanding of concepts and the ability to quantitatively express relationships between quantities.

The key point here is the "transfer" of knowledge that they have into an area that they are not familiar with. It is clear that physics students were able to extract the information in the area of finance better than psychology students. This is an important point that should be highlighted, because it shows how skills learned from a physics course can transfer to other areas, and that a student need not be a physics major to gain something important and relevant from a physics class.
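The transfer works because the mathematics is domain-independent: slope is a rate of change and area under the graph is an accumulated total, whatever the axes measure. A small numerical illustration (my own example, not one of the study's test items):

```python
# A velocity-time graph with constant acceleration: v(t) = 2t in m/s.
ts = [0.1 * i for i in range(101)]   # time from 0 to 10 s
vs = [2.0 * t for t in ts]           # velocity at each time

# Slope of the graph = acceleration (m/s^2).
slope = (vs[-1] - vs[0]) / (ts[-1] - ts[0])

# Area under the graph (trapezoid rule) = displacement (m).
area = sum(0.5 * (vs[i] + vs[i + 1]) * (ts[i + 1] - ts[i]) for i in range(100))

print(round(slope, 6), round(area, 6))  # 2.0 100.0

# Relabel the axes (dollars per day against days) and the identical two
# operations answer the finance questions instead.
```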


by ZapperZ at November 09, 2018 02:23 PM

November 08, 2018

ZapperZ - Physics and Physicists

The Origin Of Matter's Mass
I can't believe it. I'm reporting on Ethan Siegel's article two days in a row! The last one yesterday was a doozy, wasn't it? :)

This one is a bit different and interesting. The first part of the article describes our understanding of where the mass of matter comes from. I want to highlight this because it clarifies one very important misconception that many people have, especially the general public. After all the brouhaha surrounding the Higgs and its discovery, a lot of people seem to think that the masses of every particle and entity can be explained using the Higgs. This is clearly false, as stated in the article.

Yet if we take a look at the proton (made of two up and one down quark) and the neutron (made of one up and two down quarks), a puzzle emerges. The three quarks within a proton or neutron, even when you add them all up, comprise less than 0.2% of the known masses of these composite particles. The gluons themselves are massless, while the electrons are less than 0.06% of a proton's mass. The whole of matter, somehow, weighs much, much more than the sum of its parts.

The Higgs may be responsible for the rest mass of these fundamental constituents of matter, but the whole of a single atom is nearly 100 times heavier than the sum of everything known to make it up. The reason has to do with a force that's very counterintuitive to us: the strong nuclear force. Instead of one type of charge (like gravity, which is always attractive) or two types (the "+" and "-" charges of electromagnetism), the strong force has three color charges (red, green and blue), where the sum of all three charges is colorless.

So while we may point to the Higgs as the origin of mass in, say, leptons, for hadrons this is not sufficient. The strong force itself contributes a significant amount to the mass of these particles. The so-called "God Particle" is not that godly, because it can't explain everything.

The other interesting part of the article is that he included a "live blog" of the talk by Phiala Shanahan that occurred yesterday at the Perimeter Institute, related to this topic. So you may want to read through the transcript and see if you get anything new.


by ZapperZ ( at November 08, 2018 03:34 PM

November 04, 2018

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

A welcome mid-term break

Today marks the end of the mid-term break for many of us in the third level sector in Ireland. While a non-teaching week in the middle of term has been a stalwart of secondary schools for many years, the mid-term break only really came to the fore in the Irish third level sector when our universities, Institutes of Technology (IoTs) and other colleges adopted the modern model of 12-week teaching semesters.

Also known as ‘reading week’ in some colleges, the break marks a precious respite in the autumn/winter term. A chance to catch one’s breath, a chance to prepare teaching notes for the rest of term and a chance to catch up on research. Indeed, it is the easiest thing in the world to let the latter slide during the teaching term – only to find that deadlines for funding, book chapters and conference abstracts quietly slipped past while one was trying to keep up with teaching and administration duties.


A quiet walk in Foxrock on the last day of the mid-term break

Which brings me to a pet peeve. All those years later, teaching loads in the IoT sector remain far too high. Lecturers are typically assigned four teaching modules per semester, a load that may have been reasonable in the early days of teaching to Certificate and Diploma level, but makes little sense in the context of today’s IoT lecturer who may teach several modules at 3rd and 4th year degree level, with typically at least one brand new module each year – all of this whilst simultaneously attempting to keep up the research. It’s a false economy if ever there was one, as many a new staff member, freshly graduated from a top research group, will simply abandon research after a few busy years.

Of course, one might have expected to hear a great deal about this issue in the government's plan to ‘upgrade’ IoTs to technological university status. Actually, I have yet to see any public discussion of a prospective change in the teaching contracts of IoT lecturers – a question of money, no doubt. But this is surely another indication that we are talking about a change in name, rather than substance…

by cormac at November 04, 2018 05:15 PM

November 02, 2018

The n-Category Cafe

More Papers on Magnitude

I’ve been distracted by other things for the last few months, but in that time several interesting-looking papers on magnitude (co)homology have appeared on the arXiv. I will just list them here with some vague comments. If anyone (including the author!) would like to write a guest post on any of them then do email me.

For years a standing question was whether magnitude was connected with persistent homology, as both had a similar feel to them. Here Nina relates magnitude homology with persistent homology.

In both my paper with Richard on graphs and Tom Leinster and Mike Shulman’s paper on general enriched categories, it was magnitude homology that was considered. Here Richard introduces the dual theory, which he shows has the structure of a non-commutative ring.

I haven’t looked at this yet as I only discovered it last night. However, when I used to think a lot about gerbes and Deligne cohomology I was a fan of Kiyonori Gomi’s work with Yuji Terashima on higher dimensional parallel transport.

This is the write-up of some results he announced in a discussion here at the Café. These results answered questions asked by me and Richard in our original magnitude homology for graphs paper, for instance proving the expression for magnitude homology of cyclic graphs that we’d conjectured and giving pairs of graphs with the same magnitude but different magnitude homology.

by willerton ( at November 02, 2018 10:12 AM

November 01, 2018

Clifford V. Johnson - Asymptotia

Trick or Treat

Maybe a decade or so ago* I made a Halloween costume which featured this simple mask decorated with symbols. “The scary face of science” I called it, mostly referring to people’s irrational fear of mathematics. I think I was being ironic. In retrospect, I don’t think it was funny at all.

(Originally posted on Instagram here.)


(*I've since found the link. Seems it was actually 7 years ago.) Click to continue reading this post

The post Trick or Treat appeared first on Asymptotia.

by Clifford at November 01, 2018 05:38 AM

The n-Category Cafe

2-Groups in Condensed Matter Physics

This blog was born in 2006 when a philosopher, a physicist and a mathematician found they shared an interest in categorification — and in particular, categorical groups, also known as 2-groups. So it’s great to see 2-groups showing up in theoretical condensed matter physics. From today’s arXiv papers:

Abstract. Sigma models effectively describe ordered phases of systems with spontaneously broken symmetries. At low energies, field configurations fall into solitonic sectors, which are homotopically distinct classes of maps. Depending on context, these solitons are known as textures or defect sectors. In this paper, we address the problem of enumerating and describing the solitonic sectors of sigma models. We approach this problem via an algebraic topological method – combinatorial homotopy, in which one models both spacetime and the target space with algebraic objects which are higher categorical generalizations of fundamental groups, and then counts the homomorphisms between them. We give a self-contained discussion with plenty of examples and a discussion on how our work fits in with the existing literature on higher groups in physics.

The fun will really start when people actually synthesize materials described by these models! Condensed matter physicists are doing pretty well at realizing theoretically possible phenomena in the lab, so I’m optimistic. But I don’t think it’s happened yet.

My friend Chenchang Zhu, a mathematician, has also been working on these things with two physicists. The abstract only briefly mentions 2-groups, but they play a fundamental role in the paper:

Abstract. A discrete non-linear $\sigma$-model is obtained by triangulating both the space-time $M^{d+1}$ and the target space $K$. If the path integral is given by the sum of all the complex homomorphisms $\phi \colon M^{d+1} \to K$, with a partition function that is independent of the space-time triangulation, then the corresponding non-linear $\sigma$-model will be called a topological non-linear $\sigma$-model, which is exactly soluble. Those exactly soluble models suggest that phase transitions induced by fluctuations with no topological defects (i.e. fluctuations described by homomorphisms $\phi$) usually produce a topologically ordered state and are topological phase transitions, while phase transitions induced by fluctuations with all the topological defects give rise to trivial product states and are not topological phase transitions. 
If $K$ is a space whose only non-trivial homotopy group is a finite first homotopy group $G$, those topological non-linear $\sigma$-models can realize all $(3+1)d$ bosonic topological orders without emergent fermions, which are described by Dijkgraaf-Witten theory with gauge group $\pi_1(K)=G$. Here, we show that the $(3+1)d$ bosonic topological orders with emergent fermions can be realized by topological non-linear $\sigma$-models with $\pi_1(K)=$ finite groups, $\pi_2(K)=\mathbb{Z}_2$, and $\pi_{n>2}(K)=0$. A subset of those topological non-linear $\sigma$-models corresponds to 2-gauge theories, which realize and classify bosonic topological orders with emergent fermions that have no emergent Majorana zero modes at triple string intersections. The classification of $(3+1)d$ bosonic topological orders may correspond to a classification of unitary fully dualizable fully extended topological quantum field theories in 4 dimensions.

The cobordism hypothesis, too, is getting into the act in the last sentence!

by john ( at November 01, 2018 05:15 AM

October 30, 2018

Jon Butterworth - Life and Physics

Dark Matters
This is great. I had nothing to do with it, it happened in the 95% of the Physics Department (and of the lives of my PhD students) about which I know nothing. I recommend you watch it and form your … Continue reading

by Jon Butterworth at October 30, 2018 08:57 PM

October 27, 2018

Robert Helling - atdotde

Interfere and it didn't happen
I am a bit late for the party, but I also wanted to share my two cents on the paper "Quantum theory cannot consistently describe the use of itself" by Frauchiger and Renner. After sitting down and working out the math for myself, I found that the analysis in this paper and the blog post by Scott (including many of the 160+ comments, some by Renner) share a lot with what I am about to say, but maybe I can still contribute a slight twist.

Coleman on GHZS

My background is the talk "Quantum Mechanics In Your Face" by Sidney Coleman, which I consider the best argument why quantum mechanics cannot be described by a local and realistic theory (from which I would conclude it is not realistic). In a nutshell, the argument goes like this: Consider the three qubit state

$$\Psi=\frac 1{\sqrt 2}(\uparrow\uparrow\uparrow-\downarrow\downarrow\downarrow)$$

which is both an eigenstate of eigenvalue -1 for $\sigma_x\otimes\sigma_x\otimes\sigma_x$ and an eigenstate of eigenvalue +1 for $\sigma_x\otimes\sigma_y\otimes\sigma_y$ or any permutation thereof. This means that, given that the individual outcome of measuring a $\sigma$-matrix on a qubit is $\pm 1$, when measuring all three spins in the x-direction there will be an odd number of -1 results, but if two spins are measured in the y-direction and one in the x-direction there is an even number of -1's.

The latter tells us that the outcome of one x-measurement is the product of the two y-measurements on the other two spins. But multiplying this for all three spins we get, in shorthand, $XXX=(YYY)^2=+1$, in contradiction to the -1 eigenvalue for all three x-measurements.

The conclusion is (unless you assume some non-local conspiracy between the spins) that one has to take seriously the fact that on a given spin I cannot measure both $\sigma_x$ and $\sigma_y$, and thus when actually measuring one of them I must not even assume that the other has some (albeit unknown) value $\pm 1$, as that leads to the contradiction. Stuff that I cannot measure does not have a value (that is also my understanding of what "not realistic" means).
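These eigenvalue statements are easy to verify numerically. Here is a minimal numpy sketch (the variable names are mine): with $\uparrow,\downarrow$ the $\sigma_z$ eigenstates, the GHZ state is a $-1$ eigenstate of $\sigma_x\otimes\sigma_x\otimes\sigma_x$ and a $+1$ eigenstate of $\sigma_x\otimes\sigma_y\otimes\sigma_y$ and its permutations:

```python
import numpy as np

# Pauli matrices and the single-spin z-basis states
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
up = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)

def kron3(a, b, c):
    """Tensor product of three single-spin objects."""
    return np.kron(np.kron(a, b), c)

# the GHZ state (up up up - down down down)/sqrt(2)
psi = (kron3(up, up, up) - kron3(down, down, down)) / np.sqrt(2)

# eigenvalue -1 for sigma_x (x) sigma_x (x) sigma_x ...
assert np.allclose(kron3(X, X, X) @ psi, -psi)
# ... and +1 for sigma_x (x) sigma_y (x) sigma_y and its permutations
for ops in [(X, Y, Y), (Y, X, Y), (Y, Y, X)]:
    assert np.allclose(kron3(*ops) @ psi, psi)
print("all eigenvalue checks pass")
```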

Frauchiger and Renner

Now to the recent Nature paper. In short, they are dealing with two qubits (by which I only mean two-state systems). The first is in a box L' (I will try to use the somewhat unfortunate nomenclature from the paper) and the second is in a box L (L stands for lab). For L, we use the usual z-basis of $\uparrow$ and $\downarrow$ as well as the x-basis $\leftarrow = \frac 1{\sqrt 2}(\downarrow - \uparrow)$  and $\rightarrow  = \frac 1{\sqrt 2}(\downarrow + \uparrow)$ . Similarly, for L' we use the basis $h$ and $t$ (heads and tails, as it refers to a coin) as well as $o = \frac 1{\sqrt 2}(h - t)$ and $f  = \frac 1{\sqrt 2}(h+t)$.  The two qubits are prepared in the state

$$\Phi = \frac{h\otimes\downarrow + \sqrt 2 t\otimes \rightarrow}{\sqrt 3}$$.

Clearly, a measurement of $t$ in box L' implies that box L has to contain the state $\rightarrow$. Call this observation A.

Let's re-express $\rightarrow$ in the z-basis:

$$\Phi =\frac {h\otimes \downarrow + t\otimes \downarrow + t\otimes\uparrow}{\sqrt 3}$$

From this, an observer inside box L who measures $\uparrow$ concludes that the qubit in box L' is in state $t$. Call this observation B.

Similarly, we can express the same state in the x-basis for L':

$$\Phi = \frac{2 f\otimes \downarrow+ f\otimes \uparrow - o\otimes \uparrow}{\sqrt 6}$$

From this one can conclude that, upon measuring $o$ for the state of L', L is in the state $\uparrow$. Call this observation C.

Using now C, B and A, one is tempted to conclude that observing L' to be in state $o$ implies that L is in state $\rightarrow$. When we express the state in the $of\leftarrow\rightarrow$-basis, however, we get

$$\Phi = \frac{f\otimes\leftarrow+ 3f\otimes \rightarrow + o\otimes\leftarrow - o\otimes \rightarrow}{\sqrt{12}}.$$

so with probability 1/12 we find both $o$  and $\leftarrow$. Again, we hit a contradiction.
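These basis-change claims are quick to check numerically. A minimal numpy sketch (vector conventions are mine) reproducing observation C and the 1/12 probability:

```python
import numpy as np

# z-basis for lab L, h/t-basis for lab L'
up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
h, t = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# the rotated bases used in the argument
right = (down + up) / np.sqrt(2)
left = (down - up) / np.sqrt(2)
f = (h + t) / np.sqrt(2)
o = (h - t) / np.sqrt(2)

# the prepared state Phi = (h x down + sqrt(2) t x right)/sqrt(3)
Phi = (np.kron(h, down) + np.sqrt(2) * np.kron(t, right)) / np.sqrt(3)

# observation C: the amplitude for o together with down vanishes
assert np.isclose(np.kron(o, down) @ Phi, 0.0)

# nevertheless, o together with left occurs with probability 1/12
p = abs(np.kron(o, left) @ Phi) ** 2
print(p)  # 0.08333... = 1/12
```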

One is tempted to use the same way out as above in the three qubit case and say one should not argue about contrafactual measurements that are incompatible with measurements that were actually performed. But Frauchiger and Renner found a set-up which seems to avoid that.

They have observers F and F' ("friends") inside the boxes that do the measurements in the $ht$ and $\uparrow\downarrow$ basis whereas later observers W and W' measure the state of the boxes including the observer F and F' in the $of$ and $\leftarrow\rightarrow$ basis.  So, at each stage of A,B,C the corresponding measurement has actually taken place and is not contrafactual!

Interference and it did not happen

I believe the way out is to realise that, at least in retrospect, this analysis stretches the language, and in particular the word "measurement", to the extreme. In order for W' to measure the state of L' in the $of$-basis, he has to interfere its contents, including F', coherently, such that no leftover information from F''s measurement of $ht$ remains. Thus, when W''s measurement is performed, one should not really say that F''s measurement has happened in any real sense, as no possible information is left over. So it is, for any practical purpose, contrafactual.

To see the alternative, consider a variant of the experiment where a tiny bit of information (maybe the position of one air molecule or the excitation of one of F''s neurons) escapes the interference. Let's call the two possible states of that qubit of information $H$ and $T$ (not necessarily orthogonal) and consider instead the state where that degree of freedom is also entangled with the first qubit:

$$\tilde \Phi =  \frac{h\otimes\downarrow\otimes H + \sqrt 2 t\otimes \rightarrow\otimes T}{\sqrt 3}$$.

Then, the result of step C becomes

$$\tilde\Phi = \frac{f\otimes \downarrow\otimes H+ o\otimes \downarrow\otimes H+f\otimes \downarrow\otimes T-o\otimes\downarrow\otimes T + f\otimes \uparrow\otimes T-o \otimes\uparrow\otimes T}{\sqrt 6}.$$

We see that now there is a term containing $o\otimes\downarrow\otimes(H-T)$. Thus, as long as the two possible states of the air molecule/neuron are actually different, observation C is no longer valid and the whole contradiction goes away.
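This decoherence argument can also be checked numerically. A sketch (helper names are mine) with a third "environment" qubit carrying the leaked record:

```python
import numpy as np

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
h, t = np.array([1.0, 0.0]), np.array([0.0, 1.0])
right = (down + up) / np.sqrt(2)
o = (h - t) / np.sqrt(2)

def kron3(a, b, c):
    """Tensor product L' x L x environment."""
    return np.kron(np.kron(a, b), c)

def leaked_state(H, T):
    """Phi-tilde: the environment qubit records H with h and T with t."""
    return (kron3(h, down, H) + np.sqrt(2) * kron3(t, right, T)) / np.sqrt(3)

# no leaked record (H == T): observation C holds, o x down never occurs
phi_closed = leaked_state(up, up)
assert np.isclose(kron3(o, down, up) @ phi_closed, 0.0)

# perfectly distinguishable record (H orthogonal to T): the
# o x down x (H - T) term survives and observation C breaks down
phi_open = leaked_state(up, down)
amp = abs(kron3(o, down, up) @ phi_open) ** 2 + abs(kron3(o, down, down) @ phi_open) ** 2
print(amp)  # probability of o x down is now 1/3 instead of 0
```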

This makes it clear that the whole argument relies on the fact that when W' is doing his measurement, any remnant of the measurement by his friend F' is eliminated, and thus one should view the measurement of F' as if it never happened. Measuring L' in the $of$-basis really erases the measurement of F' in the complementary $ht$-basis.

by Robert Helling ( at October 27, 2018 08:39 AM

October 24, 2018

Jon Butterworth - Life and Physics

The trouble-makers of particle physics
The chances are you have heard quite a bit about the Higgs boson. The goody-two-shoes of particle physics, it may have been hard to find, but when it was discovered it was just as the theory – the Standard Model … Continue reading

by Jon Butterworth at October 24, 2018 11:47 AM

Axel Maas - Looking Inside the Standard Model

Looking for something when no one knows how much is there
This time, I want to continue the discussion from some months ago. Back then, I was rather general on how we could test our most dramatic idea. This idea is connected to what we regard as elementary particles. So far, our idea is that those you have heard about, the electrons, the Higgs, and so on, are truly the basic building blocks of nature. However, we have found a lot of evidence indicating that what we see in experiment, and call by these names, is actually not the same as the elementary particles themselves. Rather, they are a kind of bound state of the elementary ones, which only at first sight look as if they themselves were elementary. Sounds pretty weird, huh? And if it sounds weird, it means it needs to be tested. We did so with numerical simulations. They all agreed perfectly with the ideas. But, of course, it's physics, and thus we also need an experiment. The only question is which one.

We had some ideas already a while back. One of them will be ready soon, and I will talk again about it in due time. But this will be rather indirect, and somewhat qualitative. The other, however, required a new experiment, which may need two more decades to build. Thus, both cannot be the answer alone, and we need something more.

And this more is what we are currently closing in on. Because one needs this kind of weird bound-state structure to make the standard model consistent, not only exotic particles are more complicated than usually assumed. Ordinary ones are too. And the most ordinary are protons, the nuclei of hydrogen atoms. More importantly, protons are what is smashed together at the LHC at CERN. So we already have a machine which may be able to test it. But this is involved, as protons are very messy. Already in the conventional picture they are bound states of quarks and gluons. Our results just say there are more components. Thus, we somehow have to disentangle old and new components, so we have to be very careful in what we do.

Fortunately, there is a trick. All of this revolves around the Higgs. The Higgs has the property that it interacts more strongly with particles the heavier they are. The heaviest particles we know are the top quark, followed by the W and Z bosons. And the CMS experiment (among others) at CERN has a measurement campaign to look at the production of these particles together! That is exactly where we expect something interesting can happen. However, our ideas are not the only ones leading to top quarks and Z bosons. There are many known processes which produce them as well. So we cannot just check whether they are there. Rather, we need to understand whether they are there as expected, e.g. whether they fly away from the interaction in the expected directions and with the expected speeds.

So what a master student and I do is the following. We use a program, called HERWIG, which simulates such events. One of the people who created this program helped us modify it so that we can test our ideas with it. What we now do is rather simple. An input to such simulations is what the structure of the proton looks like. Based on this, the program simulates how the top quarks and Z bosons produced in a collision are distributed. We now just add our conjectured additional contributions to the proton, essentially a little bit of Higgs. We then check how the distributions change. By comparing the changes to what we get in experiment, we can then deduce how large the Higgs contribution in the proton is. Moreover, we can even indirectly deduce its shape, i.e. how the Higgs is located within the proton.

And this we now study. We iterate modifications of the proton structure with comparisons to experimental results and to predictions without this Higgs contribution. Thereby, we constrain the Higgs contribution in the proton bit by bit. At the current time, we know that the data is only sufficient to provide an upper bound on this amount inside the proton. Our first estimates already show that this bound is actually not that strong, and quite a lot of Higgs could be inside the proton. But on the other hand, this is good, because it means that the data expected from the experiments over the next couple of years will be able either to constrain the contribution further, or even to detect it, if it is large enough. At any rate, we now know that we have sensitive leverage to understand this new contribution.

by Axel Maas ( at October 24, 2018 07:26 AM

October 20, 2018

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

The thrill of a good conference

One of the perks of academia is the thrill of presenting results, thoughts and ideas at international conferences. Although the best meetings often fall at the busiest moment in the teaching semester and the travel can be tiring, there is no doubt that interacting directly with one’s peers is a huge shot in the arm for any researcher – not to mention the opportunity to travel to interesting locations and experience different cultures.


The view from my hotel in San Sebastian this morning.

This week, I travelled to San Sebastian in Spain to attend the Third International Conference on the History of Physics, the latest in a series of conferences that aim to foster dialogue between physicists with an interest in the history of their subject and professional historians of science. I think it’s fair to say the conference was a great success, with lots of interesting talks on a diverse range of topics. It didn’t hurt that the meeting took place in the Palacio Miramar, a beautiful building in a fantastic location.


The Palacio Miramar in San Sebastian. 

The conference programme can be found here. I didn’t get to all the talks due to parallel timetabling, but three major highlights for me were ‘Structure or Agent? Max Planck and the Birth of Quantum Theory’ by Massimiliano Badino of the University of Verona, ‘The Principle of Plenitude as a Guiding Theme in Modern Physics’ by Helge Kragh of the University of Copenhagen, and ‘Rutherford’s Favourite Radiochemist: Bertram Borden’ by Edward Davis of the University of Cambridge.


A slide from the paper ‘Max Planck and the Birth of Quantum Theory’

My own presentation was titled ‘The Dawning of Cosmology – Internal vs External Histories’ (the slides are here). In it, I considered the story of the emergence of the ‘big bang’ theory of the universe from two different viewpoints, that of the professional physicist vs. that of the science historian. (The former approach is sometimes termed ‘internal history’, as scientists tend to tell the story of scientific discovery as an interplay of theory and experiment within the confines of science. The latter approach is termed ‘external’ because the professional historian will consider external societal factors such as the prestige of researchers and their institutions and the relevance of national or international contexts.) Nowadays, it is generally accepted that both internal and external factors usually play a role in a given scientific advance, a process that has been termed the co-production of scientific knowledge.


Giving my paper in the conference room

As it was a short talk, I focused on three key stages in the development of the big bang model: the first (static) models of the cosmos that arose from relativity, the switch to expanding cosmologies in the 1930s, and finally the (much more gradual) transition to the idea of a universe that was once small, dense and hot. In preparing the paper, I found that the first stage was driven almost entirely by theoretical considerations (namely, Einstein’s wish to test his newly-minted general theory of relativity by applying it to the universe as a whole), with little evidence of co-production. Similarly, I found that the switch to expanding cosmologies was driven almost entirely by developments in astronomy (namely, Hubble’s observations of the recession of the galaxies). Finally, I found the long rejection of Lemaître’s ‘fireworks’ universe was driven by obvious theoretical problems associated with the model (such as the problem of the singularity and the age paradox), while the eventual acceptance of the model was driven by major astronomical advances such as the discovery of the cosmic microwave background. Overall, my conclusion was that one could give a reasonably coherent account of the early development of modern cosmology in terms of the traditional narrative of an interplay of theory and experiment, with little evidence that social considerations played an important role in this particular story. As I once heard the noted historian Hasok Chang remark in a seminar, ‘Sometimes science is the context’.

Can one draw any general conclusions from this little study? I think it would be interesting to investigate the matter further. One possibility is that social considerations become more important ‘as a field becomes a field’, i.e., as a new area of physics coalesces into its own distinct field, with specialized journals, postgraduate positions and undergraduate courses etc. Could it be that the traditional narrative works surprisingly well when considering the dawning of a field because the co-production effect is less pronounced then? Certainly, I have also found it hard to discern any major societal influence in the dawning of other theories such as special relativity or general relativity.


As a coda, I discussed a pet theme of mine; that the co-productive nature of scientific discovery presents a special problem for the science historian. After all, in order to weigh the relative impact of internal vs external considerations on a given scientific advance, one must presumably have a good understanding of each. But it takes many years of specialist training to attempt to place a scientific advance in its true scientific context, an impossible ask for a historian trained in the humanities. Some science historians avoid this problem by ‘black-boxing’ the science and focusing on social context alone. However, this means the internal scientific aspects of the story are either ignored or repeated from secondary sources, rather than offering new insights from perusing primary materials. Besides, how can one decide whether a societal influence is significant or not without considering the science? For example, Paul Forman’s argument concerning the influence of contemporaneous German culture on the acceptance of the Uncertainty Principle in quantum theory is interesting, but pays little attention to physics; a physicist might point out that it quickly became clear to the quantum theorists (many of whom were not German) that the Uncertainty Principle arose inevitably from wave-particle duality in all three formulations of the theory (see Hendry on this for example).

Indeed, now that it is accepted one needs to consider both internal and external factors in studying a given scientific advance, it’s not obvious to me what the professionalization of science history should look like, i.e., how the next generation of science historians should be trained. In the meantime, I think there is a good argument for the use of multi-disciplinary teams of collaborators in the study of the history of science.

All in all, a very enjoyable conference. I wish there had been time to relax and have a swim in the bay, but I never got a moment. On the other hand, I managed to stock up on some free issues of my favourite publication in this area, the European Physical Journal (H).  On the plane home, I had a great read of a seriously good EPJH article by S.M. Bilenky on the history of neutrino physics. Consider me inspired….

by cormac at October 20, 2018 09:51 PM

October 17, 2018

Robert Helling - atdotde

Bavarian electoral system
Last Sunday, we had the election for the federal state of Bavaria. Since the electoral system is kind of odd (but not as odd as first past the post), I would like to analyse how some variations in the rules would have worked out, assuming the actual distribution of votes. So, first, here is how the seats are actually distributed: Each voter gets two ballots. On the first ballot, each party lists one candidate from the local constituency and you can select one. On the second ballot, you vote for a party list (it's even more complicated because there, too, you can select individual candidates to determine the positions on the list, but let's ignore that for today).

Then, in each constituency, the votes on ballot one are counted. The candidate with the most votes (as in first past the post) is elected to parliament directly (and is called a "direct candidate"). Then, overall, the votes for each party on both ballots (this is where the system differs from the federal elections) are summed up. All votes for parties with less than 5% of the grand total of all votes are discarded (strictly speaking including their direct candidates, but this is of no practical concern here). Let's call the rest the "reduced total". According to each party's fraction of this reduced total, the seats are distributed.

Of course, the first problem is that you can only distribute seats in integer multiples of 1. This is solved using the Hare-Niemeyer method: You first distribute the integer parts. This clearly leaves fewer seats open than there are parties. Those you then give to the parties whose rounding error to the integer below is greatest. Check out the Wikipedia page explaining how this can lead to a party losing seats when the total number of seats available is increased.
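As a sketch, the Hare-Niemeyer (largest remainder) step could look like this in Python (party names and vote counts are made up purely for illustration):

```python
def hare_niemeyer(votes, seats):
    """Largest-remainder apportionment as described above.

    votes: dict party -> vote count (parties below 5% already removed)
    seats: total number of seats to distribute
    """
    total = sum(votes.values())
    # ideal fractional share of seats for each party
    quota = {p: seats * v / total for p, v in votes.items()}
    # step 1: everyone gets the integer part
    alloc = {p: int(q) for p, q in quota.items()}
    # step 2: leftover seats go to the largest fractional remainders
    leftover = seats - sum(alloc.values())
    by_remainder = sorted(votes, key=lambda p: quota[p] - alloc[p], reverse=True)
    for p in by_remainder[:leftover]:
        alloc[p] += 1
    return alloc

# made-up vote counts, just to illustrate the rounding
print(hare_niemeyer({"A": 4160, "B": 3380, "C": 2460}, 10))
# -> {'A': 4, 'B': 3, 'C': 3}
```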

Because this is what happens in the next step: remember that we already allocated a number of seats to constituency winners in the first round. Those count towards the number of seats each party is supposed to get in step two according to its fraction of votes. Now, it can happen that a party has won more direct candidates than the seats allocated to it in step two. If that happens, seats are added to the total and distributed according to the rules of step two until each party has been allocated at least as many seats as it has direct candidates. This happens in particular if one party is stronger than all the others, leading to that party winning almost all direct candidates (in Bavaria this happened to the CSU, which won all direct candidates except five in Munich and one in Würzburg, which went to the Greens).
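The seat-adding step is then just a loop: enlarge the house until the proportional allocation covers every party's direct mandates. A simplified single-district sketch in Python (again with invented numbers, and ignoring the 5% hurdle):

```python
from math import floor

def proportional(votes, seats):
    """Hare-Niemeyer allocation: integer parts, then largest remainders."""
    total = sum(votes.values())
    quotas = {p: seats * v / total for p, v in votes.items()}
    alloc = {p: floor(q) for p, q in quotas.items()}
    rest = sorted(quotas, key=lambda p: quotas[p] - alloc[p], reverse=True)
    for p in rest[:seats - sum(alloc.values())]:
        alloc[p] += 1
    return alloc

def with_overhang(votes, direct, base_seats):
    """Grow the house size until each party's proportional allocation
    is at least its number of directly won constituency seats."""
    seats = base_seats
    while True:
        alloc = proportional(votes, seats)
        if all(alloc[p] >= direct.get(p, 0) for p in votes):
            return seats, alloc
        seats += 1

# Party A wins 7 constituencies but is only due 6 of 10 proportional
# seats, so the house grows to 11.
print(with_overhang({"A": 600, "B": 400}, {"A": 7}, 10))
# (11, {'A': 7, 'B': 4})
```
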

A final complication is that Bavaria is split into seven electoral districts, and the above procedure is carried out for each district separately. So there are seven separate rounds of rounding and seat-adding.

Sunday's election resulted in the following distribution of seats:

After the whole procedure, there are 205 seats, distributed as follows:

  • CSU 85 (41.5% of seats)
  • SPD 22 (10.7% of seats)
  • FW 27 (13.2% of seats)
  • GREENS 38 (18.5% of seats)
  • FDP 11 (5.4% of seats)
  • AFD 22 (10.7% of seats)
You can find all the vote totals on this page.

Now, for example, one can calculate the distribution without districts, throwing everything into a single super-district. Then there are 208 seats, distributed as follows:

  • CSU 85 (40.8%)
  • SPD 22 (10.6%)
  • FW 26 (12.5%)
  • GREENS 40 (19.2%)
  • FDP 12 (5.8%)
  • AFD 23 (11.1%)
You can see that the CSU in particular, the party with the biggest number of votes, profits from doing the rounding seven times rather than just once, while the last three parties would benefit from giving up districts.

But then there is actually an issue of negative vote weight: the Greens are particularly strong in Munich, where they managed to win five direct seats. If those seats had instead gone to the CSU (as elsewhere), the number of seats for Oberbayern, the district Munich belongs to, would have had to be increased to accommodate those additional direct candidates for the CSU. That would increase the weight of Oberbayern compared to the other districts, which would in turn benefit the Greens, as they are particularly strong in Oberbayern. So if I give all the direct candidates to the CSU (without modifying the total vote counts), I get the following distribution:
221 seats
  • CSU 91 (41.2%)
  • SPD 24 (10.9%)
  • FW 28 (12.6%)
  • GREENS 42 (19.0%)
  • FDP 12 (5.4%)
  • AFD 24 (10.9%)
That is, the Greens would have received a higher fraction of seats if they had won fewer constituencies. Voting for Green candidates in Munich actually hurt the party as a whole!

The effect is not so big that it actually changes majorities (CSU and FW are likely to form a coalition) but still, the constitutional court does not like (predictable) negative weight of votes. Let's see if somebody challenges this election and what that would lead to.

The perl script I used to do this analysis is here.

The above analysis in the last point is not entirely fair, as not winning a constituency means getting fewer votes, which are then missing from the grand total. Taking this into account makes the effect smaller. In fact, subtracting from the Greens the votes by which they led in the constituencies they won leads to an almost zero effect:

Seats: 220
  • CSU 91 (41.4%)
  • SPD 24 (10.9%)
  • FW 28 (12.7%)
  • GREENS 41 (18.6%)
  • FDP 12 (5.4%)
  • AFD 24 (10.9%)
Letting the Greens win München-Mitte (a newly created constituency that was supposed to act like a bad bank for the CSU, taking up central Munich's more left-leaning voters; do I hear somebody say "gerrymandering"?) yields

Seats: 217
  • CSU 90 (41.5%)
  • SPD 23 (10.6%)
  • FW 28 (12.9%)
  • GREENS 41 (18.9%)
  • FDP 12 (5.5%)
  • AFD 23 (10.6%)
Or letting them win all but Moosach and Würzburg-Stadt, where their leads were smallest:

Seats: 210

  • CSU 87 (41.4%)
  • SPD 22 (10.5%)
  • FW 27 (12.9%)
  • GREENS 40 (19.0%)
  • FDP 11 (5.2%)
  • AFD 23 (11.0%)

by Robert Helling at October 17, 2018 06:55 PM

October 15, 2018

Clifford V. Johnson - Asymptotia

Mindscape Interview!

And then two come along at once... Following on yesterday, another of the longer interviews I've done recently has appeared. This one was for Sean Carroll's excellent Mindscape podcast. This interview/chat is all about string theory, including some of the core ideas, its history, what that "quantum gravity" thing is anyway, and why it isn't actually a theory of (just) strings. Here's a direct link to the audio, and here's a link to the page about it on Sean's blog.

The whole Mindscape podcast has had some fantastic conversations, by the way, so do check it out on iTunes or your favourite podcast supplier!

I hope you enjoy it!!

-cvj Click to continue reading this post

The post Mindscape Interview! appeared first on Asymptotia.

by Clifford at October 15, 2018 06:47 PM

October 14, 2018

Clifford V. Johnson - Asymptotia

Futuristic Podcast Interview

For your listening pleasure: I've been asked to do a number of longer interviews recently. One of these was for the "Futuristic Podcast of Mark Gerlach", who interviews all sorts of people from the arts (normally) over to the sciences (well, he hopes to do more of that starting with me). Go and check out his show on iTunes. The particular episode with me can be found as episode 31. We talk about a lot of things, from how people get into science (including my take on the nature vs nurture discussion), through the changes in how people get information about science to the development of string theory, to black holes and quantum entanglement - and a host of things in between. We even talked about The Dialogues, you'll be happy to hear. I hope you enjoy listening!

(The picture? Not immediately relevant, except for the fact that I did cycle to the place the recording took place. I mostly put it there because I was fixing my bike not long ago and it is good to have a photo in a post. That is all.)

-cvj Click to continue reading this post

The post Futuristic Podcast Interview appeared first on Asymptotia.

by Clifford at October 14, 2018 07:22 PM

October 13, 2018

John Baez - Azimuth

Category Theory Course

I’m teaching a course on category theory at U.C. Riverside, and since my website is still suffering from reduced functionality I’ll put the course notes here for now. I taught an introductory course on category theory in 2016, but this one is a bit more advanced.

The hand-written notes here are by Christian Williams. They are probably best seen as a reminder to myself as to what I’d like to include in a short book someday.

Lecture 1: What is pure mathematics all about? The importance of free structures.

Lecture 2: The natural numbers as a free structure. Adjoint functors.

Lecture 3: Adjoint functors in terms of unit and counit.

Lecture 4: 2-Categories. Adjunctions.

Lecture 5: 2-Categories and string diagrams. Composing adjunctions.

Lecture 6: The ‘main spine’ of mathematics. Getting a monad from an adjunction.

Lecture 7: Definition of a monad. Getting a monad from an adjunction. The augmented simplex category.

Lecture 8: The walking monad, the augmented simplex category and the simplex category.

Lecture 9: Simplicial abelian groups from simplicial sets. Chain complexes from simplicial abelian groups.

Lecture 10: The Dold-Thom theorem: the category of simplicial abelian groups is equivalent to the category of chain complexes of abelian groups. The homology of a chain complex.

Lecture 12: The bar construction: getting a simplicial object from an adjunction. The bar construction for G-sets, previewed.

Lecture 13: The adjunction between G-sets and sets.

Lecture 14: The bar construction for groups.

Lecture 15: The simplicial set \mathbb{E}G obtained by applying the bar construction to the one-point G-set, its geometric realization EG = |\mathbb{E}G|, and the free simplicial abelian group \mathbb{Z}[\mathbb{E}G].

Lecture 16: The chain complex C(G) coming from the simplicial abelian group \mathbb{Z}[\mathbb{E}G], its homology, and the definition of group cohomology H^n(G,A) with coefficients in a G-module.

Lecture 17: Extensions of groups. The Jordan-Hölder theorem. How an extension of a group G by an abelian group A gives an action of G on A and a 2-cocycle c \colon G^2 \to A.

Lecture 18: Classifying abelian extensions of groups. Direct products, semidirect products, central extensions and general abelian extensions. The groups of order 8 as abelian extensions.

Lecture 19: Group cohomology. The chain complex for the cohomology of G with coefficients in A, starting from the bar construction, and leading to the 2-cocycles used in classifying abelian extensions. The classification of extensions of G by A in terms of H^2(G,A).

Lecture 20: Examples of group cohomology: nilpotent groups and the fracture theorem. Higher-dimensional algebra and homotopification: the nerve of a category and the nerve of a topological space. \mathbb{E}G as the nerve of the translation groupoid G/\!/G. BG = EG/G as the walking space with fundamental group G.

Lecture 21: Homotopification and higher algebra. Internalizing concepts in categories with finite products. Pushing forward internalized structures using functors that preserve finite products. Why the ‘discrete category on a set’ functor \mathrm{Disc} \colon \mathrm{Set} \to \mathrm{Cat}, the ‘nerve of a category’ functor \mathrm{N} \colon \mathrm{Cat} \to \mathrm{Set}^{\Delta^{\mathrm{op}}}, and the ‘geometric realization of a simplicial set’ functor |\cdot| \colon \mathrm{Set}^{\Delta^{\mathrm{op}}} \to \mathrm{Top} preserve products.

Lecture 22: Monoidal categories. Strict monoidal categories as monoids in \mathrm{Cat} or one-object 2-categories. The periodic table of strict n-categories. General ‘weak’ monoidal categories.

Lecture 23: 2-Groups. The periodic table of weak n-categories. The stabilization hypothesis. The homotopy hypothesis. Classifying 2-groups with G as the group of objects and A as the abelian group of automorphisms of the unit object in terms of H^3(G,A). The Eckmann–Hilton argument.

by John Baez at October 13, 2018 11:35 PM

September 29, 2018

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

History of Physics at the IoP

This week saw a most enjoyable conference on the history of physics at the Institute of Physics in London. The IoP has had an active subgroup in the history of physics for many years, complete with its own newsletter, but this was the group’s first official workshop for a long while. It proved to be a most informative occasion; I hope it is the first of many to come.


The Institute of Physics at Portland Place in London (made famous by writer Ian McEwan in the novel ‘Solar’, as the scene of a dramatic clash between a brilliant physicist of questionable integrity and a Professor of Science Studies)

There were plenty of talks on what might be called ‘classical history’, such as Maxwell, Kelvin and the Inverse Square law of Electrostatics (by Isobel Falconer of the University of St. Andrews) and Newton’s First Law – a History (by Paul Ranford of University College London), while the more socially-minded historian might have enjoyed talks such as Psychical and Optical Research; Between Lord Rayleigh’s Naturalism and Dualism (by Gregory Bridgman of the University of Cambridge) and The Paradigm Shift of Physics -Religion-Unbelief Relationship from the Renaissance to the 21st Century (by Elisabetta Canetta of St Mary’s University). Of particular interest to me were a number of excellent talks drawn from the history of 20th century physics, such as A Partial History of Cosmic Ray Research in the UK (by the leading cosmic ray physicist Alan Watson), The Origins and Development of Free-Electron Lasers in the UK (by Elaine Seddon of Daresbury Laboratory),  When Condensed Matter became King (by Joseph Martin of the University of Cambridge), and Symmetries: On Physical and Aesthetic Argument in the Development of Relativity (by Richard Staley of the University of Cambridge). The official conference programme can be viewed here.

My own talk, Interrogating the Legend of Einstein’s “Biggest Blunder”, was a brief synopsis of our recent paper on this topic, soon to appear in the journal Physics in Perspective. Essentially our finding is that, despite recent doubts about the story, the evidence suggests that Einstein certainly did come to view his introduction of the cosmological constant term to the field equations as a serious blunder and almost certainly did declare the term his “biggest blunder” on at least one occasion. Given his awareness of contemporaneous problems such as the age of the universe predicted by cosmologies without the term, this finding has some relevance to those of today’s cosmologists who seek to describe the recently-discovered acceleration in cosmic expansion without a cosmological constant. The slides for the talk can be found here.

I must admit I missed a trick at question time. Asked about other  examples of ‘fudge factors’ that were introduced and later regretted, I forgot the obvious one. In 1900, Max Planck suggested that energy transfer between oscillators somehow occurs in small packets or ‘quanta’ of energy in order to successfully predict the spectrum of radiation from a hot body. However, he saw this as a mathematical device and was not at all supportive of the more general postulate of the ‘light quantum’ when it was proposed by a young Einstein in 1905.  Indeed, Planck rejected the light quantum for many years.

All in all, a superb conference. It was also a pleasure to visit London once again. As always, I booked a cheap ‘ n’ cheerful hotel in the city centre, walkable to the conference. On my way to the meeting, I walked past Madame Tussauds, the Royal Academy of Music, and had breakfast at the tennis courts in Regent’s Park. What a city!

Walking past the Royal Academy on my way to the conference


Views of London over a quick dinner after the conference

by cormac at September 29, 2018 09:07 PM

September 27, 2018

Axel Maas - Looking Inside the Standard Model

Unexpected connections
The history of physics is full of ideas developed for one purpose that ended up being useful for an entirely different one. Quite often they failed their original purpose miserably, but became paramount for the new one. More recent examples include the first attempts to describe the weak interactions, which ended up describing the strong one. Also, string theory was originally invented for the strong interactions, and failed for this purpose. Now, well, it is the popular-science star, and a serious candidate for quantum gravity.

But failing is optional for having a second use. And we are just starting to discover a second use for our investigations of grand-unified theories. There, our research used a toy model. We did this because we wanted to understand a mechanism, and because doing the full story would have been much too complicated before we knew whether the mechanism works at all. But it turns out this toy theory may be an interesting theory in its own right.

And it may be interesting for a very different topic: dark matter. This is a hypothetical type of matter for which we see a lot of indirect evidence in the universe. But we are still mystified as to what it is (and whether it is matter at all). Of course, such mysteries draw our interest like a flame draws moths. Hence, our group in Graz is starting to push in this direction too, curious about what is going on. For now, we follow the most probable explanation: that there are additional particles making up dark matter. Then there are two questions: What are they? And do they interact with the rest of the world, and if yes, how? Aside from gravity, of course.

Next week I will go to a workshop where new ideas on dark matter will be explored, to get a better understanding of what is known. And in the course of preparing for this workshop I noticed this connection. I will actually present the idea at the workshop, as it forms a new class of possible explanations of dark matter. Perhaps not the right one, but at the current time one as plausible as many others.

And here is how it works. Theories of the grand-unified type were long expected to have a lot of massless particles. This was not bad for their original purpose, as we know quite a few such particles, like the photon and the gluons. However, our results showed that, with an improved treatment and a shift in paradigm, this is not always true. At least some of these theories have no massless particles.

But dark matter needs to be massive to influence stars and galaxies gravitationally. And, except for very special circumstances, there should not be additional massless dark particles, because otherwise the massive ones could decay into the massless ones. Then the mass is gone, and this does not work. That is why such theories had been excluded. But with our new results, they become feasible. Even more so, we have a lot of indirect evidence that dark matter is not just a single, massive particle. Rather, it needs to interact with itself, and there could indeed be many different dark matter particles. After all, if there is dark matter, it makes up four times more stuff in the universe than everything we can see. And what we see consists of many particles, so why should dark matter not do so as well? And this is also realized in our model.

And this is what it looks like in detail. The scenario I will describe (you can already download my talk, if you want to look for yourself, though it is somewhat technical) finds two different types of stable dark matter. Furthermore, they interact. And the great thing about our approach is that we can calculate this quite precisely, giving us a chance to make predictions. Still, we need to do this to make sure that everything agrees with what astrophysics tells us. Moreover, this setup gives us two additional particles, which we can couple to the Higgs through a so-called portal. Again, we can calculate this, and how everything comes together. This allows us to test the model not only by astronomical observations, but also at CERN. That is the basic idea. Now we need to do all the detailed calculations. I am quite excited to try this out :) - so stay tuned to see whether it actually makes sense, or whether the model will have to wait for another opportunity.

by Axel Maas at September 27, 2018 11:53 AM

September 25, 2018

Sean Carroll - Preposterous Universe

Atiyah and the Fine-Structure Constant

Sir Michael Atiyah, one of the world’s greatest living mathematicians, has proposed a derivation of α, the fine-structure constant of quantum electrodynamics. A preprint is here. The math here is not my forte, but from the theoretical-physics point of view, this seems misguided to me.

(He’s also proposed a proof of the Riemann hypothesis; I have zero insight to offer there.)

Caveat: Michael Atiyah is a smart cookie and has accomplished way more than I ever will. It’s certainly possible that, despite the considerations I mention here, he’s somehow onto something, and if so I’ll join in the general celebration. But I honestly think what I’m saying here is on the right track.

In quantum electrodynamics (QED), α tells us the strength of the electromagnetic interaction. Numerically it’s approximately 1/137. If it were larger, electromagnetism would be stronger, atoms would be smaller, etc; and inversely if it were smaller. It’s the number that tells us the overall strength of QED interactions between electrons and photons, as calculated by diagrams like these.
As Atiyah notes, in some sense α is a fundamental dimensionless numerical quantity like e or π. As such it is tempting to try to “derive” its value from some deeper principles. Arthur Eddington famously tried to derive exactly 1/137, but failed; Atiyah cites him approvingly.

But to a modern physicist, this seems like a misguided quest. First, because renormalization theory teaches us that α isn’t really a number at all; it’s a function. In particular, it’s a function of the total amount of momentum involved in the interaction you are considering. Essentially, the strength of electromagnetism is slightly different for processes happening at different energies. Atiyah isn’t even trying to derive a function, just a number.
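To make the running concrete, here is a toy one-loop estimate in Python that keeps only the electron in the vacuum-polarization loop; the full Standard Model calculation, with all charged particles included, gives roughly 1/128 at the Z mass:

```python
import math

alpha0 = 1 / 137.035999   # fine-structure constant at zero momentum transfer
m_e = 0.000510999         # electron mass in GeV
M_Z = 91.1876             # Z boson mass in GeV

def alpha(Q):
    """One-loop QED running with an electron loop only (valid for Q >> m_e):
    alpha(Q) = alpha(0) / (1 - (alpha(0)/(3*pi)) * ln(Q^2 / m_e^2))."""
    return alpha0 / (1 - (alpha0 / (3 * math.pi)) * math.log(Q**2 / m_e**2))

print(1 / alpha(M_Z))   # ~134.5, already noticeably larger than 1/137
```

Even this crude electron-only estimate shows the electromagnetic coupling growing with energy, which is the point: there is no single number called α until you specify the momentum scale.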

This is basically the objection given by Sabine Hossenfelder. But to be as charitable as possible, I don’t think it’s absolutely a knock-down objection. There is a limit we can take as the momentum goes to zero, at which point α is a single number. Atiyah mentions nothing about this, which should make us skeptical that he’s on the right track, but it’s conceivable.

More importantly, I think, is the fact that α isn’t really fundamental at all. The Feynman diagrams we drew above are the simple ones, but to any given process there are also much more complicated ones, e.g.

And in fact, the total answer we get depends not only on the properties of electrons and photons, but on all of the other particles that could appear as virtual particles in these complicated diagrams. So what you and I measure as the fine-structure constant actually depends on things like the mass of the top quark and the coupling of the Higgs boson. Again, nowhere to be found in Atiyah’s paper.

Most important, in my mind, is that not only is α not fundamental, QED itself is not fundamental. It’s possible that the strong, weak, and electromagnetic forces are combined into some Grand Unified theory, but we honestly don’t know at this point. However, we do know, thanks to Weinberg and Salam, that the weak and electromagnetic forces are unified into the electroweak theory. In QED, α is related to the “elementary electric charge” e by the simple formula α = e²/4π. (I’ve set annoying things like Planck’s constant and the speed of light equal to one. And note that this e has nothing to do with the base of natural logarithms, e = 2.71828.) So if you’re “deriving” α, you’re really deriving e.

But e is absolutely not fundamental. In the electroweak theory, we have two coupling constants, g and g’ (for “weak isospin” and “weak hypercharge,” if you must know). There is also a “weak mixing angle” or “Weinberg angle” θW relating how the original gauge bosons get projected onto the photon and W/Z bosons after spontaneous symmetry breaking. In terms of these, we have a formula for the elementary electric charge: e = g sinθW. The elementary electric charge isn’t one of the basic ingredients of nature; it’s just something we observe fairly directly at low energies, after a bunch of complicated stuff happens at higher energies.
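Plugging in rough numbers makes the point. The coupling values below are approximate values at the Z-mass scale, quoted for illustration only:

```python
import math

g = 0.652             # SU(2) "weak isospin" coupling, approximate value at M_Z
sin2_thetaW = 0.2312  # sin^2 of the weak mixing (Weinberg) angle at M_Z

e = g * math.sqrt(sin2_thetaW)  # elementary charge: e = g sin(theta_W)
alpha = e**2 / (4 * math.pi)    # fine-structure constant: alpha = e^2 / 4 pi

print(1 / alpha)  # ~128: the value of alpha at the Z scale, not 1/137
```

That this comes out near 1/128 rather than 1/137 is just the running discussed above: e, and hence α, is a derived, scale-dependent quantity built from g and θW.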

Not a whit of this appears in Atiyah’s paper. Indeed, as far as I can tell, there’s nothing in there about electromagnetism or QED; it just seems to be a way to calculate a number that is close enough to the measured value of α that he could plausibly claim it’s exactly right. (Though skepticism has been raised by people trying to reproduce his numerical result.) I couldn’t see any physical motivation for the fine-structure constant to have this particular value.

These are not arguments why Atiyah’s particular derivation is wrong; they’re arguments why no such derivation should ever be possible. α isn’t the kind of thing for which we should expect to be able to derive a fundamental formula; it’s a messy low-energy manifestation of a lot of complicated inputs. It would be like trying to derive a fundamental formula for the average temperature in Los Angeles.

Again, I could be wrong about this. It’s possible that, despite all the reasons why we should expect α to be a messy combination of many different inputs, some mathematically elegant formula is secretly behind it all. But knowing what we know now, I wouldn’t bet on it.

by Sean Carroll at September 25, 2018 08:03 AM

August 13, 2018

Andrew Jaffe - Leaves on the Line

Planck: Demographics and Diversity

Another aspect of Planck’s legacy bears examining.

A couple of months ago, the 2018 Gruber Prize in Cosmology was awarded to the Planck Satellite. This was (I think) a well-deserved honour for all of us who have worked on Planck during the more than 20 years since its conception, for a mission which confirmed a standard model of cosmology and measured the parameters which describe it to accuracies of a few percent. Planck is the latest in a series of telescopes and satellites dating back to the COBE Satellite in the early 90s, through the MAXIMA and Boomerang balloons (among many others) around the turn of the 21st century, and the WMAP Satellite (The Gruber Foundation seems to like CMB satellites: COBE won the Prize in 2006 and WMAP in 2012).

Well, it wasn’t really awarded to the Planck Satellite itself, of course: 50% of the half-million-dollar award went to the Principal Investigators of the two Planck instruments, Jean-Loup Puget and Reno Mandolesi, and the other half to the “Planck Team”. The Gruber site officially mentions 334 members of the Collaboration as recipients of the Prize.

Unfortunately, the Gruber Foundation apparently has some convoluted rules about how it makes such group awards, and the PIs were not allowed to split the monetary portion of the prize among the full 300-plus team. Instead, they decided to share the second half of the funds amongst “43 identified members made up of the Planck Science Team, key members of the Planck editorial board, and Co-Investigators of the two instruments.” Those words were originally on the Gruber site but in fact have since been removed — there is no public recognition of this aspect of the award, which is completely appropriate as it is the whole team who deserves the award. (Full disclosure: as a member of the Planck Editorial Board and a Co-Investigator, I am one of that smaller group of 43, chosen not entirely transparently by the PIs.)

I also understand that the PIs will use a portion of their award to create a fund that all members of the collaboration can draw on for Planck-related travel over the coming years, now that there is little or no governmental funding remaining for Planck work; those of us who receive a financial portion of the award will be encouraged to contribute to it as well (after, unfortunately, having to work out the tax implications of both receiving the prize and donating it back).

This seems like a reasonable way to handle a problem with no real fair solution, although, as usual in large collaborations like Planck, the communications about this left many Planck collaborators in the dark. (Planck also won the Royal Society 2018 Group Achievement Award which, because there is no money involved, could be uncontroversially awarded to the ESA Planck Team, without an explicit list. And the situation is much better than for the Nobel Prize.)

However, this seemingly reasonable solution reveals an even bigger, longer-standing, and wider-ranging problem: only about 50 of the 334 names on the full Planck team list (roughly 15%) are women. This is already appallingly low. Worse still, none of the 43 formerly “identified” members officially receiving a monetary prize are women (although we would have expected about 6 given even that terrible fraction). Put more explicitly, there is not a single woman in the upper reaches of Planck scientific management.

This terrible situation was also noted by my colleague Jean-Luc Starck (one of the larger group of 334) and Olivier Berné. As a slight corrective to this, it was refreshing to see Nature’s take on the end of Planck dominated by interviews with young members of the collaboration including several women who will, we hope, be dominating the field over the coming years and decades.

by Andrew at August 13, 2018 10:07 PM

Axel Maas - Looking Inside the Standard Model

Fostering an idea with experience
In the previous entry I wrote how hard it is to establish a new idea, if the only existing option to get experimental confirmation is to become very, very precise. Fortunately, this is not the only option we have. Besides experimental confirmation, we can also attempt to test an idea theoretically. How is this done?

The best possibility is to set up a situation in which the new idea creates a most spectacular outcome. In addition, it should be a situation in which older ideas yield a drastically different outcome. This actually sounds easier than it is. There are three issues to take care of.

The first two have something to do with a very important distinction: that between a theory and an observation. An observation is something we measure in an experiment, or calculate if we play around with models. An observation is always the outcome when we set something up initially and then look at it some time later. The theory should describe how the initial and the final stuff are related. This means that for every observation we look for a corresponding theory to explain it. On top of this comes the additional modern idea of physics that there should not be a separate theory for every observation. Rather, we would like to have a unified theory, i.e. one theory which explains all observations. This is not yet the case. But at least we have reduced it to a handful of theories. In fact, for anything going on inside our solar system we need so far just two: the standard model of particle physics and general relativity.

Coming back to our idea, we now have the following problem. Since we are doing a gedankenexperiment, we are allowed to choose any theory we like. But since we are just a bunch of people with a bunch of computers, we are not able to calculate all the possible observations a theory can describe, not to mention all possible observations of all theories. And it is here that the problem starts. The older ideas still exist because they are not bad; rather, they explain a huge amount of stuff. Hence, for many observations in any theory they will still be more than good enough. Thus, to find spectacular disagreement, we need to find not only a suitable theory, but also a suitable observation that shows the disagreement.

And now enters the third problem: we actually have to do the calculation to check whether our suspicion is correct. This is usually not a simple exercise. In fact, the effort needed can make such a calculation a complete master's thesis, and sometimes much more. Only after the calculation is complete do we know whether the observation and theory we chose were a good choice, because only then do we know whether the anticipated disagreement is really there. And it may be that our choice was not good, and we have to restart the process.

Sounds pretty hopeless? Well, this is actually one of the reasons why physicists are famed for their tolerance for frustration: such experiences are indeed inevitable. But fortunately it is not as bad as it sounds. And that has something to do with how we choose the observation (and the theory), which I have not yet specified. Just guessing would indeed lead to a lot of frustration.

What helps us hit the right theory and observation more often than not is insight and, especially, experience. The ideas we have tell us how theories function. I.e., our insights give us the ability to estimate what will come out of a calculation even without actually doing it. Of course, this will be a qualitative statement, i.e. one without exact numbers, and it will not always be right. But if our ideas are correct, it will usually work out. In fact, if we regularly estimated incorrectly, this would require us to reevaluate our ideas. And it is our experience which helps us get from insights to estimates.

This defines our process for testing our ideas. And this process can actually be traced out well in our research. E.g., in a paper from last year we collected many such qualitative estimates. They were based on some much older, much cruder estimates published several years back. In fact, the newer paper already included some quite involved semi-quantitative statements. We then used massive computer simulations to test our predictions. They were indeed confirmed as well as possible with the amount of computing power we had, as we reported in another paper. This gives us hope that we are on the right track.

So, the next step is to enlarge our testbed. For this, we have already come up with some first new ideas. These will be even more challenging to test, but it is possible. And so we continue the cycle.

by Axel Maas ( at August 13, 2018 02:46 PM

July 26, 2018

Sean Carroll - Preposterous Universe

Mindscape Podcast

For anyone who hasn’t been following along on other social media, the big news is that I’ve started a podcast, called Mindscape. It’s still young, but early returns are promising!

I won’t be posting each new episode here; the podcast has a “blog” of its own, and episodes and associated show notes will be published there. You can subscribe by RSS as usual, or there is also an email list you can sign up for. For podcast aficionados, Mindscape should be available wherever finer podcasts are served, including iTunes, Google Play, Stitcher, Spotify, and so on.

As explained at the welcome post, the format will be fairly conventional: me talking to smart people about interesting ideas. It won’t be all, or even primarily, about physics; much of my personal motivation is to get the opportunity to talk about all sorts of other interesting things. I’m expecting there will be occasional solo episodes that just have me rambling on about one thing or another.

We’ve already had a bunch of cool guests; check these out:

And there are more exciting episodes on the way. Enjoy, and spread the word!

by Sean Carroll at July 26, 2018 04:15 PM

July 19, 2018

Andrew Jaffe - Leaves on the Line

(Almost) The end of Planck

This week, we released (most of) the final set of papers from the Planck collaboration — the long-awaited Planck 2018 results (which were originally meant to be the “Planck 2016 results”, but everything takes longer than you hope…), available on the ESA website as well as the arXiv. More importantly for many astrophysicists and cosmologists, the final public release of Planck data is also available.

Anyway, we aren’t quite finished: those of you up on your roman numerals will notice that there are only 9 papers but the last one is “XII” — the rest of the papers will come out over the coming months. So it’s not the end, but at least it’s the beginning of the end.

And it’s been a long time coming. I attended my first Planck-related meeting in 2000 or so (and plenty of people had been working on the projects that would become Planck for a half-decade by that point). For the last year or more, the number of people working on Planck has dwindled as grant money has dried up (most of the scientists now analysing the data are doing so without direct funding for the work).

(I won’t rehash the scientific and technical background to the Planck Satellite and the cosmic microwave background (CMB), which I’ve been writing about for most of the lifetime of this blog.)

Planck 2018: the science

So, in the language of the title of the first paper in the series, what is the legacy of Planck? The state of our science is strong. For the first time, we present full results from both the temperature of the CMB and its polarization. Unfortunately, we don’t actually use all the data available to us — on the largest angular scales, Planck’s results remain contaminated by astrophysical foregrounds and unknown “systematic” errors. This is especially true of our measurements of the polarization of the CMB, unfortunately, which is probably Planck’s most significant limitation.

The remaining data are an excellent match for what is becoming the standard model of cosmology: ΛCDM, or “Lambda-Cold Dark Matter”, which is dominated, first, by a component which makes the Universe accelerate in its expansion (Λ, Greek Lambda), usually thought to be Einstein’s cosmological constant; and secondarily by an invisible component that seems to interact only by gravity (CDM, or “cold dark matter”). We have tested for more exotic versions of both of these components, but the simplest model seems to fit the data without needing any such extensions. We also observe the atoms and light which comprise the more prosaic kinds of matter we observe in our day-to-day lives, which make up only a few percent of the Universe.

Altogether, the sum of the densities of these components is just enough to make the curvature of the Universe exactly flat through Einstein’s General Relativity and its famous relationship between the amount of stuff (mass) and the geometry of space-time. Furthermore, we can measure the way the matter in the Universe is distributed as a function of the length scale of the structures involved. All of these are consistent with the predictions of the famous (or infamous) theory of cosmic inflation, which expanded the Universe when it was much less than one second old by factors of more than 10^20. This made the Universe appear flat (think of zooming into a curved surface) and expanded the tiny random fluctuations of quantum mechanics so quickly and so much that they eventually became the galaxies and clusters of galaxies we observe today. (Unfortunately, we still haven’t observed the long-awaited primordial B-mode polarization that would be a somewhat direct signature of inflation, although the combination of data from Planck and BICEP2/Keck gives the strongest constraint to date.)
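The flattening is easy to quantify with a standard textbook calculation (not specific to the Planck papers). The Friedmann equation relates the total density to the curvature through the scale factor a and Hubble rate H:

```latex
% Curvature term in the Friedmann equation:
\Omega_{\rm tot}(t) - 1 \;=\; \frac{k}{a^{2}H^{2}} .
% With H roughly constant during inflation, stretching the scale factor a
% by a factor Z \gtrsim 10^{20} suppresses any initial curvature as
\left|\Omega_{\rm tot}-1\right| \;\longrightarrow\;
  \frac{\left|\Omega_{\rm tot}-1\right|_{\rm initial}}{Z^{2}}
  \;\lesssim\; 10^{-40}\,\left|\Omega_{\rm tot}-1\right|_{\rm initial} .
```

So any reasonable initial curvature ends up unobservably small, which is the "zooming into a curved surface" statement made quantitative.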

Most of these results are encoded in a function called the CMB power spectrum, something I’ve shown here on the blog a few times before, but I never tire of the beautiful agreement between theory and experiment, so I’ll do it again: [figure: the Planck 2018 power spectra] (The figure is from the Planck “legacy” paper; more details are in others in the 2018 series, especially the Planck “cosmological parameters” paper.) The top panel gives the power spectrum for the Planck temperature data, the second panel the cross-correlation between temperature and the so-called E-mode polarization, the left bottom panel the polarization-only spectrum, and the right bottom the spectrum from the gravitational lensing of CMB photons due to matter along the line of sight. (There are also spectra for the B mode of polarization, but Planck cannot distinguish these from zero.) The points are “one sigma” error bars, and the blue curve gives the best fit model.

As an important aside, these spectra per se are not used to determine the cosmological parameters; rather, we use a Bayesian procedure to calculate the likelihood of the parameters directly from the data. On small scales (corresponding to 𝓁>30 since 𝓁 is related to the inverse of an angular distance), estimates of spectra from individual detectors are used as an approximation to the proper Bayesian formula; on large scales (𝓁<30) we use a more complicated likelihood function, calculated somewhat differently for data from Planck’s High- and Low-frequency instruments, which captures more of the details of the full Bayesian procedure (although, as noted above, we don’t use all possible combinations of polarization and temperature data to avoid contamination by foregrounds and unaccounted-for sources of noise).
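To see why "spectra" and "likelihood" are different objects, here is a toy version of the exact full-sky, noise-free temperature likelihood — a standard textbook result, not Planck's actual code, which additionally handles masks, noise, foregrounds, and polarization:

```python
# Toy full-sky, noise-free CMB temperature likelihood. For Gaussian
# fluctuations each multipole l contributes independently, and
#   -2 ln L = sum_l (2l+1) [ Chat_l / C_l + ln C_l ] + const,
# which is maximized when the theory spectrum C_l equals the observed Chat_l.
import math

def minus_two_ln_like(C_theory, C_hat, lmin=2):
    """-2 ln L (up to an irrelevant constant) of a theory spectrum C_theory
    given an observed spectrum C_hat, both indexed by multipole l."""
    return sum((2 * l + 1) * (C_hat[l] / C_theory[l] + math.log(C_theory[l]))
               for l in range(lmin, len(C_hat)))

# Sanity check: the likelihood peaks at C_theory = C_hat, so scaling the
# theory spectrum up or down makes -2 ln L larger.
C_hat = [0.0, 0.0] + [2500.0 / (l * (l + 1)) for l in range(2, 30)]
print(minus_two_ln_like(C_hat, C_hat)
      < minus_two_ln_like([1.3 * c for c in C_hat], C_hat))  # True
```

The spectrum values and multipole range here are made up for illustration; the point is only that the parameters are fit by maximizing a likelihood of this general shape rather than by chi-squared fitting the plotted band powers.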

Of course, not all cosmological data, from Planck and elsewhere, seem to agree completely with the theory. Perhaps most famously, local measurements of how fast the Universe is expanding today — the Hubble constant — give a value of H0 = (73.52 ± 1.62) km/s/Mpc (the units give how much faster something is moving away from us, in km/s, as it gets farther away, measured in megaparsecs (Mpc)); whereas Planck (which infers the value within a constrained model) gives (67.27 ± 0.60) km/s/Mpc. This is a pretty significant discrepancy and, unfortunately, it seems difficult to find an interesting cosmological effect that could be responsible for these differences. Rather, we are forced to expect that it is due to one or more of the experiments having some unaccounted-for source of error.
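A back-of-the-envelope way to see the size of the discrepancy, naively treating the two quoted results as independent Gaussian measurements (a simplification that glosses over the details of both analyses):

```python
# Naive significance of the Hubble-constant discrepancy, treating the two
# quoted results as independent Gaussian measurements (a simplification).
import math

h0_local, err_local = 73.52, 1.62    # km/s/Mpc, local distance-ladder value
h0_planck, err_planck = 67.27, 0.60  # km/s/Mpc, Planck within LCDM

diff = h0_local - h0_planck
combined_err = math.sqrt(err_local**2 + err_planck**2)
n_sigma = diff / combined_err
print(f"difference {diff:.2f} km/s/Mpc, roughly {n_sigma:.1f} sigma")
```

A gap of well over three sigma is why this is hard to dismiss as a statistical fluke.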

The term of art for these discrepancies is “tension” and indeed there are a few other “tensions” between Planck and other datasets, as well as within the Planck data itself: weak gravitational lensing measurements of the distortion of light rays due to the clustering of matter in the relatively nearby Universe show evidence for slightly weaker clustering than that inferred from Planck data. There are tensions even within Planck, when we measure the same quantities by different means (including things related to similar gravitational lensing effects). But, just as “half of all three-sigma results are wrong”, we expect that we’ve mis- or under-estimated (or to quote the no-longer-in-the-running-for-the-worst president ever, “misunderestimated”) our errors much or all of the time and should really learn to expect this sort of thing. Some may turn out to be real, but many will be statistical flukes or systematic experimental errors.

(If you’re looking for a briefer but more technical fly-through of the Planck results — from someone not on the Planck team — check out Renee Hlozek’s tweetstorm.)

Planck 2018: lessons learned

So, Planck has more or less lived up to its advanced billing as providing definitive measurements of the cosmological parameters, while still leaving enough “tensions” and other open questions to keep us cosmologists working for decades to come (we are already planning the next generation of ground-based telescopes and satellites for measuring the CMB).

But did we do things in the best possible way? Almost certainly not. My colleague (and former grad student!) Joe Zuntz has pointed out that we don’t use any explicit “blinding” in our statistical analysis. The point is to avoid our own biases when doing an analysis: you don’t want to stop looking for sources of error when you agree with the model you thought would be true. This works really well when you can enumerate all of your sources of error and then simulate them. In practice, most collaborations (such as the Polarbear team with whom I also work) choose to un-blind some results exactly to be able to find such sources of error, and indeed this is the motivation behind the scores of “null tests” that we run on different combinations of Planck data. We discuss this a little in an appendix of the “legacy” paper — null tests are important, but we have often found that a fully blind procedure isn’t powerful enough to find all sources of error, and in many cases (including some motivated by external scientists looking at Planck data) it was exactly low-level discrepancies within the processed results that have led us to new systematic effects. A more fully-blind procedure would be preferable, of course, but I hope this is a case of the great being the enemy of the good (or good enough). I suspect that those next-generation CMB experiments will incorporate blinding from the beginning.

Further, although we have released a lot of software and data to the community, it would be very difficult to reproduce all of our results. Nowadays, experiments are moving toward a fully open-source model, where all the software is publicly available (in Planck, not all of our analysis software was available to other members of the collaboration, much less to the community at large). This does impose an extra burden on the scientists, but it is probably worth the effort, and again, needs to be built into the collaboration’s policies from the start.

That’s the science and methodology. But Planck is also important as having been one of the first of what is now pretty standard in astrophysics: a collaboration of many hundreds of scientists (and many hundreds more of engineers, administrators, and others without whom Planck would not have been possible). In the end, we persisted, and persevered, and did some great science. But I learned that scientists need to learn to be better at communicating, both from the top of the organisation down, and from the “bottom” (I hesitate to use that word, since that is where much of the real work is done) up, especially when those lines of hoped-for communication are usually between different labs or Universities, very often between different countries. Physicists, I have learned, can be pretty bad at managing — and at being managed. This isn’t a great combination, and I say this as a middle-manager in the Planck organisation, very much guilty on both fronts.

by Andrew at July 19, 2018 06:51 PM

Andrew Jaffe - Leaves on the Line

Loncon 3

Briefly (but not brief enough for a single tweet): I’ll be speaking at Loncon 3, the 72nd World Science Fiction Convention, this weekend (doesn’t that website have a 90s retro feel?).

At 1:30 on Saturday afternoon, I’ll be part of a panel trying to answer the question “What Is Science?” As Justice Potter Stewart once said in a somewhat more NSFW context, the best answer is probably “I know it when I see it” but we’ll see if we can do a little better than that tomorrow. My fellow panelists seem to be writers, curators, philosophers and theologians (one of whom purports to believe that the “the laws of thermodynamics prove the existence of God” — a claim about which I admit some skepticism…) so we’ll see what a proper physicist can add to the discussion.

At 8pm, for participants without anything better to do on a Saturday night, I’ll be alone on stage discussing “The Random Universe”, giving an overview of how we can somehow learn about the Universe despite incomplete information and inherently random physical processes.

There is plenty of other good stuff throughout the convention, which runs from 14 to 18 August. Imperial Astrophysics will be part of “The Great Cosmic Show”, with scientists talking about some of the exciting astrophysical research going on here in London. And Imperial’s own Dave Clements is running the whole (not fictional) science programme for the convention. If you’re around, come and say hi to any or all of us.

by Andrew at July 19, 2018 12:02 PM

July 16, 2018

Tommaso Dorigo - Scientificblogging

A Beautiful New Spectroscopy Measurement
What is spectroscopy ? 
(A) the observation of ghosts by infrared visors or other optical devices
(B) the study of excited states of matter through observation of energy emissions

If you answered (A), you are probably using a lousy internet search engine; and btw, you are rather dumb. Ghosts do not exist. 

Otherwise you are welcome to read on. We are, in fact, about to discuss a cutting-edge spectroscopy measurement, performed by the CMS experiment using lots of proton-proton collisions by the CERN Large Hadron Collider (LHC). 

read more

by Tommaso Dorigo at July 16, 2018 09:13 AM

July 12, 2018

Matt Strassler - Of Particular Significance

“Seeing” Double: Neutrinos and Photons Observed from the Same Cosmic Source

There has long been a question as to what types of events and processes are responsible for the highest-energy neutrinos coming from space and observed by scientists.  Another question, probably related, is what creates the majority of high-energy cosmic rays — the particles, mostly protons, that are constantly raining down upon the Earth.

As scientists’ ability to detect high-energy neutrinos (particles that are hugely abundant, electrically neutral, very lightweight, and very difficult to observe) and high-energy photons (particles of light, though not necessarily of visible light) has become more powerful and precise, there’s been considerable hope of getting an answer to these questions.  One of the things we’ve been awaiting (and been disappointed a couple of times) is a violent explosion out in the universe that produces both high-energy photons and neutrinos at the same time, at a high enough rate that both types of particles can be observed at the same time coming from the same direction.

In recent years, there has been some indirect evidence that blazars — narrow jets of particles, pointed in our general direction like the barrel of a gun, and created as material swirls near and almost into giant black holes in the centers of very distant galaxies — may be responsible for the high-energy neutrinos.  Strong direct evidence in favor of this hypothesis has just been presented today.   Last year, one of these blazars flared brightly, and the flare created both high-energy neutrinos and high-energy photons that were observed within the same period, coming from the same place in the sky.

I have written about the IceCube neutrino observatory before; it’s a cubic kilometer of ice under the South Pole, instrumented with light detectors, and it’s ideal for observing neutrinos whose motion-energy far exceeds that of the protons in the Large Hadron Collider, where the Higgs particle was discovered.  These neutrinos mostly pass through IceCube undetected, but one in 100,000 hits something, and debris from the collision produces visible light that IceCube’s detectors can record.  IceCube has already made important discoveries, detecting a new class of high-energy neutrinos.

On Sept 22 of last year, one of these very high-energy neutrinos was observed at IceCube. More precisely, a muon created underground by the collision of this neutrino with an atomic nucleus was observed in IceCube.  To create the observed muon, the neutrino must have had a motion-energy tens of times larger than the motion-energy of each proton at the Large Hadron Collider (LHC).  And the direction of the neutrino’s motion is known too; it’s essentially the same as that of the observed muon.  So IceCube’s scientists knew where, on the sky, this neutrino had come from.

(This doesn’t work for typical cosmic rays; protons, for instance, travel in curved paths because they are deflected by cosmic magnetic fields, so even if you measure their travel direction when they arrive at Earth, you don’t then know where they came from. Neutrinos, being electrically neutral, aren’t affected by magnetic fields and travel in a straight line, just as photons do.)
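The scrambling of proton directions can be illustrated with a rough gyroradius estimate; the energy and field strength below are typical assumed values, not numbers from the original post:

```python
# Gyroradius of an ultrarelativistic proton in the galactic magnetic field:
# r = E / (e B c); expressing E in eV lets the elementary charge cancel.
# Illustrative, assumed numbers only.
E_eV = 1e18        # a very energetic cosmic-ray proton (assumed)
B_tesla = 3e-10    # ~3 microgauss, a typical galactic field strength (assumed)
c = 3.0e8          # speed of light, m/s
METERS_PER_KPC = 3.086e19

r_kpc = E_eV / (B_tesla * c) / METERS_PER_KPC
print(f"gyroradius ~ {r_kpc:.2f} kpc")
```

A gyroradius well below a kiloparsec is tiny compared with galactic (let alone intergalactic) distances, so the proton's arrival direction carries essentially no memory of its source.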

Very close to that direction is a well-known blazar (TXS-0506), four billion light years away (a good fraction of the distance across the visible universe).

The IceCube scientists immediately reported their neutrino observation to scientists with high-energy photon detectors.  (I’ve also written about some of the detectors used to study the very high-energy photons that we find in the sky: in particular, the Fermi/LAT satellite played a role in this latest discovery.) Fermi/LAT, which continuously monitors the sky, was already detecting high-energy photons coming from the same direction.   Within a few days the Fermi scientists had confirmed that TXS-0506 was indeed flaring at the time — already starting in April 2017 in fact, six times as bright as normal.  With this news from IceCube and Fermi/LAT, many other telescopes (including the MAGIC cosmic ray detector telescopes among others) then followed suit and studied the blazar, learning more about the properties of its flare.

Now, just a single neutrino on its own isn’t entirely convincing; is it possible that this was all just a coincidence?  So the IceCube folks went back to their older data to snoop around.  There they discovered, in their 2014-2015 data, a dramatic flare in neutrinos — more than a dozen neutrinos, seen over 150 days, had come from the same direction in the sky where TXS-0506 is sitting.  (More precisely, nearly 20 from this direction were seen, in a time period where normally there’d just be 6 or 7 by random chance.)  This confirms that this blazar is indeed a source of neutrinos.  And from the energies of the neutrinos in this flare, yet more can be learned about this blazar, and how it makes  high-energy photons and neutrinos at the same time.  Interestingly, so far at least, there’s no strong evidence for this 2014 flare in photons, except perhaps an increase in the number of the highest-energy photons… but not in the total brightness of the source.
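The "6 or 7 expected by chance, nearly 20 seen" statement can be turned into a rough chance probability with a Poisson tail; the exact inputs below (6.5 expected, 19 observed) are illustrative assumptions, and the collaboration's own significance estimate is more careful:

```python
# Rough Poisson estimate of how unlikely the 2014-2015 neutrino excess is,
# using illustrative numbers (6.5 expected background events, 19 observed).
import math

mu_bkg = 6.5   # expected background events in the ~150-day window (assumed)
n_obs = 19     # events actually seen, "nearly 20" (assumed)

# P(N >= n_obs) for a Poisson-distributed background with mean mu_bkg
p_tail = 1.0 - sum(math.exp(-mu_bkg) * mu_bkg**n / math.factorial(n)
                   for n in range(n_obs))
print(f"probability of a background fluke this large: ~{p_tail:.0e}")
```

With these inputs the fluke probability comes out at the level of a few times 10^-5, which is why the archival excess is taken as real evidence that the blazar emits neutrinos.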

The full picture, still emerging, tends to support the idea that the blazar arises from a supermassive black hole, acting as a natural particle accelerator, making a narrow spray of particles, including protons, at extremely high energy.  These protons, millions of times more energetic than those at the Large Hadron Collider, then collide with more ordinary particles that are just wandering around, such as visible-light photons from starlight or infrared photons from the ambient heat of the universe.  The collisions produce particles called pions, made from quarks and anti-quarks and gluons (just as protons are), which in turn decay either to photons or to (among other things) neutrinos.  And it’s those resulting photons and neutrinos which have now been jointly observed.

Since cosmic rays, the mysterious high-energy particles from outer space that are constantly raining down on our planet, are mostly protons, this is evidence that many, perhaps most, of the highest-energy cosmic rays are created in the natural particle accelerators associated with blazars. Many scientists have suspected that the most extreme cosmic rays are associated with the most active black holes at the centers of galaxies, and now we have evidence and more details in favor of this idea.  It now appears likely that this question will be answerable over time, as more blazar flares are observed and studied.

The announcement of this important discovery was made at the National Science Foundation by Francis Halzen, the IceCube principal investigator, Olga Botner, former IceCube spokesperson, Regina Caputo, the Fermi-LAT analysis coordinator, and Razmik Mirzoyan, MAGIC spokesperson.

The fact that both photons and neutrinos have been observed from the same source is an example of what people are now calling “multi-messenger astronomy”; a previous example was the observation in gravitational waves, and in photons of many different energies, of two merging neutron stars.  Of course, something like this already happened in 1987, when a supernova was seen by eye, and also observed in neutrinos.  But in this case, the neutrinos and photons have energies millions and billions of times larger!


by Matt Strassler at July 12, 2018 04:59 PM

July 08, 2018

Marco Frasca - The Gauge Connection

ICHEP 2018

The great high-energy physics conference ICHEP 2018 is over and, as usual, I will spend some words on it. The big collaborations at CERN presented their latest results. I think the most relevant of these is the evidence (3\sigma) that the Standard Model is at odds with the measurement of spin correlation in top-antitop quark pairs. More is given in the ATLAS communication. As expected, increasing precision proves to be rewarding.

About the Higgs particle, after the important announcement of the observation of the ttH process, both ATLAS and CMS are further improving their precision. For the signal strength they give the following results. For ATLAS (see here)

\mu=1.13\pm 0.05({\rm stat.})\pm 0.05({\rm exp.})^{+0.05}_{-0.04}({\rm sig. th.})\pm 0.03({\rm bkg. th})

and CMS (see here)

\mu=1.17\pm 0.06({\rm stat.})^{+0.06}_{-0.05}({\rm sig. th.})\pm 0.06({\rm other syst.}).

The news is that the errors have shrunk and the two agree. They show a small tension, 13% and 17% above the Standard Model value respectively, but the overall result is consistent with the Standard Model.
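The quoted tensions can be reproduced with a quick quadrature sum of the error components (taking the upper values of the asymmetric errors; a rough check, not the collaborations' own statistical treatment):

```python
# Rough pull of each measured signal strength mu from the SM value mu = 1,
# adding the quoted error components in quadrature (upper asymmetric errors).
import math

def pull_from_sm(mu, errors):
    """(mu - 1) in units of the combined uncertainty."""
    return (mu - 1.0) / math.sqrt(sum(e * e for e in errors))

atlas = pull_from_sm(1.13, [0.05, 0.05, 0.05, 0.03])  # stat, exp, sig th, bkg th
cms = pull_from_sm(1.17, [0.06, 0.06, 0.06])          # stat, sig th, other syst
print(f"ATLAS ~{atlas:.1f} sigma, CMS ~{cms:.1f} sigma above the SM")
```

Both come out around one and a half standard deviations, i.e. a mild and entirely unalarming excess.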

When the signal strength is unpacked into the contributions from the different processes, CMS claims some tension in the WW decay that should be kept under scrutiny in the future (see here). They presented results from 35.9{\rm fb}^{-1} of data and so, for the moment, there is no significant improvement with respect to the Moriond conference this year. The situation is rather better for the ZZ decay, where no tension appears and the agreement with the Standard Model is there in all its glory (see here). Things are quite different, but not too much, for ATLAS, as in this case they observe some tensions, but these are all below 2\sigma (see here). For the WW decay, ATLAS does not see anything above 1\sigma (see here).

So, although there is something to keep an eye on as the dataset grows, reaching 100 {\rm fb}^{-1} this year, the Standard Model is in good health in the Higgs sector, even if a lot remains to be answered and precision measurements are the main tool. The spin correlation in the tt pair is absolutely promising, and we should hope it will be confirmed as a discovery.


by mfrasca at July 08, 2018 10:58 AM