Particle Physics Planet


August 24, 2016

Christian P. Robert - xi'an's og

ABC by subset simulation

Last week, Vakilzadeh, Beck and Abrahamsson arXived a paper entitled “Using Approximate Bayesian Computation by Subset Simulation for Efficient Posterior Assessment of Dynamic State-Space Model Classes”. It follows an earlier paper by Beck and coauthors on ABC by subset simulation, a paper that I did not read. The model of interest is a hidden Markov model with continuous components and covariates (input), e.g. a stochastic volatility model. There is however a catch in the definition of the model, namely that the observable part of the HMM includes an extra measurement error term linked with the tolerance level of the ABC algorithm, an error term that is dependent across time, the vector of errors being constrained to a ball of radius ε. This reminds me of noisy ABC, obviously (and as acknowledged by the authors), but also of some ABC developments of Ajay Jasra and coauthors. Indeed, as in those papers, Vakilzadeh et al. use the raw data sequence to compute their tolerance neighbourhoods, which obviously bypasses the selection of a summary statistic [vector] but also may drown the signal in noise for long enough series.

“In this study, we show that formulating a dynamical system as a general hierarchical state-space model enables us to independently estimate the model evidence for each model class.”

Subset simulation is a nested technique that produces a sequence of nested balls (and related tolerances) such that the conditional probability of falling in the next ball given the previous one remains large enough, requiring a new round of simulation each time. This is somewhat reminiscent of nested sampling, even though the two methods differ. For subset simulation, estimating the level probabilities means that there also exists a converging (and even unbiased!) estimator for the evidence associated with different tolerance levels, which is not a particularly natural object unless one wants to turn it into a tolerance selection principle, which would be quite a novel perspective. But not one adopted in the paper, seemingly. Given that the application section truly compares models, I must have missed something there. (Blame the long flight from San Francisco to Sydney!) Interestingly, the different models in Table 4 relate to different tolerance levels, which may be a hindrance for the overall validation of the method.
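
To fix ideas, here is a minimal Python sketch of the evidence estimator behind subset simulation, written for a toy one-parameter model rather than the authors' state-space setting: the ABC evidence at tolerance ε is estimated as the product of the conditional probabilities of falling into each successive nested tolerance ball, each level being re-populated by a short ABC-MCMC run.

    # Toy subset-simulation estimate of the ABC "evidence", i.e. the probability
    # that simulated data fall within distance eps of the observed data, written
    # as a product of conditional level probabilities. A simplified sketch, not
    # the authors' algorithm: model, distance and conditional sampler are toys.
    import numpy as np

    rng = np.random.default_rng(0)
    y_obs = 1.5                     # toy "observed data"
    N, p0 = 2000, 0.1               # samples per level, target level probability
    eps_target = 0.01               # final ABC tolerance

    def simulate(theta):            # toy model: y | theta ~ N(theta, 1)
        return theta + rng.standard_normal(theta.shape)

    theta = rng.standard_normal(N)  # prior: theta ~ N(0, 1)
    y = simulate(theta)
    d = np.abs(y - y_obs)
    evidence, eps = 1.0, np.inf

    while eps > eps_target:
        eps = max(np.quantile(d, p0), eps_target)   # next nested tolerance ball
        keep = d <= eps
        evidence *= keep.mean()                     # conditional level probability
        if eps <= eps_target:
            break
        # refill the level: restart from the survivors, move with ABC-MCMC on (theta, y)
        idx = rng.choice(np.flatnonzero(keep), size=N)
        theta, y, d = theta[idx], y[idx], d[idx]
        for _ in range(5):
            th_prop = theta + 0.5 * rng.standard_normal(N)
            y_prop = simulate(th_prop)
            d_prop = np.abs(y_prop - y_obs)
            accept = (d_prop <= eps) & (rng.random(N) < np.exp(-0.5 * (th_prop**2 - theta**2)))
            theta = np.where(accept, th_prop, theta)
            y = np.where(accept, y_prop, y)
            d = np.where(accept, d_prop, d)

    print(f"estimated ABC evidence at eps = {eps_target}: {evidence:.2e}")

For this toy model the prior predictive is y ~ N(0, 2), so the exact target is roughly 2ε times the N(0, 2) density at 1.5, about 3·10⁻³, which the product of level probabilities should reproduce up to Monte Carlo error.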

I find the subsequent part on getting rid of uncertain prediction error model parameters of lesser [personal] interest, as it essentially replaces the marginal posterior on the parameters of interest by a BIC approximation, with the unsurprising conclusion that “the prior distribution of the nuisance parameter cancels out”.


Filed under: Books, Statistics, Travel Tagged: ABC, Bayesian model comparison, BIC, evidence, hidden Markov models, Laplace approximation, nested sampling, San Francisco, subset simulation, Sydney Harbour

by xi'an at August 24, 2016 10:16 PM

astrobites - astro-ph reader's digest

The Great Wall (of Galaxies, in Sloan)

Title:  Sloan Great Wall as a complex of superclusters with collapsing cores
Authors:  M. Einasto, H. Lietzen, M. Gramann, E. Tempel, E. Saar, L. J. Liivamägi, P. Heinämäki, P. Nurmi, J. Einasto
First Author’s Institution:  Tartu Observatory, Tõravere, Estonia
Status:  Accepted for publication in Astronomy and Astrophysics

The sheer scale of the universe is overwhelming.  Much of the universe is filled with a complex web of matter—what cosmologists like to call the “structure” of the universe.  Most everything we know and love—the wispy cloud in the sky, the bright nebulas that pierce the natal darkness of giant molecular clouds, the faint and distant galaxies—is gravitationally bound to something.  You can continually zoom out to larger and larger scales, and you’ll see that objects that looked lonely on one scale are often surrounded by similar objects: our Sun is but one of the 200 billion stars in the Milky Way Galaxy, which in turn is but one of about 50 galaxies in the Local Group, which in turn is one of 300-500 galaxy groups and clusters in the Laniakea Supercluster.  But this is where the cosmic, fractal structure of hierarchically gravitationally bound objects ends.  The largest structures we see in the universe—superclusters—enter into territory where gravity no longer reigns, and all collections of matter at larger scales are subject to the expansion of the universe.

These vast and massive superclusters are the objects of study of the authors of today’s paper.  Clues to the universe’s remaining secrets—such as the cosmological model and the processes that formed the present-day web—are encoded in the structures of superclusters.  Superclusters are also the birthplaces of galaxy clusters and the structures within them.  Thus the authors seek to study such properties of superclusters in detail.  They turn to the closest collection of galaxy-rich superclusters discovered in the Sloan Digital Sky Survey (SDSS), the survey that gives its name to the Sloan Great Wall (SGW) of galaxies, which contains galaxies spanning redshifts z of 0.04 to 0.12.


Figure 1. Galaxy groups (circles) in the Sloan Great Wall.  The groups have been color coded by the supercluster they were identified to be in.  The size of the circles denotes the spatial extent of the group as we would see them in the sky.  Note the elongated morphologies of the superclusters.  Figure taken from today’s paper.

The authors uncovered a rich hierarchy of structure in the Great Wall.  Using a luminosity-density cutoff, they find five superclusters, all massive—accounting for invisible gas and galaxies too faint to detect, they estimate that these superclusters range in mass from about 10^15 M☉ to a few 10^16 M☉—roughly a thousand to ten thousand times the mass of the Milky Way.  Two of the superclusters are visibly “rich” and contain 2000-3000 galaxies each (superclusters 1 and 2 in Figure 1), while three are “poor,” containing just a few hundred visible galaxies.

Using a novel method to identify how the galaxies cluster, they find that each supercluster in turn contains highly dense “cores” of galaxies.  The rich superclusters contain several cores, each with tens to hundreds of galaxy groups and masses ranging from 10^14 M☉ to a few 10^15 M☉.  These cores in turn contain one or more galaxy clusters.  Within these cores are what astronomers would consider the first “true” structures—extremely dense regions which no longer grow with the expansion of the universe but instead collapse into bound objects.  The authors derive density profiles for the cores in the rich superclusters and find that the inner 8 h^-1 Mpc (about 2000 times larger than the Milky Way; h is the normalized Hubble constant) of each core is or will soon be collapsing.

The superclusters are lush with mysterious order beyond their spatial hierarchy.  One of the rich superclusters (#1 in Figure 1) appears to have a filamentary shape and contains many red, old galaxies.  The other rich supercluster (#2) is more spidery, a conglomeration of chains and clusters of galaxies, all connected, and contains more blue, young galaxies.  These differences in shape and color could indicate that the superclusters have different dynamical histories.  The largest objects of our universe remain to be further explored!

by Stacy Kim at August 24, 2016 07:01 PM

Emily Lakdawalla - The Planetary Society Blog

Proxima Centauri b: Have we just found Earth’s cousin right on our doorstep?
What began as a tantalizing rumor has just become an astonishing fact. Today a group of thirty-one scientists announced the discovery of a terrestrial exoplanet orbiting Proxima Centauri. The discovery of this planet, Proxima Centauri b, is a huge breakthrough not just for astronomers but for all of us. Here’s why.

August 24, 2016 05:01 PM

Tommaso Dorigo - Scientificblogging

Anomaly!: Book News And A Clip
The book "Anomaly! Collider Physics and the Quest for New Phenomena at Fermilab" is going to press as we speak, and its distribution in bookstores is foreseen for the beginning of November. In the meantime, I am getting ready to present it in several laboratories and institutes. I am posting here the coordinates of events which are already scheduled, in case anybody lives nearby and/or has an interest in attending.
- On November 29th at 4PM there will be a presentation at CERN (more details will follow).


by Tommaso Dorigo at August 24, 2016 12:52 PM

Peter Coles - In the Dark

Interlude

Rather later than originally planned, I’ve finally got the nod to be a guest of the National Health Service for a while. I’ll therefore be taking a break from blogging until they’re done with me. Normal service will be resumed as soon as possible, probably, but for the time being there will now follow a short intermission.

 


by telescoper at August 24, 2016 07:10 AM

August 23, 2016

Christian P. Robert - xi'an's og

Think-A-Lot-Tots [book review]

I got contacted by an author, Thomai Dion, toward writing a review of her children’s books, The Animal Cell, The Neuron, and a Science Lab’ Notebook. And I thus asked for the books to get a look, which I got prior to my long flight from San Francisco to Sydney, most conveniently. [This is the second time this happens: I have been contacted once by an author of a most absurd book, a while ago.]

I started with the cell, which is a 17-page book with a few dozen sentences, and one or more pictures per page, pictures drawn in a sort of naïve fashion that should appeal to young children. Being decades away from being a kid and more than a decade away from raising a kid (happy 20th birthday, Rachel!), I have trouble assessing the ideal age of the readership or the relevance of introducing to them [all] 13 components of an animal cell, from the membrane to the cytoplasm, mentioning RNA and DNA without explaining what they are. Each of these components gets added to the cell picture as it comes, with a one-line description of its purpose. I wonder how much a kid can remember of this list, while (s)he may wonder where those invisible cells stand, and what they are for. (When checking on Google, I found this sequence of pages more convincing, if much more advanced. Again, I am not the best suited for assessing how kids would take it!)

The 21-page book about neurons is more explanatory than descriptive and I thus found it more convincing (again without much of an idea of how a kid would perceive it!). It starts from the brain sending signals to parts of the body and requiring a medium to do so, which happens to be made of neurons. Once again, though, I feel the book spends too much time on the description rather than on the function of the neurons, e.g., with no explanation of how the signal moves from the brain to the neuron sequence or from the last neuron to the muscle involved.

The (young) scientist notebook is the best book in the series in my opinion: it reproduces a lab book and helps a young kid formalise what (s)he thinks is a scientific experiment. As a kid, I did play at conducting “scientific” “experiments” with whatever object I happened to find, and later played with ready-made chemistry and biology sets, but having such a lab book would have been terrific! Setting the question of interest and the hypothesis or hypotheses behind it prior to running the experiment is a major lesson in scientific thinking that should be offered to every kid! However, since it contains no pictures but mostly blank spaces to be filled by the young reader, one could suggest that parents print such lab report sheets themselves.


Filed under: Books, Kids Tagged: biologie 2000, book review, cell, chimie 2000, neuron, notebook, scientific experiments, scientific mind, Think-A-Lot-Tots, Thomai Dion

by xi'an at August 23, 2016 10:16 PM

Emily Lakdawalla - The Planetary Society Blog

How big is that butte?
Whenever I share images from Curiosity, among the most common questions I’m asked is “what is the scale of this image?” With help from imaging enthusiast Seán Doran, I can answer that question for some of the Murray buttes.

August 23, 2016 10:09 PM

Peter Coles - In the Dark

Glamorgan versus Sussex

Another of life’s little coincidences came my way today in the form of a County Championship match between Glamorgan and Sussex in Cardiff. Naturally, being on holiday, and the SWALEC Stadium being very close to my house, I took the opportunity to see the first day’s play.


Sussex used the uncontested toss to put Glamorgan in to bat. It was a warm sunny day with light cloud and no wind. One would have imagined conditions would have been good for batting, but the Sussex skipper may have seen something in the pitch or, perhaps more likely, knew about Glamorgan’s batting frailties…

As it turned out, there didn’t seem to be much pace in the pitch, but there was definitely some swing and movement for the Sussex bowlers from the start. Glamorgan’s batsmen struggled early on, losing a wicket in the very first over, and slumped to 54 for 5 at one stage, recovering only slightly to 87 for 5 at lunch.

After the interval the recovery continued, largely because of Wagg (who eventually fell for an excellent 57) and Morgan who was unbeaten at the close. Glamorgan finished on 252 all out on the stroke of the tea interval. Not a great score, but a lot better than looked likely at 54 for 5.

During the tea interval I wandered onto the field and looked at the pitch, which had quite a bit of green in it:


Perhaps that’s why Sussex put Glamorgan in?

Anyway, when Sussex came out to bat it was a different story. Openers Joyce and Nash put on 111 for the first wicket, but Nelson did the trick for Glamorgan and Joyce was out just before stumps bringing in a nightwatchman (Briggs) to face the last couple of overs.

A full day’s cricket of 95 overs in the sunshine yielded 363 runs for the loss of 12 wickets. Not bad at all! It’s just a pity there were only a few hundred people in the crowd!

Sussex are obviously in a strong position but the weather forecast for the later part of this week is not good so they should push on tomorrow and try to force a result!


by telescoper at August 23, 2016 09:13 PM

Clifford V. Johnson - Asymptotia

Sometimes a Sharpie…

Sometimes a sharpie and a bit of bristol are the best defense against getting lost in the digital world*... (Click for larger view.)


(Throwing down some additional faces for a story in the book. Just wasn't feeling it in [...]

The post Sometimes a Sharpie… appeared first on Asymptotia.

by Clifford at August 23, 2016 08:31 PM

astrobites - astro-ph reader's digest

A Sonic Boom (Half) as Old as Time Itself

Title: ALMA-SZ Detection of a Galaxy Cluster Merger Shock at Half the Age of the Universe

Authors: K. Basu, M. Sommer, J. Erler, et al.

First Author’s Institution: Argelander Institut fur Astronomie, Bonn, Germany

Paper Status: Submitted to Astrophysical Journal – Letters

Galaxy clusters are among the most massive objects in the Universe. Some contain thousands of galaxies, with well over a trillion stars between them. And that’s only 5% of a cluster! The vast majority (around 85%) of a cluster’s mass is made up of dark matter. The remaining 10% is hot, very low-density gas (plasma) called the “Intracluster Medium”, or the ICM.


Fig 1: The “Bullet Cluster”, which is actually two colliding galaxy clusters. The red shows the hot ICM gas (measured in X-rays), while the blue shows the dark matter (measured with weak lensing). When the galaxies collided, friction caused the gas to stick towards the center, while the dark matter kept on going. Image C/O NASA Astronomy Picture of the Day

We can weigh all the components with a variety of observations. The stars in galaxies are visible at optical wavelengths, while the hot ICM emits X-rays that can be observed with satellites like Chandra. It’s more difficult to measure the dark matter, which by definition doesn’t emit light. But the technique of “weak lensing” – measuring how the dark matter gravitationally distorts the light coming from background galaxies – gives us a rough estimate of where the dark matter is.


Fig 2: An X-ray view of the “El Gordo” cluster, in orange/white. The shock front is highlighted with a white arc. The green contours show radio emission, including a large radio “relic” on top of the shock front. Figure 1 from Basu et al. 2016.

In a normal cluster, the three components (galaxies, ICM, and dark matter) all lie on top of one another. But when two clusters collide, the components can separate. Dark matter only feels the pull of gravity, but the ICM also experiences friction and gas pressure. The dark matter components whiz past one another while the ICM sticks together, as in the famous example of the Bullet Cluster, shown above. It’s as if two water balloons collided: the rubber (ICM) stays put in the middle, while the water (dark matter) is free to keep flying past.

The authors of today’s paper are catching this process in action. Specifically, they measure the shockwave in the ICM gas from the collision of two galaxy clusters, in “El Gordo”. The ICM, like any gas, has a natural sound speed (which depends on its temperature). When gas moves faster than its sound speed, it creates shockwaves, where the pressure builds significantly and the gas is superheated. This same process is what creates a sonic boom on Earth.


Fig. 3: The measurements from X-ray and radio are shown in red, with the model shown in green. The most important panels are the top two, showing radio emission and temperature as a function of radius. There is a sharp increase in emission and temperature at the “shock front”. Figure 3 from Basu et al. 2016

The authors combined two techniques to get the best constraints on the properties of the shock. First, they used X-ray measurements from Chandra to locate the shock as the brightest, hottest portion of the ICM (Figure 2). The hot ICM plasma can distort the light coming to us from the Cosmic Microwave Background, which is called the SZ (Sunyaev-Zeldovich) Effect. The authors added SZ measurements from the new ALMA radio array, which allowed them to precisely measure the gas pressure inside and outside the shockwave.

The authors generate a model of the shock which is designed to match the two sets of observations. This is shown in Figure 3. The observations show a sharp peak in radio emission and X-ray temperature, which confirms there is a shockwave that is heating up the ICM.

By comparing different models of the shock, the authors derive a “Mach number” M≈2.4, which says the shockwave is moving at 2.4 times the speed of sound. That’s quite a sonic boom, particularly given that the speed of sound in the hot plasma is around 2 million miles (3.2 million kilometers) per hour!
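
For a rough feel for those numbers (a back-of-the-envelope check of my own, not a computation from the paper): the sound speed of a fully ionized plasma is c_s = sqrt(gamma * k_B * T / (mu * m_p)), and with an assumed pre-shock temperature of about 3 keV this lands close to the two million miles per hour quoted above, with the Mach number then setting the shock speed.

    # Back-of-the-envelope shock speed in the El Gordo ICM. The pre-shock
    # temperature kT ~ 3 keV is an assumed round number, not a value from the paper.
    import math

    kT_keV = 3.0                         # assumed pre-shock ICM temperature [keV]
    gamma, mu = 5.0 / 3.0, 0.6           # adiabatic index, mean molecular weight
    m_p = 1.6726e-27                     # proton mass [kg]
    keV = 1.602e-16                      # 1 keV in joules

    c_s = math.sqrt(gamma * kT_keV * keV / (mu * m_p))   # sound speed [m/s]
    v_shock = 2.4 * c_s                                  # Mach 2.4 shock from the paper

    mph = 2.237                          # 1 m/s in miles per hour
    print(f"sound speed ~ {c_s/1e3:.0f} km/s (~{c_s*mph/1e6:.1f} million mph)")
    print(f"shock speed ~ {v_shock/1e3:.0f} km/s (~{v_shock*mph/1e6:.1f} million mph)")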

While this is not the first time astronomers have measured shockwaves in galaxy clusters, this is the oldest shockwave we’ve ever found. El Gordo is located about 7 billion light years away, which means we are seeing the shock happening when the universe was only half as old as it is now. With the new capabilities of ALMA, observations may be able to see even farther back, to the shocks created by the formation of the very first galaxy clusters.

by Ben Cook at August 23, 2016 05:05 PM

Symmetrybreaking - Fermilab/SLAC

Five facts about the Big Bang

It’s the cornerstone of cosmology, but what is it all about?

Astronomers Edwin Hubble and Milton Humason in the early 20th century discovered that galaxies are moving away from the Milky Way. More to the point: Every galaxy is moving away from every other galaxy on average, which means the whole universe is expanding. In the past, then, the whole cosmos must have been much smaller, hotter and denser. 

That description, known as the Big Bang model, has stood up against new discoveries and competing theories for the better part of a century. So what is this “Big Bang” thing all about?

 

Illustration by Sandbox Studio, Chicago with Corinne Mucha

The Big Bang happened everywhere at once. 

The universe has no center or edge, and every part of the cosmos is expanding. That means if we run the clock backward, we can figure out exactly when everything was packed together—13.8 billion years ago. Because every place we can map in the universe today occupied the same place 13.8 billion years ago, there wasn't a location for the Big Bang: Instead, it happened everywhere simultaneously.
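
As a sanity check on that number (my own arithmetic, not part of the article), the reciprocal of the measured expansion rate, the Hubble time 1/H0, already lands within a few percent of the quoted age; the exact 13.8 billion years comes from integrating the full expansion history.

    # Order-of-magnitude check: the Hubble time 1/H0. (The precise 13.8-billion-year
    # age comes from integrating the full Lambda-CDM expansion history.)
    H0 = 67.8                    # Hubble constant [km/s/Mpc]
    km_per_Mpc = 3.086e19        # kilometres in one megaparsec
    sec_per_Gyr = 3.156e16       # seconds in a billion years

    hubble_time = km_per_Mpc / H0 / sec_per_Gyr
    print(f"1/H0 ~ {hubble_time:.1f} billion years")   # ~14.4, close to 13.8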

 

Illustration by Sandbox Studio, Chicago with Corinne Mucha

The Big Bang may not describe the actual beginning of everything. 

“Big Bang” broadly refers to the theory of cosmic expansion and the hot early universe. However, sometimes even scientists will use the term to describe a moment in time—when everything was packed into a single point. The problem is that we don’t have either observations or theory that describes that moment, which is properly (if clumsily) called the “initial singularity.” 

The initial singularity is the starting point for the universe we observe, but there might have been something that came before. 

The difficulty is that the very hot early cosmos and the rapid expansion called “inflation” that likely happened right after the singularity wiped out most—if not all—of the information about any history that preceded the Big Bang. Physicists keep thinking of new ways to check for signs of an earlier universe, and though we haven’t seen any of them so far, we can’t rule it out yet.

 

Illustration by Sandbox Studio, Chicago with Corinne Mucha

The Big Bang theory explains where all the hydrogen and helium in the universe came from. 

In the 1940s, Ralph Alpher and George Gamow calculated that the early universe was hot and dense enough to make virtually all the helium, lithium and deuterium (hydrogen with a neutron attached) present in the cosmos today; later research showed where the primordial hydrogen came from. This is known as “Big Bang nucleosynthesis,” and it stands as one of the most successful predictions of the theory. The heavier elements (such as oxygen, iron and uranium) were formed in stars and supernova explosions.

The best evidence for the Big Bang is in the form of microwaves. Early on, the whole universe was dense enough to be completely opaque. But at a time roughly 380,000 years after the Big Bang, expansion spread everything out enough to make the universe transparent. 

The light released from this transition, known as the cosmic microwave background (CMB), still exists. It was first observed in the 1960s by Arno Penzias and Robert Wilson. That discovery cemented the Big Bang theory as the best description of the universe; since then, observatories such as WMAP and Planck have used the CMB to tell us a lot about the total structure and content of the cosmos.

 

Illustration by Sandbox Studio, Chicago with Corinne Mucha

One of the first people to think scientifically about the origin of the universe was a Catholic priest. 

In addition to his religious training and work, Georges Lemaître was a physicist who studied the general theory of relativity and worked out some of the conditions of the early cosmos in the 1920s and ’30s. His preferred metaphors for the origin of the universe were “cosmic egg” and “primeval atom,” but they never caught on, which is too bad, because …

 

Illustration by Sandbox Studio, Chicago with Corinne Mucha

It seems nobody likes the name "Big Bang." 

Until the 1960s, the idea of a universe with a beginning was controversial among physicists. The name “Big Bang” was actually coined by astronomer Fred Hoyle, who was the leading proponent of an alternative theory, where the universe continues forever without a beginning.

His shorthand for the theory caught on, and now we’re kind of stuck with it. Calvin and Hobbes’ attempt to get us to adopt “horrendous space kablooie” has failed so far.

 

The Big Bang is the cornerstone of cosmology, but it’s not the whole story. Scientists keep refining the theory of the universe, motivated by our observation of all the weird stuff out there. Dark matter (which holds galaxies together) and dark energy (which makes the expansion of the universe accelerate) are the biggest mysteries that aren't described by the Big Bang theory by itself. 

Our view of the universe, like the cosmos itself, keeps evolving as we discover more and more new things. But rather than fading away, our best explanation for why things are the way they are has remained—the fire at the beginning of the universe.

by Matthew R. Francis at August 23, 2016 01:00 PM

August 22, 2016

Christian P. Robert - xi'an's og

Dos de Mayo [book review]

Following a discussion I had with Victor Elvirà about Spanish books, I ordered a book by Arturo Pérez-Reverte called A Day of Wrath (Un día de cólera), apparently not translated into English. The day of wrath is the second of May, 1808, when the city of Madrid took up arms against the French occupation by Napoléon’s troops. An uprising that got crushed by Murat’s repression the very same day, but which led to all of Spain taking up arms against the occupation. The book is written out of historical accounts of the many participants in the uprising, from both the Madrilene and French sides. Because so many viewpoints are reported, some for a single paragraph before the victims die, the literary style is not particularly pleasant, but this is nonetheless a gripping book that I read within a single day while going (or trying to get) to San Francisco. And it is historically revealing of how unprepared the French troops were for an uprising by people mostly armed with navajas and a few hunting rifles, who still managed to hold parts of the town for most of a day, with the help of a single artillery battalion, while the rest of the troops stayed in their barracks. The author actually insists very much on that aspect, that the rebellion was mostly due to the action of the people, while the leading classes, the Army, and the clergy almost uniformly condemned it. Upper estimates of the number of deaths on that day (and the following days) are around 500 Madrilenes and 150 French troops, but the many stories running through the book give the impression of many more casualties.


Filed under: Books Tagged: Arturo Pérez-Reverte, Dos de Mayo, French history, Joachim Murat, Madrid, Napoléon Bonaparte, Spain, Spanish history, un dià de cólera

by xi'an at August 22, 2016 10:16 PM

Emily Lakdawalla - The Planetary Society Blog

JunoCam "Marble Movie" data available
Since a few days after entering orbit, JunoCam has been taking photos of Jupiter every fifteen minutes, accumulating a trove of data that can be assembled into a movie of the planet.

August 22, 2016 09:56 PM

astrobites - astro-ph reader's digest

The Trouble with H0 (or not?)

Title: The Trouble with H0
Authors: J.L. Bernal, L. Verde and A.G. Riess
First Author’s Institution: University of Barcelona, Spain
Status: Submitted to the Journal of Cosmology and Astroparticle Physics

Follow-up to a previous Astrobite: ‘Conflicts between Expansion History of the Local and Distant Universe‘.


Things are getting murkier.

Reporting in April 2016, I wrote about a team of astrophysicists – led by Nobel-winning astrophysicist Adam Riess – claiming a 2.4% determination of the expansion rate of our local universe. The paper claimed a significant difference in the value of H0 as compared with the Planck Collaboration value. Now Riess, together with Bernal and Verde, has pushed out another paper describing the ‘trouble’ with H0.

What PLANCK is up to

Standard cosmology – the Lambda-CDM model – is robust, and has, with a few exceptions, passed every test thrown at it. Most parameters have been constrained to error bars of ~1% or lower. A major observational effort has been led by astrophysicists to map the expansion history of the universe, and this is what today’s story is about.

Astrophysicists measure the expansion rate of the current (or local) universe (H0) through observations of Type Ia supernovae, gravitational lensing of quasars and similar probes. We measure the expansion history of the early universe through Cosmic Microwave Background (CMB) observations (H at an earlier epoch), with projects like PLANCK and the South Pole Telescope.


But there is an interesting caveat.

Expansion history is intricately linked to how we measure or understand distance scales between objects in the epoch we are analyzing. Our inhomogeneous universe is expanding, and distance scales change in a non-linear fashion between the time of the CMB and now. Over the last few years, PLANCK has given us tremendous insight into three major parameters with great statistical significance:

  1. Geometry of the universe – spatially flat, well, almost.
  2. H0 – 67.8 ± 0.9 km/s/Mpc – this depends on how we interpret the properties of the current or local universe.
  3. A number called rs (in scientific jargon, the sound horizon at the drag epoch) – a derived parameter that relies on the density and the nature of the different species in the early universe.

Both H0 and rs are absolutely essential in building the distance ladder in the universe, from early to current times. As mentioned before, PLANCK does these measurements using the CMB. The imprints of the first matter fluctuations embedded in the CMB – the radiation from the earliest epochs – give us an insight into the expansion history of the past. Using this information, astrophysical modeling lets us extrapolate that to the current expansion rate of the universe. In simple terms, from H at the redshift of the CMB (z ~ 1100) we extrapolate H at present times, i.e. H0. The problem is, PLANCK’s value of the extrapolated H0 doesn’t match local measurements.
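
To make the extrapolation concrete, here is a minimal sketch (mine, not the paper's analysis): in flat Lambda-CDM the expansion rate at any redshift is H(z) = H0 * sqrt(Om (1+z)^3 + (1 - Om)), so the CMB-era measurement fixes the model parameters and H0 is simply the value the model returns at z = 0. Assume a different model and the same CMB data extrapolate to a different H0.

    # Sketch of the model-dependent extrapolation from the CMB era to today: in flat
    # Lambda-CDM (radiation neglected, so only approximate near recombination),
    # H(z) = H0 * sqrt(Om * (1+z)**3 + (1 - Om)). Illustrative round numbers, not
    # the PLANCK chain output.
    import math

    H0, Om = 67.8, 0.31              # km/s/Mpc, matter density parameter

    def H(z):
        return H0 * math.sqrt(Om * (1 + z)**3 + (1 - Om))

    for z in (1100, 2, 0.5, 0):
        print(f"z = {z:>6}: H(z) ~ {H(z):.3e} km/s/Mpc")
    # The CMB pins down the high-redshift behaviour (and rs); H0 is whatever the
    # assumed model predicts at z = 0, so changing the model (extra neutrino
    # species, different dark energy) changes the extrapolated H0.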

 


Fig 2. Comparing H0 values from different probes: PLANCK data 2015 (in blue), PLANCK data 2015 with a tweaked neutrino model (in red), quasar measurements of gravitational lensing (in green, an independent probe) and supernovae measurements from this work (in black and grey band).

 

According to Bernal, Verde and Riess, and their collaborators, two things could lead to this:

  1. Assumptions about models of the physics of the early or the late universe. This could be a sign of new physics.
  2. Internal inconsistencies within the PLANCK data and analysis.

Measuring H0 and rs

Taking a break from PLANCK, do we have other probes that we use to measure H0 and rs? Yes, we do!

H0: We use supernova distance measurements from the Sloan Digital Sky Survey, or the Supernova Legacy Survey. Moreover, we have H0 measurements from earlier CMB projects, most notably WMAP, one of the most successful probes ever built (not to mention a Nobel winner too!).

rs: We get this number by probing a phenomenon in the early universe called Baryon Acoustic Oscillations, a signature of the distance scales of the CMB era.

Thus, H0 and rs provide absolute scales for measuring distances at opposite ends of the universe.


What the Paper proposes

This paper proposes the following plan to discuss issues related to the model-dependent calculation of rs and H0:

  1. Study changes in early-time physics and late-time physics separately, by slightly perturbing the currently established models of the universe (and seeing what happens!).
  2. A model-independent reconstruction of rs and H0.

 


Fig 3. A graph plotting rs and H0, with PLANCK data 2015 (in red), PLANCK data 2015 with a tweaked neutrino model (in green), the old CMB data from WMAP (in purple), a model-independent reconstruction of the expansion history (in blue), and supernovae measurements (in black and grey). The figure on the right has only ‘recommended’ PLANCK data, while the figure on the left has both ‘recommended’ and ‘preliminary’ data. It can be seen that the green ellipse in the right figure relieves the tension with currently accepted values of H0, from both the purple ellipse and the black bars.

 

The analysis finds that there is a possibility that rs in the early universe and H0 in the local universe have not been calibrated in a statistically consistent way. There is also a possibility that a slight increase in a parameter called Neff (the number of effective neutrino species in the early universe) relieves some of the tension (also mentioned in my previous bite). Another suggestion is that the PLANCK analysis – currently divided between a ‘preliminary’ and a ‘recommended baseline’ dataset – should only cover the ‘recommended baseline’ dataset.


Could there be new physics on the horizon? Perturbation of current models of the universe as well as improved model-independent probes of the local universe reveal a mixed answer.  Time to wait for publications from other CMB projects that are looking into this highly exciting problem right now. More on this matter to come in the next few months!

by Gourav Khullar at August 22, 2016 09:52 PM

Sean Carroll - Preposterous Universe

Maybe We Do Not Live in a Simulation: The Resolution Conundrum

Greetings from bucolic Banff, Canada, where we’re finishing up the biennial Foundational Questions Institute conference. To a large extent, this event fulfills the maxim that physicists like to fly to beautiful, exotic locations, and once there they sit in hotel rooms and talk to other physicists. We did manage to sneak out into nature a couple of times, but even there we were tasked with discussing profound questions about the nature of reality. Evidence: here is Steve Giddings, our discussion leader on a trip up the Banff Gondola, being protected from the rain as he courageously took notes on our debate over “What Is an Event?” (My answer: an outdated notion, a relic of our past classical ontologies.)


One fun part of the conference was a “Science Speed-Dating” event, where a few of the scientists and philosophers sat at tables to chat with interested folks who switched tables every twenty minutes. One of the participants was philosopher David Chalmers, who decided to talk about the question of whether we live in a computer simulation. You probably heard about this idea long ago, but public discussion of the possibility was recently re-ignited when Elon Musk came out as an advocate.

At David’s table, one of the younger audience members raised a good point: even simulated civilizations will have the ability to run simulations of their own. But a simulated civilization won’t have access to as much computing power as the one that is simulating it, so the lower-level sims will necessarily have lower resolution. No matter how powerful the top-level civilization might be, there will be a bottom level that doesn’t actually have the ability to run realistic civilizations at all.

This raises a conundrum, I suggest, for the standard simulation argument — i.e. not only the offhand suggestion “maybe we live in a simulation,” but the positive assertion that we probably do. Here is one version of that argument:

  1. We can easily imagine creating many simulated civilizations.
  2. Things that are that easy to imagine are likely to happen, at least somewhere in the universe.
  3. Therefore, there are probably many civilizations being simulated within the lifetime of our universe. Enough that there are many more simulated people than people like us.
  4. Likewise, it is easy to imagine that our universe is just one of a large number of universes being simulated by a higher civilization.
  5. Given a meta-universe with many observers (perhaps of some specified type), we should assume we are typical within the set of all such observers.
  6. A typical observer is likely to be in one of the simulations (at some level), rather than a member of the top-level civilization.
  7. Therefore, we probably live in a simulation.

Of course one is welcome to poke holes in any of the steps of this argument. But let’s for the moment imagine that we accept them. And let’s add the observation that the hierarchy of simulations eventually bottoms out, at a set of sims that don’t themselves have the ability to perform effective simulations. Given the above logic, including the idea that civilizations that have the ability to construct simulations usually construct many of them, we inevitably conclude:

  • We probably live in the lowest-level simulation, the one without an ability to perform effective simulations. That’s where the vast majority of observers are to be found.

Hopefully the conundrum is clear. The argument started with the premise that it wasn’t that hard to imagine simulating a civilization — but the conclusion is that we shouldn’t be able to do that at all. This is a contradiction, therefore one of the premises must be false.

This isn’t such an unusual outcome in these quasi-anthropic “we are typical observers” kinds of arguments. The measure on all such observers often gets concentrated on some particular subset of the distribution, which might not look like we look at all. In multiverse cosmology this shows up as the “youngness paradox.”
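
As a toy illustration of that concentration of measure (mine, not Carroll's): if every civilization capable of simulating runs N simulations, the bottom layer of a depth-d hierarchy always holds at least a fraction 1 - 1/N of all civilizations in the tree.

    # Toy count of civilizations per simulation level: each capable civilization
    # runs N simulations and the tree bottoms out after d levels. Numbers are
    # purely illustrative.
    N, d = 10, 5                     # simulations per civilization, depth of hierarchy

    counts = [N**level for level in range(d + 1)]     # level 0 = top civilization
    total = sum(counts)
    for level, c in enumerate(counts):
        print(f"level {level}: {c:>7} civilizations ({c / total:.1%})")
    # With N = 10 and d = 5 the bottom level holds ~90% of all civilizations in
    # the tree, so a "typical" observer sits in a simulation that cannot itself
    # run effective simulations.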

Personally I think that premise 1. (it’s easy to perform simulations) is a bit questionable, and premise 5. (we should assume we are typical observers) is more or less completely without justification. If we know that we are members of some very homogeneous ensemble, where every member is basically the same, then by all means typicality is a sensible assumption. But when ensembles are highly heterogeneous, and we actually know something about our specific situation, there’s no reason to assume we are typical. As James Hartle and Mark Srednicki have pointed out, that’s a fake kind of humility — by asserting that “we are typical” in the multiverse, we’re actually claiming that “typical observers are like us.” Who’s to say that is true?

I highly doubt this is an original argument, so probably simulation cognoscenti have debated it back and forth, and likely there are standard responses. But it illustrates the trickiness of reasoning about who we are in a very big cosmos.

by Sean Carroll at August 22, 2016 04:45 PM

Peter Coles - In the Dark

Poll – Do you Listen to Music while you Study?

A propos de nothing in particular, the other day I posted a little poll on Twitter inquiring whether or not people like to have music playing while they work. The responses surprised me, so I thought I’d try the same question on here (although I won’t spill the beans on here immediately). I’ve made the question quite general in the hope that as wide a range of people as possible (e.g. students, researchers and faculty) will feel able to respond. By “study” I mean anything that needs you to concentrate, including practical work, coding, data analysis, reading papers, writing papers, etc. It doesn’t mean any mindless activity, such as bureaucracy.

Please fill the poll in before reading my personal response, which comes after the “read more” tag.

Oh, and if you pick “Depends” then please let me know what it depends on through the comments box (e.g. type of music, type of study..)

Take Our Poll: http://polldaddy.com/poll/9502852

My response was definitely “no”. I often listen to music while preparing to work, but I find it too hard to concentrate if there’s music playing, especially if I’m trying to do calculations.

 


by telescoper at August 22, 2016 02:52 PM

Tommaso Dorigo - Scientificblogging

Post-Doctoral Positions In Experimental Physics For Foreigners
The Italian National Institute for Nuclear Physics offers 20 post-doctoral positions in experimental physics to foreigners with a PhD obtained no earlier than November 2008. 
So if you have a PhD (in Physics, but I guess other disciplines are also valid as long as your CV conforms), if you like Italy, or if you would like to come and work with me on the search for and study of the Higgs boson with the CMS experiment (or even if you would like to do something very different, in another town, with another experiment), you might consider applying!

The economic conditions are not extraordinary in an absolute sense, but you would still end up getting a salary more or less like mine, which in Italy sort of allows one to live a decent life.


by Tommaso Dorigo at August 22, 2016 01:11 PM

Christian P. Robert - xi'an's og

off to Australia

south bank of the Yarra river, Melbourne, July 21, 2012

Taking advantage of being in San Francisco, I flew yesterday to Australia over the Pacific, crossing the date line for the first time. The 15-hour Qantas flight to Sydney was remarkably smooth and quiet, with most passengers sleeping for most of the way, and it gave me a great opportunity to go over several papers I wanted to read and review. Over the next week or so, I will work with my friends and co-authors David Frazier and Gael Martin at Monash University (and undoubtedly enjoy the great food and wine scene!). Before flying back to Paris (alas via San Francisco rather than direct).


Filed under: pictures, Statistics, Travel, University life, Wines Tagged: ABC, ABC convergence, asymptotic normality, Australia, consistency, Melbourne, Monash University, Qantas, San Francisco, Yarra river

by xi'an at August 22, 2016 12:18 PM

Emily Lakdawalla - The Planetary Society Blog

Space in transition: How Obama's White House charted a new course for NASA
Our Horizon Goal series on NASA's human spaceflight program continues with part 3, in which newly elected President Barack Obama and his transition team search for a NASA administrator, commission a review of the Constellation program and decide whether to extend the life of the ISS.

August 22, 2016 11:04 AM

CERN Bulletin

Administrative Circular No. 22B (Rev. 2) - Compensation for hours of long-term shift work

Administrative Circular No. 22B (Rev. 2) entitled "Compensation for hours of long-term shift work",  approved by the Director-General following discussion in the Standing Concertation Committee meeting on 22 March 2016, will be available on 1 September 2016 via the following link: https://cds.cern.ch/record/2208538.

 

This revised circular cancels and replaces Administrative Circular No. 22B (Rev. 1) also entitled "Compensation for hours of long-term shift work" of March 2011.

This document contains minor changes to reflect the new career structure.

This circular will enter into force on 1 September 2016.

August 22, 2016 10:08 AM

CERN Bulletin

Administrative Circular No. 31 (Rev. 2) - International indemnity and non-resident allowance

Administrative Circular No. 31 (Rev. 2) entitled "International indemnity and non-resident allowance", approved by the Director-General following discussion in the Standing Concertation Committee meeting on 23 June 2016, will be available on 1 September 2016 via the following link: https://cds.cern.ch/record/2208547.

 

This revised circular cancels and replaces Administrative Circular No. 31 (Rev. 1) also entitled "International indemnity and non-resident allowance" of October 2007.

The main changes reflect the decision taken in the framework of the five-yearly review to extend eligibility for the international indemnity to all staff members, as well as to introduce a distinction between current staff members and those recruited as from 1 September 2016. For the latter, the international indemnity will be calculated as a percentage of the minimum salary of the grade into which they are recruited; the amount granted to the former will not change, and is now expressed as a percentage of the midpoint salary of the grade corresponding to their career path at the time of recruitment.

This circular will enter into force on 1 September 2016.

August 22, 2016 10:08 AM

CERN Bulletin

Staff Rules and Regulations - Modification No. 11 to the 11th edition

The following modifications to the Staff Rules and Regulations have been implemented:

 

  • In the framework of the Five-Yearly Review 2015, in accordance with the decisions taken by the Council in December 2015 (CERN/3213), relating to the new CERN career structure;
     
  • In accordance with the decisions taken by the Council in June 2016 (CERN/3247), relating to the status of apprentices and the remaining technical adjustments.

 

The modifications relating to the status of apprentices have entered into force on 1 August 2016 and those relating to the new CERN career structure and the technical adjustments will enter into force on 1 September 2016.

  • Preliminary Note, Contents - amendment of page iv.
     
  • Chapter I, General Provisions
    • Section 2 (Categories of members of the personnel) - amendment of pages 2 and 3.
       
  • Chapter II, Conditions of Employment and Association
    • Section 1 (Employment and association) - amendment of pages 11, 12, 13, 14 and 15.
    • Section 2 (Classification and merit recognition) – amendment of pages 16, 17 and 18.
    • Section 3 (Learning and development) - amendment of pages 19 and 20.
    • Section 4 (Leave) - amendment of pages 21, 22, 23, 25 and 26.
    • Section 5 (Termination of contract) - amendment of page 29.
     
  • Chapter III, Working Conditions
    • Section 1 (Working hours) – amendment of pages 30, 31 and 32.
       
  • Chapter IV, Social Conditions
    • Section 1 (Family and family benefits) - amendment of pages 37 and 38.
    • Section 2 (Social insurance cover) - amendment of pages 39 and 40.
     
  • Chapter V, Financial conditions
    • Section 1 (Financial benefits) – amendment of pages 41, 42, 43, 45, 46 and 47.
       
  • Chapter VI, Settlement of Disputes and Discipline
    • Section 1 (Settlement of disputes) - amendment of page 50.
    • Section 2 (Discipline) – amendment of pages 55, 56, 57 and 58.
       
  • Annex A1 (Periodic review of the financial and social conditions of members of the personnel) – amendment of page 62.
  • Annex RA1 (General definition of career paths) – page 66 is deleted.
  • Annex RA2 (Financial awards) - amendment of page 67.
  • Annex RA5 (Monthly basic salaries of staff members) - amendment of page 71.
  • Annex RA8 (International indemnity) – amendment of page 74.
  • Annex RA9 (Installation indemnity) – amendment of page 75.
  • Annex RA10 (Reinstallation indemnity) – amendment of page 76.



The complete updated electronic version of the Staff Rules and Regulation will be accessible via CDS on 1 September 2016.

August 22, 2016 10:08 AM

CERN Bulletin

Administrative Circular No. 23 (Rev. 4) - Special working hours

Administrative Circular No. 23 (Rev. 4) entitled "Special working hours", approved by the Director-General following discussion in the Standing Concertation Committee meeting on 22 March 2016, will be available on 1 September 2016 via the following link: https://cds.cern.ch/record/2208539.

 

This revised circular cancels and replaces Administrative Circular No. 23 (Rev. 3) also entitled "Special working hours" of January 2013.

This document contains modifications to reflect the new career structure and to ensure, consistent with practice, that compensation or remuneration for special working hours performed remotely is possible only in cases of emergency.

This circular will enter into force on 1 September 2016.

August 22, 2016 10:08 AM

CERN Bulletin

Administrative Circular No. 13 (Rev. 4) - Guarantees for representatives of the personnel

Administrative Circular No. 13 (Rev. 4) entitled "Guarantees for representatives of the personnel", approved by the Director-General following discussion in the Standing Concertation Committee meeting on 22 March 2016, will be available on 1 September 2016 via the following link: https://cds.cern.ch/record/2208527.

 

This revised circular cancels and replaces Administrative Circular No. 13 (Rev. 3) also entitled "Guarantees for representatives of the personnel" of January 2014.

This document contains a single change to reflect the terminology under the new career structure: the term "career path" is replaced by "grade".

This circular will enter into force on 1 September 2016.

August 22, 2016 10:08 AM

August 21, 2016

Peter Coles - In the Dark

Collector’s Item

I read in today’s Observer an interesting opinion piece by Martin Jacques, who was editor of a magazine called Marxism Today until it folded at the end of 1991. I was a subscriber, in fact, and for some reason I have kept my copy of the final edition all this time. Here’s the front cover:


I note that it says “Collector’s Item” on the front, though I’m not at all sure it’s worth any more now than the £1.80 I paid nearly 25 years ago!


by telescoper at August 21, 2016 01:32 PM

Peter Coles - In the Dark

An American doctor experiences the NHS. Again.

Remember that story a couple of years ago by an American doctor about her experiences of the NHS? Well, here’s a sequel…

Dr. Jen Gunter

With my cousin

Two years ago I wrote about my experience in a London emergency department with my son, Victor. That post has since been viewed > 450,000 times. There are over 800 comments with no trolls (a feat unto itself) and almost all of them express love for the NHS.

I was in England again this week. And yes, I was back in an emergency department, but this time with my cousin (who is English).

This is what happened.

My cousin loves high heels. As a former model she makes walking in the highest of heels look easy. However, cobblestone streets have challenges not found on catwalks and so she twisted her ankle very badly. Despite ice and elevation there was significant swelling and bruising and she couldn’t put any weight on her foot. I suggested we call her doctor and explain the situation. I was worried about a…



by telescoper at August 21, 2016 11:15 AM

August 20, 2016

ZapperZ - Physics and Physicists

Brain Region Responsible For Understanding Physics?
A group of researchers seem to think that they have found the region of the brain responsible for "understanding physics".

With both sets of experiments, the researchers found that when the subjects tried predicting physical outcomes, activity was most responsive in the premotor cortex and supplementary motor region of the brain: an area described as the brain’s action-planning region.

“Our findings suggest that physical intuition and action planning are intimately linked in the brain,” said Fischer. “We believe this might be because infants learn physics models of the world as they hone their motor skills, handling objects to learn how they behave. Also, to reach out and grab something in the right place with the right amount of force, we need real-time physical understanding.”

But is this really "understanding physics", though?

Zz.

by ZapperZ (noreply@blogger.com) at August 20, 2016 03:19 PM

ZapperZ - Physics and Physicists

Who Will Host The Next LHC?
Nature has an interesting article on the issues surrounding the politics, funding, and physics in building the next giant particle collider beyond the LHC.

The Japanese are the front-runners to host the ILC, but the Chinese have their own plans for a circular electron-positron collider that could be upgraded to a future proton-proton collider.

And of course, all of these will require quite a bit of chump change to fund, and will be an international collaboration.

The climate in the US continues to be very sour in building anything like this.

Zz.

by ZapperZ (noreply@blogger.com) at August 20, 2016 02:49 PM

August 19, 2016

astrobites - astro-ph reader's digest

Where are the IceCube neutrinos coming from?

Title: Star-forming galaxies as the origin of IceCube neutrinos: Reconciliation with Fermi-LAT gamma rays
Authors: Sovan Chakraborty and Ignacio Izguirre
First author’s institution: Indian Institute of Technology Guwahati/Max Planck Institute for Physics

Featured Image: courtesy of the IceCube Collaboration. 

The IceCube Collaboration (who work with data from their South Pole neutrino observatory) caused quite a splash in 2013 when they announced they had observed the first evidence of high-energy astrophysical neutrinos, or neutrinos that originated outside of our solar system.  Since that day, theorists all over the world have been speculating about what the sources of these neutrinos are.  Supernovae, which occur when a star explodes, and hypernovae, which are substantially more energetic supernovae, are two potential sources of interest.  Just one problem: while these processes produce neutrinos, they also produce a gamma-ray flux.  This flux has been investigated by gamma-ray observatories such as the Fermi-LAT, and the diffuse gamma-ray limits seem incompatible with what would be expected from hadronic models.  (Astrophysical sources can give off gamma rays via two different creation mechanisms, leptonic or hadronic; only hadronic models can also create neutrinos, so they are the focus here.)  The authors tackle this discrepancy in their paper.

Looking at hypernovae and supernovae remnants from star-forming galaxies, the authors note that the hadronic models appear to cause an overpopulation of gamma rays that defies the gamma-ray bounds calculated by Fermi-LAT.  However, there are some effects that can bring the gamma ray flux to a level compatible with the neutrino flux.

Gamma rays from extragalactic sources interact with the cosmic microwave background and the extragalactic background light (EBL) on their way to us.  The EBL is leftover background radiation from star-formation processes and is very poorly understood.  Uncertainties on EBL models are generally quite large.  The authors show that gamma-ray absorption as the particles travel intergalactically can be significant.
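
This absorption proceeds through pair production, gamma + gamma -> e+ e-: a high-energy gamma ray annihilates with a soft background photon once the pair threshold is crossed. A quick threshold estimate (a standard textbook relation, not a calculation from the paper) shows why TeV gamma rays are the ones eaten by the infrared EBL:

    # Threshold for photon-photon pair production (head-on collision):
    # E_gamma * E_background >= (m_e c^2)^2. Standard textbook relation; the
    # example gamma-ray energies are illustrative, not taken from the paper.
    m_e_c2 = 0.511e6                     # electron rest energy [eV]

    def min_target_energy(E_gamma_eV):
        """Lowest background-photon energy able to absorb a gamma ray of E_gamma."""
        return m_e_c2**2 / E_gamma_eV

    for E_TeV in (0.1, 1.0, 10.0):
        eps = min_target_energy(E_TeV * 1e12)
        print(f"{E_TeV:5.1f} TeV gamma ray: target photon >= {eps:.2f} eV")
    # ~1 TeV gamma rays pair-produce on ~0.3 eV (infrared EBL) photons, which is
    # why the diffuse flux that reaches Fermi-LAT depends on the poorly known EBL.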


Figure 1: The neutrino and gamma-ray spectra before taking into account the effects of both internal and intergalactic gamma-ray absorption.  The black line is the neutrino spectra; the black points are the IceCube data.  The blue line is the total gamma-ray flux, made up of the diffuse component from the electromagnetic cascade (pink) and unabsorbed diffuse gamma-ray flux that is correlated with the neutrino flux (orange). The green dashed line is the lower limit set by Fermi-LAT for the flux correlated with the neutrino spectra.  Note that the diffuse flux overpopulates this limit at some energies.

More importantly, they also studied the effect of internal gamma-ray absorption in starburst galaxies (a type of star-forming galaxy that is undergoing an exceptionally high rate of star formation).  Due to the large number of gamma rays present in a starburst galaxy, they are extremely likely to interact with their surroundings and form electron-positron pairs.  These in turn interact again to form lower-energy gamma rays (typically through a process such as Bremsstrahlung radiation), and the cycle continues.  This is known as an electromagnetic cascade.  The effect of this cascade is inhibited at lower energies.  This is because there is a large magnetic field so the electrons and positrons rapidly lose energy through synchrotron radiation.  Synchrotron radiation occurs when a charged particle moves through a magnetic field.  This can affect up to 90% of the gamma rays and dramatically change the diffuse gamma-ray flux in the GeV-TeV energy range.  The effect of this magnetic field has been largely neglected until now.


Figure 2: The same as Figure 1 (above) but with the effect of absorption taken into account.  Note that the gamma-ray flux (blue line) no longer overpopulates the upper limit on the contribution correlated with the neutrino spectra (green line).

The authors show that this changes the results, and that the tension between the gamma-ray and neutrino spectra disappears.  It is still possible for star-forming galaxies to produce the diffuse neutrino spectra that IceCube has observed, while staying within the bounds set by Fermi-LAT.


by Kelly Malone at August 19, 2016 09:16 PM

Symmetrybreaking - Fermilab/SLAC

The $100 muon detector

A doctoral student and his adviser designed a tabletop particle detector they hope to make accessible to budding young engineering physicists.

When Spencer Axani was an undergraduate physics student, his background in engineering led him to a creative pipe dream: a pocket-sized device that could count short-lived particles called muons all day.

Muons, heavier versions of electrons, are around us all the time, a byproduct of the cosmic rays that shoot out from supernovae and other high-energy events in space. When particles from those rays hit Earth’s atmosphere, they often decay into muons.

Muons are abundant on the surface of the Earth, but in Axani’s University of Alberta underground office, shielded by the floors above, they might be few and far between. A pocket detector would be the perfect gadget for measuring the difference.

Now a doctoral student at Massachusetts Institute of Technology, Axani has nearly made this device a reality. Along with an undergraduate student and Axani’s adviser, Janet Conrad, he’s developed a detector that sits on a desk and tallies the muons that pass by. The best part? The whole system can be built by students for under $100.

“Compared to most detectors, it’s by far the cheapest and smallest I’ve found,” Axani says. “If you make 100,000 of these, it starts becoming a very large detector. Instrumenting airplanes and ships would let you start measuring cosmic ray rates around the world.”

Particle physicists deal with cosmic rays all of the time, says Conrad, a physics professor at MIT. “Sometimes we love them, and sometimes we hate them. We love them if we can use them for calibration of our detectors, and we hate them if they provide a background for what it is that we are trying to do.”

Conrad used small muon detectors similar to the one Axani dreamed about when leading a neutrino experiment at Fermi National Accelerator Laboratory called MiniBooNE. When a professor at the University of Alberta proposed adding mini-muon detectors to another neutrino experiment, Axani was ready to pitch in.

The idea was to create muon detectors to add to IceCube, a neutrino detector built into the ice in Antarctica. They would be inserted into IceCube’s proposed low-energy upgrade, known as PINGU (Precision IceCube Next Generation Upgrade).

First, they needed a prototype. Axani got to work and quickly devised a rough detector housed in PVC pipe. “It looked pretty lab,” Axani said. It also gave off a terrible smell, the result of using a liquid called toluene as a scintillator, a material that gives off light when hit by a charged particle.

Over the next few months, Axani refined the device, switching to an odorless plastic scintillator and employing silicon photomultipliers (SiPM), which amplify the light from the scintillator into a signal that can be read. Adding some electronics allowed him to build a readout screen that ticks off the amount of energy from muon interactions and registers the time of the event.

Sitting in Axani’s office, the counter shows a rate of one muon every few seconds, which is what they expected from the size of the detector. Though it’s fairly constant, even minor changes like increased humidity or heavy rain can alter it.
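
As a sanity check, the textbook rule of thumb of roughly one muon per square centimetre per minute at sea level gives a rate in this ballpark for a palm-sized scintillator.  The 5 cm x 5 cm area below is a hypothetical size chosen only for illustration, not the detector's actual dimensions.

```python
# Back-of-the-envelope muon rate from the standard sea-level rule of thumb
# (~1 muon per cm^2 per minute).  The detector area is an assumed value.

flux = 1.0 / 60.0            # muons per cm^2 per second
area = 5.0 * 5.0             # cm^2, hypothetical scintillator area
rate = flux * area           # expected counts per second

print(f"~{rate:.2f} muons/s, i.e. one every ~{1/rate:.1f} s")
# Roughly one count every couple of seconds, consistent with the rate
# quoted above.
```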

Conrad and Axani have taken the detector down into the Boston subway, using the changes in the muon count to calculate the depth of the train tunnels. They’ve also brought it into the caverns of Fermilab’s neutrino experiments to measure the muon flux more than 300 feet underground.

Axani wants to take it to higher elevations—say, in an airplane at 30,000 feet above sea level—where muon counts should be higher, since the particles have had less time to decay after their creation in the atmosphere.
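
The altitude dependence follows from relativistic time dilation: a muon's 2.2-microsecond lifetime is stretched by its Lorentz factor, so the shorter the path from the production point, the larger the surviving fraction.  The production altitude and muon energy below are assumed values for illustration.

```python
import math

# Muon survival probability over a path length L: exp(-L / (gamma*beta*c*tau)).
# Altitude and energy are assumed, illustrative values.

tau = 2.2e-6                 # s, muon lifetime at rest
c = 3.0e8                    # m/s
m_mu = 0.1057                # GeV, muon mass

altitude = 15e3              # m, assumed production altitude
E = 4.0                      # GeV, assumed muon energy
gamma = E / m_mu
beta = math.sqrt(1.0 - 1.0 / gamma**2)

for path in (altitude, altitude - 9e3):      # to sea level vs ~30,000 ft
    p = math.exp(-path / (gamma * beta * c * tau))
    print(f"path {path/1e3:.0f} km: survival fraction ~ {p:.2f}")
# The shorter path down to cruising altitude leaves noticeably more muons.
```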

Fermilab physicist Herman White suggested taking one of the tiny detectors on a ship to study muon counts at sea. Mapping out the muon rate around the globe at sea has never been achieved, partly because conventional detectors are poorly suited to the job: liquid scintillator can be harmful to marine life, and the high voltage and power consumption of the large devices present a safety hazard.

While awaiting review of the PINGU upgrade, both Conrad and Axani see value in their project as an educational tool. With a low cost and simple instructions, the muon counter they created can be assembled by undergraduates and high school students, who would learn about machining, circuits, and particle physics along the way—no previous experience required.

“The idea was, students building the detectors would develop skills typically taught in undergraduate lab classes,” Spencer says. “In return, they would end up with a device useful for all sorts of physics measurements.”

Conrad has first-hand knowledge of how hands-on experience like this can teach students new skills. As an undergraduate at Swarthmore College, she took a course that taught all the basic abilities needed for a career in experimental physics: using a machine shop, soldering, building circuits. As a final project, she constructed a statue that she’s held on to ever since.

Creating the statue helped Conrad cement the lessons she learned in the class, but the product was abstract, not a functioning tool that could be used to do real science.

“We built a bunch of things that were fun, but they weren’t actually useful in any way,” Conrad says. “This [muon detector] takes you through all of the exercises that we did and more, and then produces something at the end that you would then do physics with.”

Axani and Conrad published instructions for building the detector on the open-source physics publishing site arXiv, and have been reworking the project with the aim of making it accessible to high-school students. No math more advanced than division and multiplication is needed, Axani says. And the parts don’t need to be new, meaning students could potentially take advantage of leftovers from experiments at places like Fermilab.

“This should be for students to build,” Axani says. “It’s a good project for creative people who want to make their own measurements.”

by Laura Dattaro at August 19, 2016 03:57 PM

The n-Category Cafe

Compact Closed Bicategories

I’m happy to announce that this paper has been published:

Abstract. A compact closed bicategory is a symmetric monoidal bicategory where every object is equipped with a weak dual. The unit and counit satisfy the usual ‘zig-zag’ identities of a compact closed category only up to natural isomorphism, and the isomorphism is subject to a coherence law. We give several examples of compact closed bicategories, then review previous work. In particular, Day and Street defined compact closed bicategories indirectly via Gray monoids and then appealed to a coherence theorem to extend the concept to bicategories; we restate the definition directly.

We prove that given a 2-category \(C\) with finite products and weak pullbacks, the bicategory of objects of \(C\), spans, and isomorphism classes of maps of spans is compact closed. As corollaries, the bicategory of spans of sets and certain bicategories of ‘resistor networks’ are compact closed.

This paper is dear to my heart because it forms part of Mike Stay’s thesis, for which I served as co-advisor. And it’s especially so because his proof that objects, spans, and maps-of-spans in a suitable 2-category form a compact symmetric monoidal bicategory turned out to be much harder than either of us was prepared for!

A problem worthy of attack
Proves its worth by fighting back.

In a compact closed category every object comes with morphisms called the ‘cap’ and ‘cup’, obeying the ‘zig-zag identities’. For example, in the category where morphisms are 2d cobordisms, the zig-zag identities say this:
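
(The zig-zag picture isn't reproduced here; written out algebraically, and suppressing associators and unitors, the unit \(\eta : I \to A \otimes A^*\) and counit \(\epsilon : A^* \otimes A \to I\) satisfy \((1_A \otimes \epsilon)\circ(\eta \otimes 1_A) = 1_A\) and \((\epsilon \otimes 1_{A^*})\circ(1_{A^*} \otimes \eta) = 1_{A^*}\).)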

But in a compact closed bicategory the zig-zag identities hold only up to 2-morphisms, which in turn must obey equations of their own: the ‘swallowtail identities’. As the name hints, these are connected to the swallowtail singularity, which is part of René Thom’s classification of catastrophes. This in turn is part of a deep and not yet fully worked out connection between singularity theory and coherence laws for ‘\(n\)-categories with duals’.

But never mind that: my point is that proving the swallowtail identities for a bicategory of spans in a 2-category turned out to be much harder than expected. Luckily Mike rose to the challenge, as you’ll see in this paper!

This paper is also gaining a bit of popularity for its beautiful depictions of the coherence laws for a symmetric monoidal bicategory. And symmetric monoidal bicategories are starting to acquire interesting applications.

The most developed of these are in mathematical physics — for example, 3d topological quantum field theory! To understand 3d TQFTs, we need to understand the symmetric monoidal bicategory where objects are collections of circles, morphisms are 2d cobordisms, and 2-morphisms are 3d cobordisms-between-cobordisms. The whole business of ‘modular tensor categories’ is immensely clarified by this approach. And that’s what this series of papers, still underway, is all about:

Mike Stay, on the other hand, is working on applications to computer science. That’s always been his focus — indeed, his Ph.D. was not in math but computer science. You can get a little taste here:

But there’s a lot more coming soon from him and Greg Meredith.

As for me, I’ve been working on applied math lately, like bicategories where the morphisms are electrical circuits, or Markov processes, or chemical reaction networks. These are, in fact, also compact closed symmetric monoidal bicategories, and my student Kenny Courser is exploring that aspect.

Basically, whenever you have diagrams that you can stick together to form new diagrams, and processes that turn one diagram into another, there’s a good chance you’re dealing with a symmetric monoidal bicategory! And if you’re also allowed to ‘bend wires’ in your diagrams to turn inputs into outputs and vice versa, it’s probably compact closed. So these are fundamental structures — and it’s great that Mike’s paper on them is finally published.

by john (baez@math.ucr.edu) at August 19, 2016 03:17 PM

August 18, 2016

Quantum Diaries

What is “Model Building”?

Hi everyone! It’s been a while since I’ve posted on Quantum Diaries. This post is cross-posted from ParticleBites.

One thing that makes physics, and especially particle physics, unique among the sciences is the split between theory and experiment. The role of experimentalists is clear: they build and conduct experiments, take data and analyze it using mathematical, statistical, and numerical techniques to separate signal from background. In short, they seem to do all of the real science!

So what is it that theorists do, besides sipping espresso and scribbling on chalk boards? In this post we describe one type of theoretical work called model building. This usually falls under the umbrella of phenomenology, which in physics refers to making connections between mathematically defined theories (or models) of nature and actual experimental observations of nature.

One common scenario is that one experiment observes something unusual: an anomaly. Two things immediately happen:

  1. Other experiments find ways to cross-check to see if they can confirm the anomaly.
  2. Theorists start to figure out the broader implications if the anomaly is real.

#1 is the key step in the scientific method, but in this post we’ll illuminate what #2 actually entails. The scenario looks a little like this:

An unusual experimental result (anomaly) is observed. One thing we would like to know is whether it is consistent with other experimental observations, but these other observations may not be simply related to the anomaly.


Theorists, who have spent plenty of time mulling over the open questions in physics, are ready to apply their favorite models of new physics to see if they fit. These are the models that they know lead to elegant mathematical results, like grand unification or a solution to the Hierarchy problem. Sometimes theorists are more utilitarian, and start with “do it all” Swiss army knife theories called effective theories (or simplified models) and see if they can explain the anomaly in the context of existing constraints.

Here’s what usually happens:


Usually the nicest models of new physics don’t fit! In the explicit example, the minimal supersymmetric Standard Model doesn’t include a good candidate to explain the 750 GeV diphoton bump.

Indeed, usually one needs to get creative and modify the nice-and-elegant theory to make sure it can explain the anomaly while avoiding other experimental constraints. This makes the theory a little less elegant, but sometimes nature isn’t elegant.


Candidate theory extended with a module (in this case, an additional particle). This additional model is “bolted on” to the theory to make it fit the experimental observations.

Now we’re feeling pretty good about ourselves. It can take quite a bit of work to hack the well-motivated original theory in a way that both explains the anomaly and avoids all other known experimental observations. A good theory can do a couple of other things:

  1. It points the way to future experiments that can test it.
  2. It can use the additional structure to explain other anomalies.

The picture for #2 is as follows:


A good hack to a theory can explain multiple anomalies. Sometimes that makes the hack a little more cumbersome. Physicists often develop their own sense of ‘taste’ for when a module is elegant enough.

Even at this stage, there can be a lot of really neat physics to be learned. Model-builders can develop a reputation for particularly clever, minimal, or inspired modules. If a module is really successful, then people will start to think about it as part of a pre-packaged deal:


A really successful hack may eventually be thought of as its own variant of the original theory.

Model-smithing is a craft that blends together a lot of the fun of understanding how physics works—which bits of common wisdom can be bent or broken to accommodate an unexpected experimental result? Is it possible to find a simpler theory that can explain more observations? Are the observations pointing to an even deeper guiding principle?

Of course—we should also say that sometimes, while theorists are having fun developing their favorite models, other experimentalists have gone on to refute the original anomaly.


Sometimes anomalies go away and the models built to explain them don’t hold together.

 

But here’s the mark of a really, really good model: even if the anomaly goes away and the particular model falls out of favor, a good model will have taught other physicists something really neat about what can be done within a given theoretical framework. Physicists get a feel for the kinds of modules that are out in the market (like an app store) and they develop a library of tricks to attack future anomalies. And if one is really fortunate, these insights can point the way to even bigger connections between physical principles.

I cannot help but end this post without one of my favorite physics jokes, courtesy of T. Tait:

A theorist and an experimentalist are having coffee. The theorist is really excited, she tells the experimentalist, “I’ve got it—it’s a model that’s elegant, explains everything, and it’s completely predictive.” The experimentalist listens to her colleague’s idea and realizes how to test those predictions. She writes several grant applications, hires a team of postdocs and graduate students, trains them, and builds the new experiment. After years of design, labor, and testing, the machine is ready to take data. They run for several months, and the experimentalist pores over the results.

The experimentalist knocks on the theorist’s door the next day and says, “I’m sorry—the experiment doesn’t find what you were predicting. The theory is dead.”

The theorist frowns a bit: “What a shame. Did you know I spent three whole weeks of my life writing that paper?”

by Flip Tanedo at August 18, 2016 10:53 PM

Clifford V. Johnson - Asymptotia

Stranger Stuff…

Ok all you Stranger Things fans. You were expecting a physicist to say a few things about the show weren't you? Over at Screen Junkies, they've launched the first episode of a focus on TV Science (a companion to the Movie Science series you already know about)... and with the incomparable host Hal Rudnick, I talked about Stranger Things. There are spoilers. Enjoy.


(Embed and link after the fold:)
[...] Click to continue reading this post

The post Stranger Stuff… appeared first on Asymptotia.

by Clifford at August 18, 2016 06:35 PM

Tommaso Dorigo - Scientificblogging

The Daily Physics Problem - 13, 14
While I do not believe that this series of posts can be really useful to my younger colleagues, who will in a month have to participate in a tough selection for INFN researchers in Rome, I think there is some value in continuing what I started last month. 
After all, as physicists we are problem solvers, and some exercise is good for all of us. Plus, the laypersons who occasionally visit this blog may actually enjoy fiddling with the questions. For them, though, I thought it would be useful to also get to see the answers to the questions, or at least _some_ answer.

read more

by Tommaso Dorigo at August 18, 2016 03:35 PM

ZapperZ - Physics and Physicists

Could You Pass A-Level Physics Now?
This won't tell you whether you would pass it, since A-Level Physics consists of several papers, including essay questions. But it is still an interesting test, and you might make a careless mistake if you don't read the questions carefully.

And yes, I did go through the test, and I got 13 out of 13 correct even though I guessed at one of them (I wasn't sure what "specific charge" meant and was too lazy to look it up). The quiz at the end asked if I was an actual physicist! :)

You're probably an actual physicist, aren't you?

Check it out. This is what those A-level kids had to contend with.

Zz.

by ZapperZ (noreply@blogger.com) at August 18, 2016 03:23 PM

astrobites - astro-ph reader's digest

What did the first collections of stars look like?

TITLE: A COMMON ORIGIN FOR GLOBULAR CLUSTERS AND ULTRA-FAINT DWARFS IN SIMULATIONS OF THE FIRST GALAXIES (draft pre-print)
AUTHORS: Massimo Ricotti , Owen H. Parry , Nickolay Y. Gnedin
FIRST AUTHOR INSTITUTION: Department of Astronomy, University of Maryland

Over the past century we have rapidly closed many gaps in our knowledge of the history of the Universe. We can see all the way back to the epoch of recombination, just 380,000 years after the Big Bang, thanks to the Cosmic Microwave Background (CMB). Some of the first galaxies, existing just a billion years after the Big Bang, are beginning to be revealed by the largest space and ground based telescopes. The time between these two epochs, however, is still out of reach. This is the epoch when the first stars formed, and gradually began to light up the universe.

The first stars, confusingly named Population III, form out of gas containing no metals (Astronomy parlance for elements heavier than Hydrogen and Helium). This makes them different from stars that form subsequently (known as Population I and II. I know, it’s backwards. Blame Walter Baade), as these later stars form from gas containing the metals expelled by the trailblazing Pop III stars.  Because Pop III stars form from metal free gas they have unique properties compared to stars in the universe today. One of the most striking features is their extreme mass, a hundred times the mass of our Sun in some theories. Such large stars subsequently have very short lifetimes, as they burn through their nuclear fuel rapidly. This in turn makes them difficult to detect; in the grand timeline of the universe, they blink in and out of existence. Until we get better observations one of the only ways to explore these objects, and test theories of their formation and evolution, is through simulations.

Gas Density

The projected density of gas for the biggest galaxies at z = 9 (roughly 500-600 Myr since the Big Bang). Where there is a disc, the top row shows a top down view of it and the bottom row shows a side view. Each image is 100 parsecs on a side.

Today’s paper is about one such simulation. The authors explored the kinds of environments that the first stars are born into, and what objects they evolve into. Unfortunately, simulating galaxies is difficult. They are large, and interact closely with their nearby environment, so you need to simulate a big volume. But ideally you would want to simulate each individual star too. Simulating this huge range of scales, from individual stars to clusters of galaxies, is currently impossible – no astrophysicist has access to a computer powerful enough – but the authors begin to push this limit by ramping up the resolution of their simulation so that each simulation particle represents a collection of stellar objects that are approximately 40 times the mass of the Sun. Previous simulations of small galaxy clusters could only resolve collections of tens of thousands of stars, so this is a big improvement.

Stellar Density

Each panel displays an identical view to Figure 1, but showing the projected density of stars. The disc is nowhere to be seen, and the stars extend to much greater distance than the gas.

The authors look at the morphology of their simulated objects, and distinguish a few trends. The gas tends to form a disc inside a dark matter halo, and star formation is confined to the disc (see figure 1). The stars themselves though are often spread out in a wider, spherical arrangement (see figure 2). They attribute this to the fact that the stars, after eating up most of their surrounding gas in the disc, become ‘unbound’ – in other words, they are no longer held together by their mutual gravity, and begin to separate out. They then become bound within the larger dark matter halo, and the expansion stops. These objects look suspiciously like Ultra Faint Dwarf Galaxies in the local universe. The size of the spheroid can also be extended through mergers, which ‘dynamically heat’ the object, adding some kinetic energy to all the constituent stars.

Certain objects tend to be smaller and more compact, containing only Pop II stars. They are triggered by nearby Pop III stars which, when they die, spread the metals they contain into the cosmos through powerful winds or supernovae. Gas polluted by these metals can cool efficiently, and therefore forms Pop II stars more easily. The authors suggest that these objects could be the first compact, bound stellar objects in the universe, but hesitate on what to call them – are they Globular Clusters, Ultra Compact Dwarfs, or something in between? And how many of these objects will actually survive to the present day, perhaps visible in our local galaxy as ‘fossil’ galaxies, relics directly from the first stellar objects? These simulations are only run up to a billion years after the big bang, so such questions will have to wait for bigger simulations in future that follow these objects to redshift zero.

Another peculiar object the authors identify contains only Pop III stars. Since these stars are so short lived, they will rapidly die, leaving behind a dark, apparently empty halo, though it will in fact be full of the remnants of these monster stars. Hints of such objects were found last year (here’s a summary of the results, and here’s the original paper if you’re interested / feeling brave).

One of the most pertinent and urgent reasons why we need to understand these objects is for the upcoming James Webb Space Telescope, which will begin to probe this era of first star formation. We need to understand the transition from Pop III to Pop II star formation in order to know how many Pop III stars JWST could expect to see.

Many of the questions raised above could be solved by simulations with higher resolution. But the results are an exciting step towards a full understanding of the environment in which the first stars formed. The intriguing common origin of compact star clusters and ultra-faint dwarfs is worthy of further investigation, which the authors plan to publish shortly. It remains to be seen whether any relics of these first collections of baby stars have survived to the present day, and are kicking around our galactic backyard, waiting to be discovered.

by Christopher Lovell at August 18, 2016 09:21 AM

August 17, 2016

Clifford V. Johnson - Asymptotia

New Style…


Style change. For a story-within-a-story in the book, I'm changing styles, going to a looser, more cartoony style, which sort of fits tonally with the subject matter in the story. The other day on the subway I designed the characters in that style, and I share them with you here. It's lots of fun to draw in this looser [...] Click to continue reading this post

The post New Style… appeared first on Asymptotia.

by Clifford at August 17, 2016 09:00 PM

Emily Lakdawalla - The Planetary Society Blog

OSIRIS-REx launch preview
Launch day is coming for NASA's next interplanetary explorer! OSIRIS-REx is on schedule for launch on September 8, 2016 at 19:05 EDT (16:05 PDT, 23:05 UTC) from Cape Canaveral Air Force Station. OSIRIS-REx is the first NASA planetary launch since MAVEN in 2013, and will be the last until InSight in 2018.

August 17, 2016 08:34 PM

August 16, 2016

Lubos Motl - string vacua and pheno

Cold summer, mushrooms, Czech wine, \(17\MeV\) boson
Central Europe is experiencing one of the coldest Augusts in recent memory. (For example, Czechia's top and oldest weather station in Prague-Klementinum broke the record cold high for August 11th, previously from 1869.) But it's been great for mushroom pickers.



When you spend just an hour in the forest, you may find dozens of mushrooms similar to this one-foot-in-diameter bay bolete (Czech: hřib hnědý [brown], Pilsner Czech: podubák). I don't claim that we broke the Czech record today.

Also, the New York Times ran a story on the Moravian (Southeastern Czechia's) wine featuring an entrepreneur who came from Australia to improve the business. He reminds me of Josef Groll, the cheeky Bavarian brewmaster who was hired by the dissatisfied dignified citizens of Pilsen in 1842 and improved the beer in the city by 4 categories. Well, the difference is that the Moravian wine has never really sucked, unlike Pilsen's beer, except for the Moravian beer served to the tourists from Prague, as NYT also explains.

Hat tip: the U.S. ambassador.




More seriously, the April 2016 UC Irvine paper that showed much more confidence in the bold 6.8-sigma claims about the evidence for a new \(17\MeV\) gauge boson was published in PRL, probably the most prestigious place where such papers may be published. UC Irvine released a press release to celebrate this publication.




Again, it would be wonderful if this new boson existed. But people around the Hungarian team have a record of making many similar claims in the past – I don't want to go into details here about which people, how they overlapped, and how many claims (see e.g. this discussion) – and \(17\MeV\) is a typical energy in the regime of nuclear physics, which is a very messy emergent discipline of science. So there's a big risk that numerous observable effects in this regime may be misattributed and misinterpreted.

The right "X boson" to explain the observation has to be "protophobic" – its interactions with electrons and neutrons must be nonzero but its interactions with protons have to vanish. This may look extremely unnatural but maybe it's not so bad.

by Luboš Motl (noreply@blogger.com) at August 16, 2016 05:42 PM

Symmetrybreaking - Fermilab/SLAC

The physics photographer

Fermilab’s house photographer of almost 30 years, Reidar Hahn, shares four of his most iconic shots.

Science can produce astounding images. Arcs of electricity. Microbial diseases, blown up in full color. The bones of long-dead beasts. The earth, a hazy blue marble in the distance. 

But scientific progress is not always so visually dramatic. In laboratories in certain fields, such as high-energy particle physics, the stuff that excites the scientists might be hidden within the innards of machinery or encrypted as data.

Those labs need visual translators to show to the outside world the beauty and significance of their experiments. 

Reidar Hahn specializes in bringing physics to life. As Fermilab’s house photographer, he has been responsible for documenting most of what goes on in and around the lab for the past almost 30 years. His photos reveal the inner workings of complicated machinery. They show the grand scale of astronomical studies. 

Hahn took up amateur photography in his youth, gaining experience during trips to the mountains out West. He attended Michigan Technological University to earn a degree in forestry and in technical communications. The editor of the school newspaper noticed Hahn’s work and recruited him; he eventually became the principal photographer. 

After graduating, Hahn landed a job with a group of newspapers in the suburbs of Chicago. He became interested in Fermilab after covering the opening of the Tevatron, Fermilab’s now-decommissioned landmark accelerator. He began popping in to the lab to look for things to photograph. Eventually, they asked him to stay.

Reidar says he was surprised by what he found at the lab. “I had this misconception that when I came here, there would be all these cool, pristine cleanrooms with guys in white suits and rubber gloves. And there are those things here. But a lot of it is concrete buildings with duct tape and cable ties on the floor. Sometimes, the best thing you can do for a photo is sweep the floor before you shoot.”

Hahn says he has a responsibility, when taking photos for the public, to show the drama of high-energy physics, to impart a sense of excitement for the state of modern science.

Below, he shares the techniques he used to make some of his iconic images for Fermilab.

 

Tevatron

Photo by Reidar Hahn, Fermilab

The Tevatron

“I knew they were going to be shutting down the Tevatron—our large accelerator—and I wanted to get a striking or different view of it. It was 2011, and it would be big news when the lab shut it down. 

“This was composed of seven different photos. You can’t keep the shutter open on a digital camera very long, so I would do a two-minute exposure, then do another two-minute exposure, then another. This shot was done in the dead of winter on a very cold day; It was around zero [degrees]. I was up on the roof probably a good hour.

“It took a little time to prepare and think out. I could have shot it in the daylight, but it wouldn’t have had as much drama. So I had fire trucks and security vehicles and my wife driving around in circles with all their lights on for about half an hour. The more lights the better. I was on the 16th floor roof of the high-rise [Fermilab’s Wilson Hall]. I had some travelling in other directions, because if they were all going counter-clockwise, you’d just see headlights on the left and taillights on the other end. They were slowly driving around—10, 15 miles an hour—and painting a circle [of light] with their headlights and taillights. 

“This image shows a sense of physics on a big scale. And it got a lot of play. It got a full double spread in Scientific American. It was in a lot of other publications.

“I think particle physics has some unique opportunities for photography because of these scale differences. We’re looking for the smallest constituents of matter using the biggest machines in the world to do it.”

 

SRF Cavities

Photo by Reidar Hahn, Fermilab

SRF cavities

“This was an early prototype superconducting [radio-frequency] cavity, which is used to accelerate particles. Every one of those donuts there forces a particle to go faster and faster. In 2005, these cavities were just becoming a point of interest here at Fermilab. 

“This was sitting in a well-lit room with a lot of junk around it. They didn’t want it moved. So I had to think how I could make this interesting. How could I give it some motion, some feel that there’s some energy here?

“So I [turned] all the room lights out. This whole photo was done with a flashlight. You leave the shutter open, and you move the light over the subject and paint the light onto the subject. It’s a way to selectively light certain things. This is about four exposures combined in Photoshop. I had a flashlight with different color gels on it, and I just walked back and forth. 

“I wanted something dynamic to the photo. It’s an accelerator cavity; it should look like something that provides movement. So in the end, I took the gels off, and I dragged the flashlight through the scene [to create the streak of light above the cavity]. It could represent a [particle] beam, but it just provides some drama. 

“A good photo can help communicate that excitement we all have here about science. Scientists may not use [this photo] as often for technical things, but we’re also trying to make science exciting for the non-scientists. And people can learn that some of these things are beautiful objects. They can see some kind of elegance to the equipment that scientists develop and build for the tools of discovery.”

 

Scintillating material

Photo by Reidar Hahn, Fermilab

Scintillating material

“This was taken back in ’93. It was done on film—we bought our first digital camera in 1998. 

“This is a chemist here at the lab, and she's worked a lot on different kinds of scintillating compounds. A scintillator is something that takes light in the invisible spectrum and turns it to the visible spectrum. A lot of physics detectors use scintillating material to image particles that you can't normally see.

“[These] are some test samples she had. She needed the photo to illustrate various types of wave-shifting scintillator. I wanted to add her to the photo because—it all goes back to my newspaper days—people make news, not things. But the challenge gets tougher when you have to add a person to the picture. You can’t have someone sit still for three minutes while making an exposure.

“There’s a chemical in this plastic that wave-shifts some type of particle from UV to visible light. So I painted the scintillating plastic with the UV light in the dark and then had Anna come over and sit down at the stool. I had two flashes set up to light her. [The samples] all light internally. That’s the beauty of scintillator materials. 

“But it goes to show you how we have to solve a lot of problems to actually make our experiments work.”

 

Cerro Tololo observatory

Photo by Reidar Hahn, Fermilab

Cerro Tololo Observatory

“This is the Cerro Tololo [Inter-American] Observatory in Chile, taken in October 2012. We have a lot of involvement in the Dark Energy Survey, [a collaboration to map galaxies and supernovae and to study dark energy and the expansion of the universe]. Sometimes we get to go to places to document things the lab’s involved in.

“This one is hundreds of photos stacked together. If you look close, you can see it’s a series of dots. A 30-second exposure followed by a second for the shutter to reset and then another 30-second exposure.  

“The Earth spins. When you point a camera around the night sky and happen to get the North Star or Southern Cross—this is the Southern Cross—in the shot, you can see how the Earth rotates: This is what people refer to as star-trails. It’s a good reminder that we live in a vast universe and we’re spinning through it.

“We picked a time when there’s no moon because it’s hard to do this kind of shot when the moon comes up. Up on the top of the mountain, they don’t want a lot of light. We walked around with little squeeze lights or no lights at all because we didn’t want to have anything affect the telescopes. But every once in awhile I would notice a car go down from the top, and as it would go around the corner, they’d tap the brake lights. We learned to use the brake lights to light the building. It gives some drama to the dome.

“You’ve got to improvise. You have to work with some very tight parameters and still come back with the shot.”
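
The stacking Hahn describes for the Tevatron and star-trail shots amounts to a 'lighten' blend: each pixel keeps the brightest value it shows in any frame.  A generic sketch of that technique (not Hahn's actual workflow, and with placeholder filenames) looks like this:

```python
import glob
import numpy as np
from PIL import Image

# "Lighten" stacking of a sequence of exposures: keep, per pixel, the
# brightest value seen across all frames.  Filenames are placeholders.

frames = sorted(glob.glob("exposure_*.jpg"))         # hypothetical frames
stack = np.asarray(Image.open(frames[0]), dtype=np.uint8)

for name in frames[1:]:
    frame = np.asarray(Image.open(name), dtype=np.uint8)
    stack = np.maximum(stack, frame)                 # lighten blend

Image.fromarray(stack).save("stacked.jpg")
```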

by Molly Olmstead at August 16, 2016 01:00 PM

The n-Category Cafe

Two Miracles of Algebraic Geometry

In real analysis you get just what you pay for. If you want a function to be seven times differentiable you have to say so, and there’s no reason to think it’ll be eight times differentiable.

But in complex analysis, a function that’s differentiable is infinitely differentiable, and its Taylor series converges, at least locally. Often this lets you extrapolate the value of a function at some faraway location from its value in a tiny region! For example, if you know its value on some circle, you can figure out its value inside. It’s like a fantasy world.

Algebraic geometry has similar miraculous properties. I recently learned about two.

Suppose I told you:

  1. Every group is abelian.
  2. Every function between groups that preserves the identity is a homomorphism.

You’d rightly say I’m nuts. But all this is happening in the category of sets. Suppose we go to the category of connected projective algebraic varieties. Then a miracle occurs, and the analogous facts are true:

  1. Every connected projective algebraic group is abelian. These are called abelian varieties.
  2. If \(A\) and \(B\) are abelian varieties and \(f : A \to B\) is a map of varieties with \(f(1) = 1\), then \(f\) is a homomorphism.

The connectedness is crucial here. So, as Qiaochu Yuan pointed out in our discussion of these issues on MathOverflow, the magic is not all about algebraic geometry: you can see signs of it in topology. As a topological group, an abelian variety is just a torus. Every continuous basepoint-preserving map between tori is homotopic to a homomorphism. But the rigidity of algebraic geometry takes us further, letting us replace ‘homotopic’ by ‘equal’.

This gives some interesting things. From now on, when I say ‘variety’ I’ll mean ‘connected projective complex algebraic variety’. Let \(Var_*\) be the category of varieties equipped with a basepoint, and basepoint-preserving maps. Let \(AbVar\) be the category of abelian varieties, and maps that preserve the group operation. There’s a forgetful functor

\(U: AbVar \to Var_*\)

sending any abelian variety to its underlying pointed variety. \(U\) is obviously faithful, but Miracle 2 says that it is a full functor.

Taken together, these mean that \(U\) is only forgetting a property, not a structure. So, shockingly, being abelian is a mere property of a variety.

Less miraculously, the functor \(U\) has a left adjoint! I’ll call this

\(Alb: Var_* \to AbVar\)

because it sends any variety \(X\) with basepoint to something called its Albanese variety.

In case you don’t thrill to adjoint functors, let me say what this mean in ‘plain English’ — or at least what some mathematicians might consider plain English.

Given any variety \(X\) with a chosen basepoint, there’s an abelian variety \(Alb(X)\) that deserves to be called the ‘free abelian variety on \(X\)’. Why? Because it has the following universal property: there’s a basepoint-preserving map called the Albanese map

\(i_X \colon X \to Alb(X)\)

such that any basepoint-preserving map \(f: X \to A\) where \(A\) happens to be abelian factors uniquely as \(i_X\) followed by a map

\(\overline{f} \colon Alb(X) \to A\)

that is also a group homomorphism. That is:

\(f = \overline{f} \circ i_X\)

Okay, enough ‘plain English’. Back to category theory.

As usual, the adjoint functors

\(U: AbVar \to Var_* , \qquad Alb: Var_* \to AbVar\)

define a monad

\(T = U \circ Alb : Var_* \to Var_*\)

The unit of this monad is the Albanese map. Moreover \(U\) is monadic, meaning that abelian varieties are just algebras of the monad \(T\).
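
(To recall the standard definitions: a monad comes with a unit \(\eta : 1 \Rightarrow T\) and a multiplication \(\mu : T^2 \Rightarrow T\) obeying \(\mu \circ \eta T = \mu \circ T\eta = 1\) and \(\mu \circ T\mu = \mu \circ \mu T\), and a \(T\)-algebra is an object \(X\) with a map \(a : TX \to X\) such that \(a \circ \eta_X = 1_X\) and \(a \circ Ta = a \circ \mu_X\).)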

All this is very nice, because it means the category theorist in me now understands the point of Albanese varieties. At a formal level, the Albanese variety of a pointed variety is a lot like the free abelian group on a pointed set!

But then comes a fact connected to Miracle 2: a way in which the Albanese variety is not like the free abelian group! \(T\) is an idempotent monad:

\(T^2 \cong T\)

Since the right adjoint \(U\) is only forgetting a property, the left adjoint \(Alb\) is only ‘forcing that property to hold’, and forcing it to hold again doesn’t do anything more for you!

In other words: the Albanese variety of the Albanese variety is just the Albanese variety.

(I am leaving some forgetful functors unspoken in this snappy statement: I really mean “the underlying pointed variety of the Albanese variety of the underlying pointed variety of \(X\) is isomorphic to the Albanese variety of \(X\)”. But forgetful functors often go unspoken in ordinary mathematical English: they’re not just forgetful, they’re forgotten.)

Four puzzles:

Puzzle 1. Where does Miracle 1 fit into this story?

Puzzle 2. Where does the Picard variety fit into this story? (There’s a kind of duality for abelian varieties, whose categorical significance I haven’t figured out, and the dual of the Albanese variety of \(X\) is called the Picard variety of \(X\).)

Puzzle 3. Back to complex analysis. Suppose that instead of working with connected projective algebraic varieties we used connected compact complex manifolds. Would we still get a version of Miracles 1 and 2?

Puzzle 4. How should we pronounce ‘Albanese’?

( I don’t think it rhymes with ‘Viennese’. I believe Giacomo Albanese was one of those ‘Italian algebraic geometers’ who always get scolded for their lack of rigor. If he’d just said it was a bloody monad…)

by john (baez@math.ucr.edu) at August 16, 2016 03:13 AM

August 15, 2016

Sean Carroll - Preposterous Universe

You Should Love (or at least respect) the Schrödinger Equation

Over at the twitter dot com website, there has been a briefly-trending topic #fav7films, discussing your favorite seven films. Part of the purpose of being on twitter is to one-up the competition, so I instead listed my #fav7equations. Slightly cleaned up, the equations I chose as my seven favorites are:

  1. {\bf F} = m{\bf a}
  2. \partial L/\partial {\bf x} = \partial_t ({\partial L}/{\partial {\dot {\bf x}}})
  3. {\mathrm d}*F = J
  4. S = k \log W
  5. ds^2 = -{\mathrm d}t^2 + {\mathrm d}{\bf x}^2
  6. G_{ab} = 8\pi G T_{ab}
  7. \hat{H}|\psi\rangle = i\partial_t |\psi\rangle

In order: Newton’s Second Law of motion, the Euler-Lagrange equation, Maxwell’s equations in terms of differential forms, Boltzmann’s definition of entropy, the metric for Minkowski spacetime (special relativity), Einstein’s equation for spacetime curvature (general relativity), and the Schrödinger equation of quantum mechanics. Feel free to Google them for more info, even if equations aren’t your thing. They represent a series of extraordinary insights in the development of physics, from the 1600’s to the present day.

Of course people chimed in with their own favorites, which is all in the spirit of the thing. But one misconception came up that is probably worth correcting: people don’t appreciate how important and all-encompassing the Schrödinger equation is.

I blame society. Or, more accurately, I blame how we teach quantum mechanics. Not that the standard treatment of the Schrödinger equation is fundamentally wrong (as other aspects of how we teach QM are), but that it’s incomplete. And sometimes people get brief introductions to things like the Dirac equation or the Klein-Gordon equation, and come away with the impression that they are somehow relativistic replacements for the Schrödinger equation, which they certainly are not. Dirac et al. may have originally wondered whether they were, but these days we certainly know better.

As I remarked in my post about emergent space, we human beings tend to do quantum mechanics by starting with some classical model, and then “quantizing” it. Nature doesn’t work that way, but we’re not as smart as Nature is. By a “classical model” we mean something that obeys the basic Newtonian paradigm: there is some kind of generalized “position” variable, and also a corresponding “momentum” variable (how fast the position variable is changing), which together obey some deterministic equations of motion that can be solved once we are given initial data. Those equations can be derived from a function called the Hamiltonian, which is basically the energy of the system as a function of positions and momenta; the results are Hamilton’s equations, which are essentially a slick version of Newton’s original {\bf F} = m{\bf a}.
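
For a single coordinate x with conjugate momentum p, Hamilton's equations take the familiar textbook form (spelled out here just for reference):

\dot{x} = \partial H/\partial p, \qquad \dot{p} = -\partial H/\partial x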

There are various ways of taking such a setup and “quantizing” it, but one way is to take the position variable and consider all possible (normalized, complex-valued) functions of that variable. So instead of, for example, a single position coordinate x and its momentum p, quantum mechanics deals with wave functions ψ(x). That’s the thing that you square to get the probability of observing the system to be at the position x. (We can also transform the wave function to “momentum space,” and calculate the probabilities of observing the system to be at momentum p.) Just as positions and momenta obey Hamilton’s equations, the wave function obeys the Schrödinger equation,

\hat{H}|\psi\rangle = i\partial_t |\psi\rangle.

Indeed, the \hat{H} that appears in the Schrödinger equation is just the quantum version of the Hamiltonian.

The problem is that, when we are first taught about the Schrödinger equation, it is usually in the context of a specific, very simple model: a single non-relativistic particle moving in a potential. In other words, we choose a particular kind of wave function, and a particular Hamiltonian. The corresponding version of the Schrödinger equation is

\displaystyle{\left[-\frac{1}{\mu^2}\frac{\partial^2}{\partial x^2} + V(x)\right]|\psi\rangle = i\partial_t |\psi\rangle}.

If you don’t dig much deeper into the essence of quantum mechanics, you could come away with the impression that this is “the” Schrödinger equation, rather than just “the non-relativistic Schrödinger equation for a single particle.” Which would be a shame.
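
To make the single-particle case concrete, here is a minimal numerical sketch that evolves a wave packet under this equation with a split-step method, assuming a harmonic potential and units with ħ = m = 1; the grid, time step, and initial state are arbitrary choices for illustration.

```python
import numpy as np

# Split-step evolution of i d(psi)/dt = [-(1/2) d^2/dx^2 + V(x)] psi,
# in units hbar = m = 1.  All numbers are arbitrary illustration choices.

N, L = 512, 40.0                          # grid points, box size
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)   # momentum grid

V = 0.5 * x**2                            # assumed harmonic potential
psi = np.exp(-(x - 2.0)**2) * (1.0 + 0j)  # displaced Gaussian packet
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize

dt, steps = 0.005, 2000
half_V = np.exp(-0.5j * V * dt)           # half-step in the potential
kin = np.exp(-0.5j * k**2 * dt)           # kinetic step in momentum space

for _ in range(steps):
    psi = half_V * psi
    psi = np.fft.ifft(kin * np.fft.fft(psi))
    psi = half_V * psi

prob = np.abs(psi)**2                     # |psi(x)|^2: probability density
print("norm =", np.sum(prob) * dx)        # stays ~1: evolution is unitary
```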

What happens if we go beyond the world of non-relativistic quantum mechanics? Is the poor little Schrödinger equation still up to the task? Sure! All you need is the right set of wave functions and the right Hamiltonian. Every quantum system obeys a version of the Schrödinger equation; it’s completely general. In particular, there’s no problem talking about relativistic systems or field theories — just don’t use the non-relativistic version of the equation, obviously.

What about the Klein-Gordon and Dirac equations? These were, indeed, originally developed as “relativistic versions of the non-relativistic Schrödinger equation,” but that’s not what they ended up being useful for. (The story is told that Schrödinger himself invented the Klein-Gordon equation even before his non-relativistic version, but discarded it because it didn’t do the job for describing the hydrogen atom. As my old professor Sidney Coleman put it, “Schrödinger was no dummy. He knew about relativity.”)

The Klein-Gordon and Dirac equations are actually not quantum at all — they are classical field equations, just like Maxwell’s equations are for electromagnetism and Einstein’s equation is for the metric tensor of gravity. They aren’t usually taught that way, in part because (unlike E&M and gravity) there aren’t any macroscopic classical fields in Nature that obey those equations. The KG equation governs relativistic scalar fields like the Higgs boson, while the Dirac equation governs spinor fields (spin-1/2 fermions) like the electron and neutrinos and quarks. In Nature, spinor fields are a little subtle, because they are anticommuting Grassmann variables rather than ordinary functions. But make no mistake; the Dirac equation fits perfectly comfortably into the standard Newtonian physical paradigm.
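
For reference, in units with \hbar = c = 1 the free versions of these classical field equations take the standard forms

(\partial_t^2 - \nabla^2 + m^2)\phi = 0, \qquad (i\gamma^\mu \partial_\mu - m)\psi = 0

for a scalar field φ and a spinor field ψ respectively.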

For fields like this, the role of “position” that for a single particle was played by the variable x is now played by an entire configuration of the field throughout space. For a scalar Klein-Gordon field, for example, that might be the values of the field φ(x) at every spatial location x. But otherwise the same story goes through as before. We construct a wave function by attaching a complex number to every possible value of the position variable; to emphasize that it’s a function of functions, we sometimes call it a “wave functional” and write it as a capital letter,

\Psi[\phi(x)].

The absolute-value-squared of this wave functional tells you the probability that you will observe the field to have the value φ(x) at each point x in space. The functional obeys — you guessed it — a version of the Schrödinger equation, with the Hamiltonian being that of a relativistic scalar field. There are likewise versions of the Schrödinger equation for the electromagnetic field, for Dirac fields, for the whole Core Theory, and what have you.

So the Schrödinger equation is not simply a relic of the early days of quantum mechanics, when we didn’t know how to deal with much more than non-relativistic particles orbiting atomic nuclei. It is the foundational equation of quantum dynamics, and applies to every quantum system there is. (There are equivalent ways of doing quantum mechanics, of course, like the Heisenberg picture and the path-integral formulation, but they’re all basically equivalent.) You tell me what the quantum state of your system is, and what is its Hamiltonian, and I will plug into the Schrödinger equation to see how that state will evolve with time. And as far as we know, quantum mechanics is how the universe works. Which makes the Schrödinger equation arguably the most important equation in all of physics.

While we’re at it, people complained that the cosmological constant Λ didn’t appear in Einstein’s equation (6). Of course it does — it’s part of the energy-momentum tensor on the right-hand side. Again, Einstein didn’t necessarily think of it that way, but these days we know better. The whole thing that is great about physics is that we keep learning things; we don’t need to remain stuck with the first ideas that were handed down by the great minds of the past.

by Sean Carroll at August 15, 2016 10:28 PM

Tommaso Dorigo - Scientificblogging

The 2016 Perseids, And The Atmosphere As A Detector
As a long-time meteor observer, I never lose an occasion to watch the peak of good showers. The problem is that similar occasions have become less frequent in the recent times, due to a busier agenda. 
In the past few days, however, I was at CERN and could afford going out to observe the night sky, so it made sense to spend at least a couple of hours to check on the peak activity of the Perseids, which this year was predicted to be stronger than usual.

read more

by Tommaso Dorigo at August 15, 2016 12:16 PM

The n-Category Cafe

What is a Formal Proof?

There’s been some discussion recently in the homotopy type theory community about questions like “must type-checking always be decidable?” While the specific phrasing of this question is specific to type theory (and somewhat technical as well), it is really a manifestation of a deeper and more general question: what is a formal proof?

At one level, the answer to this question is a matter of definition: any particular foundational system for mathematics defines what it considers to be a “formal proof”. However, the current discussions are motivated by questions in the design of foundational systems, so this is not the relevant answer. Instead the question is what properties should a notion of “formal proof” satisfy for it to be worthy of the name?

To start with let me emphasize that whenever I say a “proof” I will mean a correct proof. In addition to defining a notion of (correct) formal proof, a foundational system often defines some class of “arguments” that may or may not be correct; but for now, let’s just consider the correct proofs.

I’m inclined to claim that the real sine qua non of a notion of “formal proof” is a soundness theorem. That means that if we can prove \(C\) (in some formal system) under hypotheses \(A\) and \(B\), say, and we know (externally to the formal system) that \(A\) and \(B\) are true, we ought to be able to conclude (again externally) that \(C\) is also true. For if a proof doesn’t guarantee the truth of its conclusion, what good is it to prove anything?

To be sure, categorical logicians want more than a simple soundness theorem that refers to “truth”: we want to be able to interpret proofs in arbitrary sufficiently structured categories. More precisely, proofs should provide a means of constructing an initial (or free) category of some sort, which we can map uniquely into any other such category \(\mathcal{C}\) to interpret proofs in terms of objects and morphisms in \(\mathcal{C}\). An ordinary soundness theorem is just the special case of this when \(\mathcal{C}\) is a category that we consider “the real world”, such as a category of sets. We might also want to have a dual “completeness theorem” that everything true is provable, in some sense. However, while those are all nice, without at least a simple soundness theorem I think it’s hard to justify calling something a “proof”.

Now, how do we prove a soundness theorem? In principle, I’m willing to be open-minded about this, but the only way I’ve ever seen to do it is by induction. That is, a formal proof is (or gives rise to something that is) inductively constructed by some collection of rules, and we prove soundness by proving that each of these rules “preserves truth”, so that when we put a bunch of them together into a proof, truth is still preserved all the way through.

For example, one common rule (called “or-elimination”) says that if we can prove “\(A\) or \(B\)”, and assuming \(A\) we can prove \(C\), and also assuming \(B\) we can prove \(C\), then we can deduce \(C\) without assuming anything extra. This rule is sound under the ordinary meaning of “or”, in the following sense: assuming inductively that the three premises are sound, we conclude that (1) “\(A\) or \(B\)” is true, that (2) if \(A\) is true then so is \(C\), and that (3) if \(B\) is true then so is \(C\). By (1), it must be that either \(A\) is true or \(B\) is true; in the first case, we deduce that \(C\) is true from (2); while in the second case we deduce that \(C\) is true from (3). Thus, in all cases \(C\) must be true. This is one of the “inductive steps” in a proof by “structural induction” that all proofs preserve truth, so that the formal system is sound.

(I hesitated before including this example at all, because it looks so tautological that one feels as if nothing is happening. The point is that it’s shifting the proof from the object-theory to the meta-theory: the rule of proof in the object-theory is sound because it represents a pattern of reasoning that’s correct in the meta-theory. Rest assured that it becomes much less trivial for more complicated systems like dependent type theory, and also when we want to interpret proofs into arbitrary categories.)

So in conclusion, it seems to me (at the moment) that any notion of “formal proof” worthy of the name must be (or include, or give rise to) some sort of inductively defined structure that we can use to prove a soundness theorem. In type theory, these inductively defined structures are called derivation trees.

Now, as I mentioned above, many formal systems also include some notion of “argument” that might or might not be a proof. Indeed, I’m tempted to claim that it’s impossible to avoid dealing with this at some point. Of course, ordinary mathematics written on paper is not a formal proof in any formal system; but even if we tried (totally infeasibly) to always write complete formal proofs on paper, there would always be the possibility (because “paper is untyped”) that we mis-applied a rule somewhere. One might say that formal proof is a mathematical abstraction that can exist essentially nowhere in reality.

The question of “type-checking” is about how we get from an argument to a proof. Of course, this depends on what kind of argument we are talking about! For instance, most type theories include a notion of “term” that plays the role of a kind of argument. Terms are, roughly, one-dimensional syntactic representations of derivation trees (or parts of them); but rather than directly being defined inductively as derivation trees are, the “well-typed terms” (those that represent derivation trees) are singled out from a larger class of “untyped terms”. Thus the latter are a sort of “argument” that might or might not give rise to a proof.

For instance, a proof by “or-elimination” as above would be represented by a term like \(case(M,u.P,v.Q)\), where \(M\) is a proof of “\(A\) or \(B\)”, \(P\) is a proof of \(C\) that gets to use a hypothesis \(u\) of \(A\), and \(Q\) is a proof of \(C\) that gets to use a hypothesis \(v\) of \(B\). This is an expression of the sort that could be typed into a computer proof assistant; it’s formally analogous to arithmetic expressions like \(x+(y*z)\) that you can write in a Java or Python program. But in addition to the problems of potential typos and misspellings, we can put together strings of symbols that “look like terms” but don’t actually represent proofs; for instance, maybe we write \(case(M,u.P,v.Q)\) where \(M\) is not actually a proof of “\(A\) or \(B\)” for any \(A\) and \(B\) (maybe it is a proof of “if \(A\) then \(B\)”), or where \(P\) and \(Q\) are not proofs of the same conclusion \(C\). In a statically typed programming language, this sort of thing produces a compiler error; that’s basically the same sort of thing as the type-checking of a proof assistant.
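
To make the analogy concrete, here is a minimal sketch in Lean 4 (my own illustration, not the notation of any particular system discussed in the post) of what a well-typed or-elimination term looks like; the names A, B, C, h, ha, hb are hypothetical.

    -- Or-elimination as a term: from a proof h of A ∨ B and two branch proofs,
    -- Or.elim builds a proof of C, much like the case(M, u.P, v.Q) term above.
    example (A B C : Prop) (h : A ∨ B) (ha : A → C) (hb : B → C) : C :=
      Or.elim h ha hb

    -- The same proof written by explicit case analysis on the hypothesis.
    example (A B C : Prop) (h : A ∨ B) (ha : A → C) (hb : B → C) : C :=
      match h with
      | Or.inl a => ha a   -- first branch: use the hypothesis of A
      | Or.inr b => hb b   -- second branch: use the hypothesis of B

    -- Handing Or.elim a proof of A → B instead of A ∨ B is exactly the kind of
    -- ill-formed "argument" described above; the type-checker rejects it.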

The specific question “should type-checking be decidable” is about whether there should be an algorithm to which you can hand an untyped term and be guaranteed to get an answer of either “yes, this represents a derivation tree” or “no, it doesn’t” in a finite amount of time. In other words, your compiler can never hang; it must either succeed or return with an error message. But from the perspective I’m advocating here, “decidable checking” is not a fundamental property of a formal system, or more precisely not a property of the proofs in that system. Rather, it is a property of a certain class of “arguments” that might or might not represent proofs.

In particular, although many type theories have decidable type-checking for terms, essentially all practical type theories also include other kinds of “argument” that do not have “decidable checking”. For instance, practical implementations of dependent type theory (such as Coq, Agda, and Lean) never force the user to write out complete terms (let alone derivation trees); instead they have powerful “elaborators” that can fill in implicit arguments using techniques such as unification and typeclass inference. These elaborators are not guaranteed to terminate in general; for instance, it’s quite possible to set up a loop in typeclass inference that causes Coq to hang, and “higher-order unification” is formally known to be an undecidable problem.

And this is even before we get to the informal arguments found in pencil-and-paper mathematics. Converting those into a formal proof of any sort is certainly undecidable by any algorithm!

So we certainly can’t do without “notions of argument” that don’t have “decidable type-checking”. But what I would ask of designers of new formal systems is whether there is, somewhere in the specification of the system, a (probably inductive) notion of proof for which one can prove a soundness theorem (and, hopefully, also a categorical initiality theorem, and maybe a completeness theorem). If not, how do you justify calling the things you are talking about “proofs”?

Note that “type-checking” is trivially “decidable” for proofs of this sort; by their very nature they are correct proofs. The further our “arguments” get from the inductive formal proofs, the more difficult of a problem “type-checking” becomes, until in the limit we get to “I have found a truly marvelous proof of this result which this margin is too narrow to contain”. Somewhere in there is the boundary between decidable and undecidable type-checking. Somewhere else in there is the boundary between feasible and infeasible type-checking. And in practice, we certainly make use of “notions of argument” that lie on both sides of each of those lines.

However, it does seem to me that if a formal system is going to have at its core some inductive notion of proof, then for a proof assistant to honestly call itself an implementation of that formal system, it ought to include, somewhere in its internals, some data structure that represents those proofs reasonably faithfully. And given how trivial “type-checking” is for the actual formal proofs, it seems to me that anything calling itself a “reasonably faithful representation” of those proofs ought to at least have decidable type-checking. Those representations may not be what the user of the system calls “terms”; but they ought to be there somewhere. Informally, the purpose of a proof assistant is to assist the user in producing a proof; it may not actually go all the way to produce a complete inductive formal proof, but it ought to at least produce something close enough that the rest of the distance can be crossed algorithmically.

However, I remain open to being convinced otherwise.

EDIT: After a long and probably hard-to-follow discussion in the comments, I have been convinced otherwise (though I remain open to being convinced back). I still say that a proof assistant ought to somehow “faithfully represent formal proofs” internally. But now I have an actual mathematical/practical reason for that (not just a philosophical one), which implies a concrete criterion for what “faithfully represent” means: I want to be able to compute with the formal proofs that get created, such as by applying a constructive proof of the soundness/initiality theorem to them. With this criterion, it turns out that decidability is a red herring. What I want is more than decidability in one sense — we need the actual proofs, not just a decidably checkable representation of them — but also less in another sense, since we don’t need to actually represent the entire proof as a data structure, only “compute with that data structure” as if we had it. See this comment below and its responses for further discussion.

by shulman (viritrilbia@gmail.com) at August 15, 2016 04:19 AM

August 14, 2016

Clifford V. Johnson - Asymptotia

A Known Fact…

“The fact that certain bodies, after being rubbed, appear to attract other bodies, was known to the ancients.” Thus begins, rather awesomely, the preface to Maxwell’s massively important “Treatise on Electricity and Magnetism” (1873). -cvj

The post A Known Fact… appeared first on Asymptotia.

by Clifford at August 14, 2016 07:29 PM

August 13, 2016

Geraint Lewis - Cosmic Horizons

For the love of Spherical Harmonics
I hate starting every blog post with an apology for having been busy, but I have been. Teaching Electromagnetism to our first year class, computational physics using MatLab, and wrangling six smart, talented students take up a lot of time.

But I continue to try and learn a new thing every day! And so here's a short summary of what I've been doing recently.

It's no secret that I love maths. I'm not skilled enough to be a mathematician, but I am an avid user. One of the things I love about maths is its shock value. What, I hear you say, shock? Yes, shock.

I remember discovering that trigonometric functions can be written as infinite series, and finding that you can calculate these series numerically on a computer by adding the terms together, getting more and more accurate as you add higher terms.
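
As a minimal illustration of that idea (my own sketch, not code from the post; the number of terms is an arbitrary choice), here are a few lines of Python that sum the Taylor series for sin(x) term by term:

    import math

    def sin_series(x, n_terms):
        """Partial sum of the Taylor series sin(x) = x - x^3/3! + x^5/5! - ..."""
        return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
                   for k in range(n_terms))

    x = 1.0
    for n in (1, 2, 4, 8):
        # Each extra term brings the partial sum closer to math.sin(1) = 0.8414...
        print(n, sin_series(x, n), math.sin(x))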

And then there is Fourier Series! The fact that you can add these trigonometric functions together, appropriately weighted, to make other functions, functions that look nothing like sines and cosines. And again, calculating these on a computer.

This is my favourite, the fact that you can add waves together to make a square wave.
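
Here is a rough Python sketch of that square-wave sum (again my own, not the post's; the 50-term cutoff is arbitrary). The Fourier series of a unit square wave contains only the odd harmonics sin((2k+1)x)/(2k+1), scaled by 4/pi:

    import numpy as np

    def square_wave_partial_sum(x, n_terms):
        """Sum the first n_terms odd harmonics of the square-wave Fourier series."""
        k = np.arange(n_terms)[:, None]          # harmonic index, one row per term
        return (4 / np.pi) * np.sum(np.sin((2*k + 1) * x) / (2*k + 1), axis=0)

    x = np.linspace(0, 2 * np.pi, 9)
    # Values sit near +1 on (0, pi) and near -1 on (pi, 2*pi), apart from the jumps.
    print(square_wave_partial_sum(x, 50))
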
But we can go one step higher. We can think of waves on a sphere. These are special waves called Spherical Harmonics.

Those familiar with Schrodinger's equation know that these appear in the solution for the hydrogen atom, describing the wave function, telling us about the properties of the electron.

But spherical harmonics on a sphere are like the sines and cosines above, and we can describe any function over a sphere by summing up the appropriately weighted harmonics. What function, you might be thinking? How about the heights of the land and the depths of the oceans over the surface of the Earth?

This cool website has done this, and provides the coefficients that you need to use to describe the surface of the Earth in terms of spherical harmonics. The coefficients are complex numbers as they describe not only how much of a harmonic you need to add, but also how much you need to rotate it.

So, I made a movie.
What you are seeing is the surface of the Earth. At the start, we have only the zeroth "mode", which is just a constant value across the surface. Then we add the first mode, a "dipole", which is negative on one side of the Earth and positive on the other, appropriately rotated. And then we keep adding higher and higher modes, which adds more and more detail. And I think it looks very cool!
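
Here is a rough Python sketch of that frame-by-frame reconstruction using healpy (my own guess at the workflow, not the author's script; the coefficient array alm is a random stand-in for whatever the website provides, and lmax and nside are arbitrary choices):

    import numpy as np
    import healpy as hp

    lmax = 64                      # assumed maximum multipole of the expansion
    nside = 128                    # HEALPix resolution of the reconstructed map

    # Stand-in for the complex spherical-harmonic coefficients of the topography,
    # stored in healpy's (l, m) ordering.
    size = hp.Alm.getsize(lmax)
    alm = np.random.randn(size) + 1j * np.random.randn(size)

    # Rebuild the map keeping only multipoles up to l_cut, mimicking the movie's
    # gradual addition of higher and higher modes.
    for l_cut in (0, 1, 2, 4, 8, 16, lmax):
        kept = alm.copy()
        for l in range(l_cut + 1, lmax + 1):
            for m in range(l + 1):
                kept[hp.Alm.getidx(lmax, l, m)] = 0.0
        partial_map = hp.alm2map(kept, nside)    # heights over the sphere from the kept modes
        print(l_cut, float(partial_map.min()), float(partial_map.max()))

Each partial map could then be rendered as a frame and stitched together with ffmpeg, much as the post describes.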

Why are you doing this, I hear you cry. Why, because to make this work, I had to beef up my knowledge of python and povray, learn how to fully wrangle healpy to deal with functions on a sphere, a bit of scripting, a bit of ffmpeg, and remember just what spherical harmonics are. And as I've written about before, I think it is an important thing for a researcher to grow these skills.

When will I need these skills? Dunno, but they're now in my bag of tricks and ready to use.

by Cusp (noreply@blogger.com) at August 13, 2016 03:47 AM

August 12, 2016

ZapperZ - Physics and Physicists

Proton Radius Problem
John Timmer on Ars Technica has written a wonderful article on the "proton radius problem". The article gives a brief background on an earlier discovery, and then moves on to a new result on a deuterium atom.

This area is definitely a work-in-progress, and almost as exciting as the neutrino mass deficiency mystery from many years ago.

Zz.

by ZapperZ (noreply@blogger.com) at August 12, 2016 03:34 PM

ZapperZ - Physics and Physicists

The Science of Sports
With the Olympics in full swing right now, the Perimeter Institute has released a series that discusses the physics behind various sports at the Games. Called The Physics of the Olympics, it covers a wide range of events.

Zz.

by ZapperZ (noreply@blogger.com) at August 12, 2016 03:20 PM

August 11, 2016

Lubos Motl - string vacua and pheno

Modern obsession with permanent revolutions in physics
Francisco Villatoro joined me and a few others in pointing out that it's silly to talk about crises in physics. The LHC just gave us a big package of data at unprecedented energies to confirm a theory that's been around for a mere four decades. It's really wonderful.

People want new revolutions and they want them quickly. This attitude to physics is associated with the modern era. It's not new as I will argue but on the other hand, I believe that it was unknown in the era of Newton or even Maxwell. Modern physics – kickstarted by the discovery of relativity and quantum mechanics – has shown that something can be seriously wrong with the previous picture of physics. So people naturally apply "mathematical induction" and assume that a similar revolution will occur infinitely many times.

Well, I don't share this expectation. The framework of quantum mechanics is very likely to be with us forever. And even when it comes to the choice of the dynamics, string theory will probably be the most accurate theory forever – and quantum field theory will forever be the essential framework for effective theories.




David Gross' 1994 discussion of the 1938 conference in Warsaw shows that the desire to "abandon all the existing knowledge" is in no way new. It was surely common among physicists before the war.




Let me remind you that Gross has argued that all the heroes of physics were basically wrong and deluded about various rather elementary things – except for Oskar Klein who presented his "almost Standard Model" back in the late 1930s. Klein was building on the experience with the Kaluza-Klein theory of extra dimensions and his candidate for a theory of everything
  • appreciated the relationship between particles and fields, both for fermions and bosons that he treated analogously
  • used elementary particles analogous to those in the Standard Model – gauge bosons as well as elementary fermions
  • it had doublets of fermions (electron, neutrino; proton, neutron) and some massless and massive bosons mediating the forces
  • just a little bit was missing for him to construct the full Yang-Mills Lagrangian etc.
If Klein had gotten 10 brilliant graduate students, he might well have discovered the correct Standard Model before Hitler's invasion of Poland a year later. Well, they would probably have had to do some quick research on the hadrons, to learn about the colorful \(SU(3)\) and other things, but they still had a chance. 3 decades of extra work looks excessive from the contemporary viewpoint. But maybe we're even sillier these days.

Gross says that there were some borderline insane people at the conference – like Eddington – and even the sane giants were confused about the applicability of quantum fields for photons, among other things. And Werner Heisenberg, the main father of quantum mechanics, was among those who expected all of quantum mechanics to break down imminently. Gross recalls:
Heisenberg concluded from the existence both of ultraviolet divergences and multi-particle production that there had to be a fundamental length of order the classical radius of the electron, below which the concept of length loses its significance and quantum mechanics breaks down. The classical electron radius, \(e^2/mc^2\) is clearly associated with the divergent electron self-energy, but also happens to be the range of nuclear forces, so it has something to do with the second problem. Quantum mechanics itself, he said, should break down at these lengths. I have always been amazed at how willing the great inventors of quantum mechanics were to give it up all at the drop of a divergence or a new experimental discovery.
The electron Compton wavelength, \(10^{-13}\) meters, was spiritually their "Planck scale". Everything – probably including the general rules of quantum mechanics – was supposed to break down over there. We know that quantum mechanics (in its quantum field theory incarnation) works very well at distances around \(10^{-20}\) meters, seven orders of magnitude shorter than the "limit" considered by Heisenberg.

This is just an accumulated piece of evidence supporting the statement that the belief in the "premature collapse of the status quo theories" has been a disease that physicists have suffered from for a century or so.

You know, if you want to localize the electron at shorter distances than the Compton wavelength, the particle pair production becomes impossible to neglect. Also, the loop diagrams produce integrals whose "majority" is the ultraviolet divergence, suggesting that you're in the regime where the theory breaks down. In some sense, it was reasonable to expect that a "completely new theory" would have to take over.

In reality, we know that the divergences may be removed by renormalization and the theory – quantum field theory – has a much greater range of validity. In some sense, the "renormalized QED" may be viewed as the new theory that Heisenberg et al. had in mind. Except that by its defining equations, it's still the same theory as the QED written down around 1930. One simply adds rules how to subtract the infinities to get the finite experimental predictions.

I want to argue that these two historical stories could be analogous:
Heisenberg believed that around \(e^2/mc^2 \sim 10^{-13}\,{\rm meters}\), all hell breaks loose because of the particle production and UV divergences.

Many phenomenologists have believed that around \(1/m_{\rm Higgs}\sim 10^{-19}\,{\rm meters}\), all hell breaks loose in order to make the light Higgs mass natural.
In both cases, there is no "inevitable" reason why the theory should break down. The UV divergences are there and dominate above the momenta \(|p|\sim m_e\). But they don't imply an inconsistency because the renormalization may deal with them.

In the case of the naturalness, everyone knows that there is not even a potential for an inconsistency. The Standard Model is clearly a consistent effective field theory up to much higher energies. It just seems that it's fine-tuned, correspondingly unnatural, and therefore "unlikely" assuming that the parameters take some "rather random values" from the allowed parameter space, using some plausible measure on the parameter space.

At the end, Heisenberg was wrong that QED had to break down beneath the Compton wavelength. However, he was morally right about a broader point – that theories may break down and be replaced by others because of divergences. Fermi's four-fermion theory produces divergences that cannot be cured by renormalization and that's the reason why the W-bosons, the Z-bosons, and the rest of the electroweak theory have to be added at the electroweak scale. An analogous enhancement of the whole quantum field theory framework to string theory is needed near the string/Planck scale or earlier, thanks to the analogous non-renormalizability of Einstein's GR.

So something about the general philosophy believed by Heisenberg was right but the details just couldn't have been trusted as mechanically as the folks in the 1930s tended to do. Whether QED was consistent at length scales shorter than the Compton wavelength was a subtle question and the answer was ultimately Yes, it's consistent. So there was no reason why the theory "had to" break down and it didn't break down at that point.

Similarly, the reasons why the Standard Model should break down already at the electroweak scale are analogously vague and fuzzy. As I wrote a year ago, naturalness is fuzzy, subjective, model-dependent, and uncertain. You simply can't promote it to something that will reliably inform you about the next discovery in physics and the precise timing.

But naturalness is still a kind of an argument that broadly works, much like Heisenberg's argument was right whenever applied more carefully in different, luckier contexts. One simply needs to be more relaxed about the validity of naturalness. There may be many reasons why things look unnatural even though they are actually natural. Just compare the situation with that of Heisenberg. Before the renormalization era, it may have been sensible to consider UV divergences as a "proof" that the whole theory had to be superseded by a different one. But it wasn't true for subtle reasons.

The relaxed usage of naturalness should include some "tolerance towards a hybrid thinking of naturalness and the anthropic selection". Naturalness and the anthropic reasoning are very different ways of thinking. But that doesn't mean that they're irreconcilable. Future physicists may very well be forced to take both of them into account. Let me offer you a futuristic, relaxed, Lumoesque interpretation why supersymmetry or superpartner masses close to the electroweak scale are preferred.

Are the statements made by the supporters of the "anthropic principle" universally wrong? Not at all. Some of them are true – in fact, tautologically true. For example, the laws of physics and the parameters etc. are such that they allow the existence of stars and life (and everything else we see around, too). You know, the subtle anthropic issue is that the anthropic people also want to okay other laws of physics that admit "some other forms of intelligent life" but clearly disagree with other features of our Universe. They look at some "trans-cosmic democracy" in which all intelligent beings, regardless of their race, sex, nationality, and string vacuum surrounding them, are allowed to vote in some Multiverse United Nations. ;-)

OK, my being an "opponent of the anthropic principle as a way to discover new physics" means that I don't believe in this multiverse multiculturalism. It's impossible to find rules that would separate objects in different vacua to those who can be considered our peers and those who can't. For example, even though the PC people are upset, I don't consider e.g. Muslims who just mindlessly worship Allah to be my peers, to be the "same kind of observers as I am". So you may guess what I could think about some even stupider bound states of some particles in a completely different vacuum of string theory. Is that bigotry or racism not to consider some creatures from a different heterotic compactification a subhuman being? ;-)

So I tend to think that the only way to use the anthropic reasoning rationally is simply to allow the selection of the vacua according to everything we have already measured. I have measured that there exists intelligent life in the Universe surrounding me. But I have also measured the value of the electron's electric charge (as an undergrad, and I hated to write the report that almost no one was reading LOL). So I have collapsed the wave function into the space of the possible string vacua that are compatible with these – and all other – facts.

If all vacua were non-supersymmetric but if they were numerous, I would agree with the anthropic people that it's enough to have one in which the Higgs mass is much lower than the Planck scale if you want to have life – with long-lived stars etc. So the anthropic selection is legitimate. It's totally OK to assume that the vacua that admit life are the focus of the physics research, that there is an extra "filter" that picks the viable vacua and doesn't need further explanations.

However, what fanatical champions of the anthropic principle miss – and that may be an important point of mine – is that even if I allow this "life exists" selection of the vacua as a legitimate filter or a factor in the probability distributions for the vacua, I may still justifiably prefer the natural vacua with a rather low-energy supersymmetry breaking scale. Why?

Well, simply because these vacua are much more likely to produce life than the non-supersymmetric or high-SUSY-breaking-scale vacua! In those non-SUSY vacua, the Higgs is likely to be too heavy and the probability that one gets a light Higgs (needed for life) is tiny. On the other hand, there may be a comparable number of vacua that have a low-energy SUSY and a mechanism that generates an exponentially low SUSY breaking scale by some mechanism (an instanton, gluino condensate, something). And in this "comparably large" set of vacua, a much higher percentage will include a light Higgs boson and other things that are helpful or required for life.

So even if one reduces the "probability of some kind of a vacuum" to the "counting of vacua of various types", the usual bias equivalent to the conclusions of naturalness considerations may still emerge!

You know, some anthropic fanatics – and yes, I do think that even e.g. Nima has belonged to this set – often loved or love to say that once we appreciate the anthropic reasoning, it follows that we must abandon the requirement that the parameters are natural. Instead, the anthropic principle takes care of them. But this extreme "switch to the anthropic principle" is obviously wrong. It basically means that all of remaining physics arguments get "turned off". But it isn't possible to turn off physics. The naturalness-style arguments are bound to re-emerge even in a consistent scheme that takes the anthropic filters into account.

Take F-theory on a Calabi-Yau four-fold of a certain topology. It produces some number of non-SUSY (or high-energy SUSY) vacua, and some number of SUSY (low-energy SUSY) vacua. These two numbers may differ by a few orders of magnitude. But the probability to get a light Higgs may be some \(10^{30}\) times higher in the SUSY vacua. So the total number of viable SUSY vacua will be higher than the total number of non-SUSY vacua. We shouldn't think that this is some high-precision science because the pre-anthropic ratio of the number of vacua could have differed from one by an order of magnitude or two. But it's those thirty orders of magnitude (or twenty-nine) that make us prefer the low-energy SUSY vacua.

On the other hand, there's no reliable argument that would imply that "new particles as light as the Higgs boson" have to exist. The argument sketched in the previous paragraph only works up to an order of magnitude or two (or a few).

You know, it's also possible that superpartners that are too light also kill life for some reason; or there is no stringy vacuum in which the superpartners are too light relative to the Higgs boson. In that case, well, it's not the end of the world. The actual parameters allowed by string theory (and life) beat whatever distribution you could believe otherwise (by their superior credibility). If the string vacuum with the lightest gluino that is compatible with the existing LHC observations has a \(3\TeV\) gluino, then the gluino simply can't be lighter. You can protest against it but that's the only thing you can do against a fact of Nature. The actual constraints resulting from full-fledged string theory or a careful requirement of "the existence of life" always beat some vague distributions derived from the notion of naturalness.

So when I was listing the adjectives that naturalness deserves, another one could be "modest" i.e. "always prepared to be superseded by a more rigorous or quantitative argument or distribution". Naturalness is a belief that some parameters take values of order one – but we only need to talk about the values in this vague way up to the moment when we find a better or more precise or more provable way to determine or constrain the value of the parameter.

Again, both the champions of the anthropic principle and the warriors for naturalness often build on exaggerated, fanatical, oversimplified, or naive theses. Everyone should think more carefully about the aspects of these two "philosophies" – their favorite one as well as the "opposite" one – and realize that there are lots of statements and principles in these "philosophies" that are obviously right and also lots of statements made by the fanatical supporters that are obviously wrong. Even more importantly, "naturalness" and "anthropic arguments" are just the most philosophically flavored types of arguments in physics – but aside from them, there still exist lots of normal, "technical" physics arguments. I am sure that the latter will be a majority of physics in the future just like they were a majority of physics in the past.

At the end, I want to say that people could have talked about the scales in ways that resemble the modern treatment of the scale sometime in the 1930s, too. The first cutoff where theories were said to break down was the electron mass, below an \({\rm MeV}\). Quantum field theory was basically known in the 1930s. Experiments went from \(1\keV\) to \(1\MeV\) and \(1\GeV\) to \(13\TeV\) – it was many, many orders of magnitude – but the framework of quantum field theory as the right effective theory survived. All the changes have been relatively minor since the 1930s. Despite the talk about some glorious decades in the past, people have been just adjusting technical details of quantum field theory since the 1930s.

And the theory was often ahead of experiments. In particular, new quarks (at least charm and top) were predicted before they were observed. The latest example of this gap was the discovery of the Higgs boson that took place some 48 years after it was theoretically proposed. If string theory were experimentally proven 48 years after its first formula was written down, we would see a proof in 2016. But you know, the number 48 isn't a high-precision law of physics. ;-)

Both experimental discoveries and theoretical discoveries are still taking place. Theories are being constructed and refined every year – even in recent years. And the experiments are finding particles previously unknown to the experiments – most recently, the Higgs boson in 2012. It's the "separate schedules" of the theory and experiment that confuses lots of people. But if you realize that it's normal and it's been a fact for many decades, you will see that there's nothing "unusually slow or frustrating" about the current era. Just try to fairly assess how many big experimental discoveries confirming big theories were done in the 1930s or 1940s or 1950s or 1980s etc.

The talk about frustration, nightmares, walls, and dead ends can't be justified by the evidence. It's mostly driven by certain people's anti-physics agenda.

by Luboš Motl (noreply@blogger.com) at August 11, 2016 06:03 PM

The n-Category Cafe

A Survey of Magnitude

The notion of the magnitude of a metric space was born on this blog. It’s a real-valued invariant of metric spaces, and it came about as a special case of a general definition of the magnitude of an enriched category (using Lawvere’s amazing observation that metric spaces are usefully viewed as a certain kind of enriched category).

Anyone who’s been reading this blog for a while has witnessed the growing-up of magnitude, with all the attendant questions, confusions, misconceptions and mess. (There’s an incomplete list of past posts here.) Parents of grown-up children are apt to forget that their offspring are no longer helpless kids, when in fact they have a mortgage and children of their own. In the same way, it would be easy for long-time readers to have the impression that the theory of magnitude is still at the stage of resolving the basic questions.

Certainly there’s still a great deal we don’t know. But by now there’s also lots we do know, so Mark Meckes and I recently wrote a survey paper:

Tom Leinster and Mark Meckes, The magnitude of a metric space: from category theory to geometric measure theory. ArXiv:1606.00095; also to appear in Nicola Gigli (ed.), Measure Theory in Non-Smooth Spaces, de Gruyter Open.

Here I’ll tell you some of the highlights: ten things we used not to know, but do now.

Before I give you the highlights, I can’t resist repeating the definition of magnitude, as it’s so very simple.

Let \(A\) be a finite metric space. Denote by \(Z_A\) the square matrix whose rows and columns are indexed by the points of \(A\), and with entries \(Z_A(a, b) = e^{-d(a, b)}\). Assuming that \(Z_A\) is invertible, the magnitude of \(A\) is

\[ \left|A\right| = \sum_{a, b \in A} Z_A^{-1}(a, b) \]

— the sum of all the entries of the inverse matrix \(Z_A^{-1}\).
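
Since the definition is just a matrix inversion, it is easy to play with numerically. Here is a minimal sketch (mine, not from the survey) for a finite subset of Euclidean space:

    import numpy as np
    from scipy.spatial.distance import cdist

    def magnitude(points):
        """Magnitude of a finite subset of R^n: the sum of all entries of the
        inverse of the similarity matrix Z(a, b) = exp(-d(a, b))."""
        Z = np.exp(-cdist(points, points))
        return np.linalg.inv(Z).sum()

    # Three points forming a right triangle in the plane.
    A = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
    print(magnitude(A))   # lies between 1 and 3: an "effective number of points"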

This definition immediately prompts the questions: why should \(Z_A\) be invertible? And how do you define magnitude for metric spaces that aren’t finite? The answers appear right at the beginning of the list below.

So, here are ten things we now know about magnitude that once upon a time we didn’t. They’re all covered in the survey paper, where you can also find references.

  1. Magnitude is well-defined (that is, \(Z_A\) is invertible) for every finite subset of Euclidean space. In fact, \(Z_A\) is not only invertible but positive definite when \(A \subseteq \mathbb{R}^n\).

  2. There is a canonical way to extend the definition of magnitude from finite spaces to compact spaces, as long as they’re “positive definite” (meaning that the matrix \(Z_A\) is positive definite for every finite subset \(A\)). For instance, this includes all compact subspaces of \(\mathbb{R}^n\).

    What I mean by “canonical” is that there are several different ways that you might think of extending the definition, and they all give the same answer. For instance, you might define the magnitude of a compact positive definite space \(A\) as the supremum of the magnitudes of its finite subsets. Or, you might take some sequence \((B_n)\) of finite subsets of \(A\) whose union is dense in \(A\), “define” \(\left|A\right| = \lim_{n \to \infty} \left|B_n\right|\), and hope that none of the many things that could go wrong with this “definition” do go wrong. Or, you might abandon finite approximations altogether and try to take a more direct, analysis-based approach. Mark showed that these approaches all work and all give the same number.

  3. When you introduce a scale factor \(t\), the magnitude \(\left|t A\right|\) of a compact set \(A \subseteq \mathbb{R}^n\) is a continuous function of \(t\), taking values in \([1, \infty)\) (assuming \(A \neq \emptyset\)). This is called the magnitude function of \(A\).

  4. The magnitude function of any finite metric space knows its cardinality. In fact, it is increasing for sufficiently large \(t\) and converges to the cardinality as \(t \to \infty\). This illustrates the idea that the magnitude of a finite space is the “effective number of points”. (A small numerical illustration of this is sketched just after this list.)

  5. The magnitude function of a compact subset of \(\mathbb{R}^n\) knows its volume. Specifically, Juan-Antonio Barceló and Tony Carbery showed that for compact \(A \subseteq \mathbb{R}^n\),

    \[ Vol(A) = c_n \lim_{t \to \infty} \frac{\left|t A\right|}{t^n} \]

    where \(c_n\) is a known constant.

  6. The magnitude function of a compact subset of \(\mathbb{R}^n\) also knows its Minkowski dimension. (Minkowski dimension is one of the more important notions of fractional dimension; it’s typically equal to the Hausdorff dimension.) Specifically, Mark showed that the Minkowski dimension of a compact set \(A \subseteq \mathbb{R}^n\) is equal to the growth of the magnitude function, meaning that there are constants \(c, C \gt 0\) such that

    \[ c \lt \frac{\left|t A\right|}{t^{\dim A}} \lt C \]

    for all \(t \gg 0\).

  7. There’s an exact formula for the magnitude of the sphere of any dimension, with the geodesic metric. This was found by Simon Willerton.

  8. There’s also an exact formula for the magnitude of any odd-dimensional Euclidean ball, computed by Barceló and Carbery. This formula raises lots of interesting questions, and I may say more about it in a future post.

  9. The magnitude function of a convex body in \(\mathbb{R}^n\) with the taxicab metric (i.e. the metric induced by the 1-norm) is a polynomial. Its degree is the dimension of the body, and the coefficients are certain geometric measures of the body — e.g. up to a known factor, the top coefficient is the volume.

  10. There are various asymptotic formulas for the magnitude functions of other spaces. For instance, Simon showed that the magnitude function of a homogeneous Riemannian \(n\)-manifold is asymptotically a polynomial of degree \(n\) whose top coefficient is proportional to the volume and whose \((n - 2)\)th coefficient is proportional to the total scalar curvature. And Simon and I found various instances in which an asymptotic inclusion-exclusion principle holds: e.g. for the ternary Cantor set \(C\),

    \[ \lim_{t \to \infty} \Bigl( \left|3 t C\right| - 2\left|t C\right| \Bigr) = 0, \]

    corresponding to the fact that \(3 t C\) is the disjoint union of 2 copies of \(t C\).
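
To illustrate points 3 and 4 above concretely, here is a small continuation of the earlier magnitude() sketch (again my own toy example, with arbitrary scale factors): rescaling the space by \(t\) and recomputing gives the magnitude function, which climbs towards the cardinality as \(t\) grows.

    # Continuing the magnitude() sketch above: the magnitude function t -> |tA|.
    for t in (0.1, 1.0, 10.0, 100.0):
        print(t, magnitude(t * A))   # approaches 3, the number of points, as t grows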

You can find more highlights, plus all the details I’ve left out, in our survey.

There are also a couple of important developments that we don’t cover:

  • The magnitude of a graph, and the corresponding homology theory. You can view a graph as a metric space: the points are the vertices, and distances are shortest path-lengths, giving all edges length \(1\). The study of magnitude for graphs has a special flavour, largely because distances in graphs are always integers. But most excitingly, it is just the shadow of a graded homology theory for graphs, developed by Richard Hepworth and Simon Willerton. Magnitude is the Euler characteristic of that homology theory, in the same way that the Jones polynomial is the Euler characteristic of Khovanov homology for links. For instance, the product formula for magnitude is a corollary of a Künneth theorem in homology — a decategorification of it, if you like. Similarly, a certain inclusion-exclusion formula for magnitude is the decategorification of a Mayer-Vietoris theorem for magnitude homology.

  • There is an intimate connection between magnitude and maximum entropy — or more exactly, the exponential of entropy, which I like to call diversity. Mark and I wrote this up separately a little while ago. In fact, Mark used the relationship between magnitude and maximum diversity to prove the result on Minkowski dimension that I mentioned earlier.

Incidentally, I know of just one other categorically-minded paper with the words “geometric measure theory” in the title, and it’s one of my favourite papers of all time:

Stephen H. Schanuel, What is the length of a potato? An introduction to geometric measure theory. In Categories in Continuum Physics, Lecture Notes in Mathematics 1174. Springer, Berlin, 1986.

It’s nine pages of pure joy. Even if you don’t read our survey, I strongly recommend that you read this!

by leinster (Tom.Leinster@ed.ac.uk) at August 11, 2016 01:34 PM

August 10, 2016

Symmetrybreaking - Fermilab/SLAC

#AskSymmetry Twitter chat with Risa Wechsler

See cosmologist Risa Wechsler's answers to readers' questions about dark matter and dark energy.

[View the story "#AskSymmetry Twitter Chat with Risa Wechsler - Aug. 9, 2016" on Storify: http://storify.com/Symmetry/asksymmetry-twitter-chat-with-cosmologist-risa-wec]

August 10, 2016 07:06 PM

Lubos Motl - string vacua and pheno

Naturalness, a null hypothesis, hasn't been superseded
Quanta Magazine's Natalie Wolchover has interviewed some real physicists to learn
What No New Particles Means for Physics
so you can't be surprised that the spirit of the article is close to my take on the same question published three days ago. Maria Spiropulu says that experimenters like her know no religion so her null results are a discovery, too. I agree with that. I am just obliged to add that if she is surprised that she is not getting some big prizes for the discovery of the Standard Model at \(\sqrt{s}=13\TeV\), it's because her discovery is too similar to the discovery of the Standard Model at \(\sqrt{s}=1.96\TeV\), \(\sqrt{s}=7\TeV\), and \(\sqrt{s}=8\TeV\), among others. ;-) And those previous similar discoveries were already made by others.

She and others at the LHC are doing a wonderful job and tell us the truth but the opposite answer – new physics – would still be more interesting for the theorists – or any "client" of the experimenters. I believe that this point is obvious and it makes no sense to try to hide it.

Nima Arkani-Hamed says lots of things I appreciate, too, although his assertions are exaggerated, as I will discuss. It's crazy to talk about a disappointment, he tells us. Experimenters have worked hard and well. Those who whine that some new pet model hasn't been confirmed are spoiled brats who scream because they didn't get their favorite lollipop and they should be spanked.




Yup. But when you look under the surface, you will see that there are actually many different opinions about naturalness and the state of physics expressed by different physicists. If you're strict enough, many of these opinions almost strictly contradict each other.




Nathaniel Craig whom I know as a brilliant student at Harvard says that the absence of new physics will have to be taken into account and addressed but he implicitly makes it clear that he will keep on thinking about theories such as his "neutral naturalness". Some kind of naturalness will almost certainly be believed and elaborated upon by people like him in the future, anyway. I think that Nathaniel and other bright folks like that should grow balls and say some of these things more clearly – even if they contradict some more senior colleagues.

Aside from saying that the diphoton could have been groundbreaking (yup), Raman Sundrum said:
Naturalness is so well-motivated that its actual absence is a major discovery.
Well, it is well-motivated but it hasn't been shown not to exist. This claim of mine contradicting Sundrum's assertion above was used in the title of this blog post.

What does it mean that you show that naturalness doesn't exist? Well, naturalness is a hypothesis and you want to exclude it. Except that naturalness – while very "conceptual" – is a classic example of a null hypothesis. If you want to exclude it, you should exclude it at least by a five-sigma deviation! You need to find a phenomenon whose probability is predicted to be smaller than 1 in 1,000,000 according to the null hypothesis.

We are routinely used to requiring just a 2-sigma (95%) exclusion for particular non-null hypotheses that add some new particles or effects. But naturalness is clearly not one of those. Naturalness is the null hypothesis in these discussions. So you need to exclude it by the five-sigma evidence. Has it taken place?

Naturalness isn't a sharply defined Yes/No adjective. As parameters become (gradually) much smaller than one, the theory becomes (gradually) less natural. When some fundamental parameter in the Lagrangian is fine-tuned to 1 part in 300, we say that \(\Delta=300\) and the probability that the parameter is this close to the special value (typically zero) or closer is \(p=1/300\).

(The precise formula to define \(\Delta\) in MSSM or a general model is a seemingly technical but also controversial matter. There are many ways to do so. Also, there are lots of look-elsewhere effects that could be added as factors in \(\Delta\) or removed from it. For these reasons, I believe that you should only care about the order of magnitude of \(\Delta\), not some precise changes of values.)

The simplest supersymmetric models have been shown to be unnatural at \(\Delta \gt 300\) or something like that. That means that some parameters look special or fine-tuned. The probability of this degree of fine-tuning is \(p=1/300\) or so. Does it rule out naturalness? No, because we require a five-sigma falsification of the null hypothesis, e.g. \(p=1/1,000,000\) or so. We're very far from it. Superpartners at masses comparable to \(10\TeV\) will still allow naturalness to survive.
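
To make the sigma bookkeeping explicit, here is a tiny sketch of the arithmetic (my own, using the one-sided Gaussian tail convention; the two-sided convention shifts the numbers up slightly, e.g. \(p=0.01\) corresponds to about 2.6 sigma there):

    from scipy.stats import norm

    # One-sided Gaussian tail: probability of a fluctuation of at least x sigma, and back.
    print(norm.sf(5))          # ~2.9e-7, i.e. rarer than 1 in 1,000,000
    print(norm.isf(1 / 300))   # ~2.7: a Delta ~ 300 fine-tuning is roughly a 3-sigma effect
    print(norm.isf(0.01))      # ~2.3: a parameter of order 0.01 is a 2-to-3-sigma oddity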

Twenty years ago, I wasn't fond of using this X-sigma terminology but my conclusions were basically the same. If some parameters are comparable to \(0.01\), they may still be said to be of order one. We know such parameters. The fine-structure constant is \(\alpha\approx 1/137.036\). We usually don't say that it's terribly unnatural. The value may be rather naturally calculated from the \(SU(2)\) and \(U(1)_Y\) electroweak coupling constants and those become more natural, and so on. But numbers of order \(0.01\) only differ from "numbers of order one" by some 2.5 sigma.

My taste simply tells me that \(1/137.036\) is a number of order one. When you need to distinguish it from one, you really need a precise calculation. For me, there's pretty much "no qualitative realm in between" \(1\) and \(1/137.036\). Numbers like \(0.01\) have to be allowed in Nature because we surely know that there are dimensionless ratios (like \(m_{\rm Planck}/m_{\rm proton}\)) that are vastly different from one and they have to come from somewhere. Even if SUSY or something else stabilizes the weak scale, it must still be explained why the scale – and the QCD scale (it's easier) – is so much lower than the Planck scale. The idea that everything is "really" of the same order is surely silly at the end.

OK, assuming that \(\Delta\gt 300\) has been established, Sundrum's claim that it disproves naturalness is logically equivalent to the claim that any 3-sigma deviation seen anywhere falsifies the null hypothesis, and therefore proves some new physics. Well, we know it isn't the case. We had a 4-sigma and 3-sigma diphoton excess. Despite the fact that the Pythagorean combination is exactly 5 (with the rounded numbers I chose), we know that it was a fluke.

Now, the question whether naturalness is true is probably (even) more fundamental than the question whether the diphoton bump came from a real particle. But the degree of certainty hiding in 3-sigma or \(p=1/300\)-probable propositions is exactly the same. If you (Dr Sundrum) think that the observation that some \(\Delta \gt 300\) disproves naturalness, then you're acting exactly as sloppily as if you consider any 3-sigma bump to be a discovery!

A physicist respecting that particle physics is a hard science simply shouldn't do so. Naturalness is alive and well. 3-sigma deviations such as the observation that \(\Delta \gt 300\) in some models simply do sometimes occur. We can't assume that they are impossible. And we consider naturalness to be the "basic story to discuss the values of parameters" because this story looks much more natural or "null" than known competitors. If and when formidable competitors are born, one could start to distinguish them and naturalness could lose the status of "the null hypothesis". But no such convincing competitor exists now.

As David Gross likes to say, naturalness isn't a real law of physics. It's a strategy. Some people have used this strategy too fanatically. They wanted to think that even \(\Delta \gt 10\) was too unnatural and picked other theories. But this is logically equivalent to the decision to follow research directions according to 1.5-sigma deviations. Whenever there's a 1.5-sigma bump somewhere, such a physicist would immediately focus on it. That's simply not how a solid physicist behaves in the case of specific channels at the LHC. So it's not how he should behave when it comes to fundamental conceptual questions such as naturalness, either.

Naturalness is almost certainly a valid principle but when you overuse it – in a way that is equivalent to the assumption that more than 3-sigma or 2-sigma or even 1.5-sigma deviations can't exist – you're pretty much guaranteed to be sometimes proven wrong by Mother Nature because statistics happens. If you look carefully, you should be able to find better guides for your research than 1.5-sigma bumps in the data. And the question whether \(\Delta\lt 10\) or \(\Delta \gt 10\) in some model is on par with any other 1.5-sigma bump. You just shouldn't care about those much.

While I share much of the spirit of Nima's comments, they're questionable at the same moment, too. For example, Nima said:
It’s striking that we’ve thought about these things for 30 years and we have not made one correct prediction that they have seen.
Who hasn't made the correct predictions? Surely many people have talked about the observation of "just the Standard Model" (Higgs and nothing else that is new) at the LHC. I've surely talked about it for decades. And we had discussions about it with virtually all high-energy physicists who have ever discussed anything about broader physics. I think that the Standard Model was by far the single most likely particular theory expected from the first serious LHC run at energies close to \(\sqrt{s}=14\TeV\).

The actual description of this scenario wasn't "considered unlikely" but rather "considered uninteresting". It was simply not too interesting for theorists to spend hours on the possibility that the LHC sees just the Standard Model. And it isn't too interesting for theorists now, either. A hard-working theorist would hardly write a paper in 2010 about the "Standard Model at \(\sqrt{s}=13\TeV\)". There's simply nothing new and interesting to say and no truly new calculation to be made. But the previous sentence – and the absence of such papers – doesn't mean that physicists have generally considered this possibility unlikely.

I am a big SUSY champion but even in April 2007, before the LHC was running, I wrote that the probability was 50% that the LHC would observe SUSY. Most of the remaining 50% is "just the Standard Model" because I considered – and I still consider – the discovery of all forms of new physics unrelated to SUSY before SUSY to be significantly less likely than SUSY.

So I consider the statement that "we haven't made any correct prediction" to be an ill-defined piece of sloppy social science. The truth value depends on who is allowed to be counted as "we" and how one quantifies these voters' support for various possible answers. When it came to a sensible ensemble of particle physicists who carefully talk about probabilities rather than hype or the composition of their papers and who are not (de facto or de iure) obliged to paint things in more optimistic terms, I am confident that the average probability they would quote for "just the Standard Model at the LHC" was comparable to 50%.

I want to discuss one more quote from Nima:
There are many theorists, myself included, who feel that we’re in a totally unique time, where the questions on the table are the really huge, structural ones, not the details of the next particle. We’re very lucky to get to live in a period like this — even if there may not be major, verified progress in our lifetimes.
You know, Nima is great in many roles, including these two:
  1. A stellar physicist
  2. A very good motivational speaker
I believe that the quote above proves the second item, not the first. Are we in a totally unique time? When did the unique time start? Nima has already been talking about it for quite some time. ;-) If the young would-be physicists believe it, the quote may surely increase their excitement. But aren't they being fooled? Is there some actual evidence or a rational reason to think that "physics is going through a totally unique time when structural paradigm shifts are around the corner"?

I think that the opposite statement is much closer to being a rational conclusion from the available evidence. Take the statements by Michelson or Kelvin from over 100 years ago: physics is almost over; all that remains is to measure the values of the parameters with better precision.

Well, I think that the evidence is rather strong that this statement would actually be extremely appropriate for the present situation of physics and the near future! This surely looks like a period in which no truly deep paradigm shifts are taking place and none are expected in coming months or years. I think that Nima's revolutionary proposition reflects his being a motivational speaker rather than a top physicist impartially evaluating the evidence.

We should really divide the question of paradigm shifts into those that demand an experimental validation and those that may proceed without one. These two "realms of physics" have become increasingly disconnected – and this evolution has always been unavoidable. And it's primarily the latter, purely theoretical branch where truly new things are happening. When you only care about theories that explain the actual doable experiments, the situation is well-described by the Michelson-Kelvin quote about the physics of decimals (assuming that Michelson or Kelvin became big defenders of the Standard Model).

Sometimes motivational speeches are great and needed. But at the end, I think that physicists should also be grownups who actually know what they're doing even if they're shaping their opinions about big conceptual questions.

For years, Nima liked to talk about the unique situation and the crossroad where it's being decided whether physics will pursue the path of naturalness or the completely different path of anthropic reasoning. Well, maybe, and Nima sounds persuasive, but I have always had problems with these seemingly oversimplified assertions.

First, the natural-vs-anthropic split is just a description of two extreme philosophies that may be well-defined for the theories we already know but may become inadequate for the discussion of the future theories in physics. In particular, it seems very plausible to me that the types of physics theories in the future will not "clearly fall" into one of the camps (natural or anthropic). They may be hybrids, they may be completely different, they may show that the two roads discussed by Nima don't actually contradict each other. At any rate, I am convinced that when new frameworks to discuss the vacuum selection and other things become persuasive, they will be rather quantitative again: they will have nothing to do with the vagueness and arbitrariness of the anthropic principle as we know it today. Also, they will be careful in the sense that they will avoid some of the naive strategies of the "extreme fans of naturalness" who think that 2-sigma deviations or \(\Delta \gt 20\) are too big and can't occur. Future theories of physics will be theories studied by grownups – physicists who will avoid the naivety and vagueness of both the extreme naturalness cultists and the anthropic metaphysical babblers.

Even if some new physics were – or had been – discovered at the LHC, including supersymmetry (not sure about the extra dimensions, those would be really deep), I would still tend to think that the paradigm shift in physics would probably be somewhat less deep than the discovery of quantum mechanics 90 years ago.

So while I think that it's silly to talk about some collapse of particle or fundamental physics and similar things, I also think that the talk about the exceptionally exciting situation in physics etc. has become silly. It's surely not my obsession to be a golden boy in the middle. But I am in the middle when it comes to the question whether the future years in physics are going to be interesting. We don't know, and the answer is probably gonna be a lukewarm one, too, although neither colder nor hotter answers can be excluded. And I am actually confident that the silent majority of physicists agrees that contemporary physics, and fundamental physics in the foreseeable future, is and will be medium-interesting. ;-)

by Luboš Motl (noreply@blogger.com) at August 10, 2016 06:49 PM

Symmetrybreaking - Fermilab/SLAC

Dark matter hopes dwindle with X-ray signal

A previously detected, anomalously large X-ray signal is absent in new Hitomi satellite data, setting tighter limits for a dark matter interpretation.  

In the final data sent by the Hitomi spacecraft, a surprisingly large X-ray signal previously seen emanating from the Perseus galaxy cluster did not appear. This casts a shadow over previous speculation that the anomalously bright signal might have come from dark matter. 

“We would have been able to see this signal much clearer with Hitomi than with other satellites,” says Norbert Werner from the Kavli Institute for Particle Astrophysics and Cosmology, a joint institute of Stanford University and the Department of Energy’s SLAC National Accelerator Laboratory.

“However, there is no unidentified X-ray line at the high flux level found in earlier studies.”

Werner and his colleagues from the Hitomi collaboration report their findings in a paper submitted to The Astrophysical Journal Letters.      

The mysterious signal was first discovered with lower flux in 2014 when researchers looked at the superposition of X-ray emissions from 73 galaxy clusters recorded with the European XMM-Newton satellite. These stacked data increase the sensitivity to signals that are too weak to be detected in individual clusters.   

The scientists found an unexplained X-ray line at an energy of about 3500 electronvolts (3.5 keV), says Esra Bulbul from the MIT Kavli Institute for Astrophysics and Space Research, the lead author of the 2014 study and a co-author of the Hitomi paper.

“After careful analysis we concluded that it wasn’t caused by the instrument itself and that it was unlikely to be caused by any known astrophysical processes,” she says. “So we asked ourselves ‘What else could its origin be?’”

One interpretation of the so-called 3.5-keV line was that it could be caused by hypothetical dark matter particles called sterile neutrinos decaying in space.

Yet, there was something bizarre about the 3.5-keV line. Bulbul and her colleagues found it again in data taken with NASA’s Chandra X-ray Observatory from just the Perseus cluster. But in the Chandra data, the individual signal was inexplicably strong—about 30 times stronger than it should have been according to the stacked data.

Adding to the controversy was the fact that some groups saw the X-ray line in Perseus and other objects using XMM-Newton, Chandra and the Japanese Suzaku satellite, while others using the same instruments reported no detection.

Astrophysicists highly anticipated the launch of the Hitomi satellite, which carried an instrument—the soft X-ray spectrometer (SXS)—with a spectral resolution 20 times better than the ones aboard previous missions. The SXS would be able to record much sharper signals that would be easier to identify.

Hitomi recorded the X-ray spectrum of the Perseus galaxy cluster with the protective filter still attached to its soft X-ray spectrometer.

Hitomi collaboration

The new data were collected during Hitomi’s first month in space, just before the satellite was lost due to a series of malfunctions. Unfortunately during that time, the SXS was still covered with a protective filter, which absorbed most of the X-ray photons with energies below 5 keV.

“This limited our ability to take enough data of the 3.5-keV line,” Werner says. “The signal might very well still exist at the much lower flux level observed in the stacked data.”

Hitomi’s final data at least make it clear that, if the 3.5-keV line exists, its X-ray signal is not anomalously strong. A signal 30 times stronger than expected would have made it through the filter.

The Hitomi results rule out that the anomalously bright signal in the Perseus cluster was a telltale sign of decaying dark matter particles. But they leave unanswered the question of what exactly scientists detected in the past.

“It’s really unfortunate that we lost Hitomi,” Bulbul says. “We’ll continue our observations with the other X-ray satellites, but it looks like we won’t be able to solve this issue until another mission goes up.”

Chances are this might happen in a few years. According to a recent report, the Japan Aerospace Exploration Agency and NASA have begun talks about launching a replacement satellite.

by Manuel Gnida at August 10, 2016 03:36 PM

Tommaso Dorigo - Scientificblogging

The Daily Physics Problem - 10, 11, 12
As explained in the first installment of this series, these questions are a warm-up for my younger colleagues, who will in two months have to pass a tough exam to become INFN researchers. In fact, now that the application period has ended, I can say that there have been 718 applications for 58 positions. That's a lot, but OTOH any applicant starts off with a one-in-12.4 chance of getting the job, which is not so terribly small. 

read more

by Tommaso Dorigo at August 10, 2016 01:13 PM

Lubos Motl - string vacua and pheno

Is a cosmic string playing with Tabby's star?
Aliens, you're fired: that's how Trump supporters do Tabby's science

It's almost midnight but I could have trouble sleeping because of this idea, which I have to record somewhere. The blog seems like a better place than my scribbling notebook now – despite the fact that the idea could be embarrassingly wrong.



Thousands of young people are excited about a cosmic superstring in the constellation Cygnus.

Let me start with this nice music called "Superstring" by "Cygnus X". I've known it for some 15 years – around 2001, I found it as one composition among several others that appeared when I typed superstring-like keywords into music searches. ;-) If these words were a hint, what would they tell you? Yes, Cygnus is a constellation so if you look for an experimental proof of superstrings, you should look in the constellation Cygnus (swan).



OK, if you were gazing in that direction for years, you would finally find a seemingly ordinary star, Tabby's star or KIC 8462852, which is some 1480 light years away from the Earth. Its radius and mass are about 50% higher than the Sun's. This star became famous because of its strangely behaving flux. Hundreds of news outlets argue that these variations of the flux were caused by extraterrestrial aliens, more specifically by a Dyson swarm they built to extract energy from their star.




Let me offer you a potentially equally groundbreaking but arguably less unnatural solution: a cosmic string. This very blog contains several posts about a possible discovery of a cosmic string through two nearby images of a galaxy that have seemed identical for a few years.




The two nearby images could have been identical and the doubling could have been caused by the gravitational lensing through a cosmic string.



Because of the cosmic string in between – which "attracts" the light rays thanks to its deficit angle \(\delta = GT\) or so – a part of the image is doubled, the optimistic story said. Cosmic strings are the only objects that are capable of creating identical undistorted images via gravitational lensing. The space around a cosmic string is flat almost everywhere – except at the location of the cosmic string. The spacetime looks like a cone and the deficit angle describes how much it differs from a flat plane. It was exciting for some 3 years before sharper images from the Hubble showed in 2006 that there were two similar but distinct galaxies. The cosmic string explanation of CSL-1 was dead.

But now, Tabby's star is another baffling object. Montet and Simon published a preprint about the bizarre time dependence of the flux coming from Tabby's star just a few days ago. During the 3 years when it was observed by Kepler, the flux was dropping approximately linearly, by 0.9% in total. In the half a year that followed, however, the drop accelerated to 2% per 7 months. And then the fast drop stopped, although some linear decrease may be continuing now.

The cloud models basically suck and one of the reasons might be that they assume that the high intensity is the "normal" value – and the low intensity is obtained by some partial shielding. Well, the truth may be the other way around. The low value of the flux may be the normal one and something – the cosmic string lensing – may have been temporarily enhancing the flux simply because the cosmic string made the star's image temporarily wider.



This image of CSL-1 shows that the cosmic string – cutting the picture along a line so that the two red disks are on opposite sides – could have doubled the image and therefore the flux. If the tension and the deficit angle were smaller, it could have just increased the apparent width of the star.

Now, the radius of the star is some 3 light seconds. We need to increase its flux – and therefore its apparent width – by a few percent, so the distance by which the cosmic string shifts parts of the image should be some 0.1 light seconds. On the other hand, the distance of Tabby's star is 1480 light years which is \(5\times 10^{10}\) light seconds. The ratio is roughly \(10^{11}\) so we need a cosmic string with a deficit angle comparable to \(\delta \sim GT\sim 10^{-11}\). That's a much lower tension and deficit angle than generally expected for popular cosmic strings, \(\delta \sim 10^{-6}\), but I will look at the problems it causes tomorrow. (I do think that this lower string tension could be compatible with the unification ideas in Witten's strongly coupled heterotic strings, for example.) A location of the cosmic string very far from the middle of the Earth-Tabby's star segment (i.e. close to the Earth or to Tabby's star) could change the numbers.
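
Just to make the arithmetic explicit, here is a tiny back-of-envelope sketch of mine reproducing the estimate above; the input numbers are the rough ones quoted in the text, so only the order of magnitude matters.

```python
# Rough numbers from the text: image shifted by ~0.1 light second, source at 1480
# light years.  The needed deficit angle is of order (shift)/(distance).
c = 3.0e8                                   # m/s
shift = 0.1 * c                             # ~0.1 light second, in metres
distance = 1480 * 3.156e7 * c               # 1480 light years, in metres

delta = shift / distance
print(f"deficit angle ~ {delta:.1e} rad")   # ~2e-12, within an order of magnitude of 10^-11
```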

How should the intensity be affected? The cosmic string basically adds a "strip" of the stellar disk. As the cosmic string is moving across the star, the added area basically behaves as the thickness of the disk \(y(x)\) at a given \(x\) of the star, so the graph of the flux as a function of time \(P(t)\) should basically be a positive semicircle.



To the left and to the right of the blue semicircle, the graph should continue as a constant along the horizontal axis. The cosmic string may hypothetically vibrate and add lots of complexity but I think that this simple graph should be the "zeroth approximation" of the graph for the flux from Tabby's star as a function of time – assuming that a straight cosmic string is moving uniformly and simply crossing the star. Clearly, we need Kepler's observation to start near the center of the picture – the maximum of the flux when Tabby's star was maximally widened.
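
For readers who want to try fitting this shape themselves, here is a minimal Python sketch of the model just described; the function, the parameter names and the initial guesses are my own illustration, and the commented-out lines are only a template since the Kepler light curve itself is not included here.

```python
import numpy as np
from scipy.optimize import curve_fit

def semicircle_flux(t, baseline, amplitude, t0, halfwidth):
    """Constant baseline flux plus a semicircular (semi-elliptical) bump while
    a straight cosmic string crosses the stellar disk, centered at time t0."""
    x = (t - t0) / halfwidth
    bump = np.sqrt(np.clip(1.0 - x**2, 0.0, None))   # zero outside the transit
    return baseline + amplitude * bump

# t_days, flux, flux_err = ...   # hypothetical arrays holding the KIC 8462852 light curve
# popt, pcov = curve_fit(semicircle_flux, t_days, flux, sigma=flux_err,
#                        p0=[1.0, 0.02, 500.0, 800.0])   # rough initial guesses
```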

The August 2016 paper contains Figure 3:



You see that they tried to describe the observations of the flux in terms of some piecewise linear function or something of the sort. Can you try to fit the observations with my semicircle graph instead? Is a semicircle (well, a semiellipse because the stretching and units are different for the two axes) continuing with the constant function afterwards a good enough fit? It's after midnight now and I want to sleep. But I hope that when I wake up, a TRF commenter will clarify all the missing details of this stupidity and the Nobel committee won't wake me up before 7 am.

Thank you very much and good night.



Update, Wednesday, 8 am

Mathematica is efficient so within minutes, I created this best fit using the semicircle function (the Mathematica source has fewer than 20 commands):



It looks very good to me assuming that the datapoints (the ellipses in my picture) are 1-sigma intervals.

BTW when I was sleeping, I realized that the gravitational lensing by a less exotic object than a cosmic string could possibly do a similarly good job. Well, I don't know the functions for the total flux from an object deformed by a localized gravitational lensing source so the cosmic string is the easiest case for me to calculate.

by Luboš Motl (noreply@blogger.com) at August 10, 2016 06:42 AM

The n-Category Cafe

In Praise of the Gershgorin Disc Theorem

I’m revising the notes for the introductory linear algebra class that I teach, and wondering whether I can find a way to fit in the wonderful but curiously unpromoted Gershgorin disc theorem.

The Gershgorin disc theorem is an elementary result that allows you to make very fast deductions about the locations of eigenvalues. For instance, it lets you look at the matrix

\[ \begin{pmatrix} 3 & i & 1 \\ -1 & 4+5i & 2 \\ 2 & 1 & -1 \end{pmatrix} \]

and see, with only the most trivial mental arithmetic, that the real parts of its eigenvalues must all lie between \(-4\) and \(7\) and the imaginary parts must lie between \(-3\) and \(8\).

I wasn’t taught this theorem as an undergraduate, and ever since I learned it a few years ago, have wondered why not. I feel ever so slightly resentful about it. The theorem is so useful, and the proof is a pushover. Was it just me? Did you get taught the Gershgorin disc theorem as an undergraduate?

Here’s the statement:

Theorem (Gershgorin) Let \(A = (a_{ij})\) be a square complex matrix. Then every eigenvalue of \(A\) lies in one of the Gershgorin discs

\[ \{ z \in \mathbb{C} \colon |z - a_{ii}| \leq r_i \} \]

where \(r_i = \sum_{j \neq i} |a_{ij}|\).

For example, if

\[ A = \begin{pmatrix} 3 & i & 1 \\ -1 & 4+5i & 2 \\ 2 & 1 & -1 \end{pmatrix} \]

(as above) then the three Gershgorin discs have:

  • centre \(3\) and radius \(|i| + |1| = 2\),
  • centre \(4 + 5i\) and radius \(|-1| + |2| = 3\),
  • centre \(-1\) and radius \(|2| + |1| = 3\).

Diagram of three Gershgorin discs

Gershgorin’s theorem says that every eigenvalue lies in the union of these three discs. My statement about real and imaginary parts follows immediately.
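
If you want to see the theorem in action, here is a small NumPy check of my own (not part of the original post) for the example matrix above; it builds the three discs and confirms that every eigenvalue returned by the numerical solver lands in at least one of them.

```python
import numpy as np

A = np.array([[3, 1j, 1],
              [-1, 4 + 5j, 2],
              [2, 1, -1]], dtype=complex)

centres = np.diag(A)
radii = np.abs(A).sum(axis=1) - np.abs(centres)   # r_i = sum over j != i of |a_ij|

for lam in np.linalg.eigvals(A):
    inside = np.abs(lam - centres) <= radii       # which discs contain this eigenvalue?
    print(np.round(lam, 3), "lies in disc(s)", np.where(inside)[0])
```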

Even the proof is pathetically simple. Let \(\lambda\) be an eigenvalue of \(A\). Choose a \(\lambda\)-eigenvector \(x\), and choose \(i\) so that \(|x_i|\) is maximized. Taking the \(i\)th coordinate of the equation \(Ax = \lambda x\) gives

\[ (\lambda - a_{ii})\, x_i = \sum_{j \neq i} a_{ij} x_j. \]

Now take the modulus of each side:

\[ |\lambda - a_{ii}|\, |x_i| = \Big| \sum_{j \neq i} a_{ij} x_j \Big| \leq \sum_{j \neq i} |a_{ij}|\, |x_j| \leq \Big( \sum_{j \neq i} |a_{ij}| \Big) |x_i| = r_i |x_i| \]

where to get the inequalities, we used the triangle inequality and then the maximal property of \(|x_i|\). Cancelling \(|x_i|\) gives \(|\lambda - a_{ii}| \leq r_i\). And that’s it!

The theorem is often stated with a supplementary part that gives further information about the location of the eigenvalues: if the union of \(k\) of the discs forms a connected component of the union of all of them, then exactly \(k\) eigenvalues lie within it. In the example shown, this tells us that there’s exactly one eigenvalue in the blue disc at the top right and exactly two eigenvalues in the union of the red and green discs. (But the theorem says nothing about where those two eigenvalues are within that union.) That’s harder to prove, so I can understand why it wouldn’t be taught in a first course.

But the main part is entirely elementary in both its statement and its proof, as well as being immediately useful. As far as that main part is concerned, I’m curious to know: when did you first meet Gershgorin’s disc theorem?

by leinster (Tom.Leinster@ed.ac.uk) at August 10, 2016 01:55 AM

August 09, 2016

Lubos Motl - string vacua and pheno

IceCube rules out the sterile neutrino model for LSND
All babies are being killed and embryos are being aborted these days.

ATLAS and CMS at the LHC have basically ruled out all theories predicting new particle physics phenomena that would have been visible in the first 10/fb of the \(\sqrt{s}=13\TeV\) data.

Meanwhile, South Dakota-based LUX has improved the limits on the WIMP dark matter cross section by a factor of four: dark matter, if it exists, is harder to detect directly than previously thought. (Dark matter may also be composed of LIGO-style black holes in which case we will hopefully not play with it here on Earth.) I must mention that almost simultaneously, a China-based experiment, PandaX-II, has basically matched the results of LUX. See a comparison of the two charts.

Lots of kind hypothetical new particles and processes were killed or postponed. What happened to the evil ones? They were killed or postponed, too. This also applies to sterile neutrinos, a particular family of beasts that are totally plausible but not beloved by people like me.



IceCube is based on the 86-string theory.

At the South Pole, the IceCube Neutrino Observatory was designed to detect \({\rm TeV}\)-scale high-energy cosmic muon neutrinos. Those get converted to muons, to which IceCube is particularly sensitive.




In 1996, the LSND collaboration in Los Alamos announced the evidence for some kind of novel neutrino oscillations, \(\bar\nu_\mu\to \bar\nu_e\), that differed from the more established solar and atmospheric neutrino oscillations.

The LSND anomaly was controversial, a point often repeated by my Rutgers graduate instructor of practical particle physics, Glennys Farrar. In 1998, I was working on a term paper on neutrinos and this "duty" approximately doubled my knowledge of neutrino physics.




While the oscillation between the muon and electron flavors was originally proposed by LSND, physicists quickly realized that the wavelengths of the oscillations needed to explain the LSND observation disagree with the parameters (squared mass differences) obtained from the other, better-known oscillations.

The most popular attitude among the pro-LSND phenomenologists was to postulate another flavor of the neutrinos but one that has no associated charged lepton – because we haven't seen a fourth charged lepton species – the sterile neutrino. The word "sterile" means that it is impotent to breed a charged lepton through interactions involving a virtual \(W\)-boson.



IceCube drilling tower and hose reel in 2009.

You may pick the simplest model of this form and claim that the observed oscillation was actually changing muon neutrinos into the new, sterile neutrinos. That gives you a well-defined model that may be tested. And IceCube happens to have tested it. IceCube wasn't designed to test sterile neutrinos – just the ordinary ones in the cosmic rays. But it turns out that due to some luck in the numbers, IceCube is sensitive exactly to the hypothetical sterile neutrinos proposed to explain the LSND anomaly.

Well, in May 2016, IceCube released a preprint
Searches for Sterile Neutrinos with the IceCube Detector
which just appeared in PRL and where they excluded some piece of the parameter space of theories with sterile neutrinos. In particular, the best point fitted to explain the LSND anomaly was ruled out at some 99% confidence level. The predicted disappearance of muon neutrinos could have been seen, but it wasn't.

Because of the publication in PRL, the story was covered in two semi-technical articles published by the Symmetry Magazine and Physics.APS.org. Some paragraphs in these articles go beyond my text. I also liked the factoid that one banana (the most popular radioactive fruit) emits some 10 neutrinos a second.
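
That banana factoid is easy to reproduce with a back-of-envelope calculation; the sketch below is mine, uses standard potassium-40 numbers and a guess of roughly half a gram of potassium per banana, and relies on the fact that every K-40 decay (beta or electron capture) emits one neutrino or antineutrino.

```python
import math

grams_K = 0.45                               # rough potassium content of one banana
atoms_K = grams_K / 39.1 * 6.022e23          # moles of potassium times Avogadro's number
atoms_K40 = atoms_K * 1.17e-4                # natural isotopic abundance of K-40
half_life_s = 1.25e9 * 3.156e7               # 1.25 billion years, in seconds

activity = atoms_K40 * math.log(2) / half_life_s
print(f"K-40 decays (and hence neutrinos) per second: {activity:.0f}")   # roughly 10-15
```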



This negative result doesn't say what the true explanation of the LSND anomaly is. Obviously, one possible simple explanation is that the LSND members were intoxicated by LSD – there's some linguistic evidence supporting this hypothesis. Am I the first one to have noticed? :-) I don't want to claim authoritatively that the result was wrong. LSND made another claim in 2001 and Fermilab's MiniBooNE published a potentially related anomaly a decade later. Add a 2011 neutrino anomaly from reactors and a 1994 anomaly from radioactive sources.

So there is some potential evidence in favor of new neutrino species. On the other hand, there are also lots of experiments with negative results, often almost directly contradicting the "positive" claims above.

In the theoretical picture, I believe that sterile neutrinos are basically unmotivated – both in quantum field theory and string theory – but particles playing their role may appear under other names, e.g. "modulinos", the superpartners of the scalar moduli in string theory, or something else. So while I am basically unexcited by the proposal to trust the sterile neutrinos without further explanations – it is an ugly theory – I am very far from being able to prove a no-go theorem of any sort. They're plausible and their discovery would be a game-changer, of course.

However, at this moment, the "negative" results – exclusions – seem to be the kings of the day.

by Luboš Motl (noreply@blogger.com) at August 09, 2016 08:34 PM

Symmetrybreaking - Fermilab/SLAC

The contents of the universe

How do scientists know what percentages of the universe are made up of dark matter and dark energy?

Cosmologist Risa Wechsler of the Kavli Institute for Particle Astrophysics and Cosmology explains.

Video of d6S4PyJ01IA

Have a burning question about particle physics? Let us know via email or Twitter (using the hashtag #AskSymmetry). We might answer you in a future video!

You can watch a playlist of the #AskSymmetry videos here. You can see Risa Wechsler's answers to readers' questions about dark matter and dark energy on Twitter here.

by Amanda Solliday at August 09, 2016 05:23 PM

Symmetrybreaking - Fermilab/SLAC

Sterile neutrinos in trouble

The IceCube experiment reports ruling out to a high degree of certainty the existence of a theoretical low-mass sterile neutrino.

This week scientists on the world’s largest neutrino experiment, IceCube, dealt a heavy blow to theories predicting a new type of particle—and left a mystery behind.

More than two decades ago, the LSND neutrino experiment at Los Alamos National Laboratory produced a result that challenged what scientists knew about neutrinos. The most popular theory is that the LSND anomaly was caused by the hidden influence of a new type of particle, a sterile neutrino.

A sterile neutrino would not interact with other matter through any of the forces of the Standard Model of particle physics, save perhaps gravity.

With their new result, IceCube scientists are fairly certain the most popular explanation for the anomaly is incorrect. In a paper published in Physical Review Letters, they report that after searching for the predicted form of the stealthy particle, they excluded its existence at approximately the 99 percent confidence level.

“The sterile neutrino would’ve been a profound discovery,” says physicist Ben Jones of the University of Texas, Arlington, who worked on the IceCube analysis. “It would really have been the first particle discovered beyond the Standard Model of particle physics.”

It’s surprising that such a result would come from IceCube. The detector, buried in about a cubic kilometer of Antarctic ice, was constructed to study very different neutrinos: high-energy ones propelled toward Earth by violent events in space. But by an accident of nature, IceCube happens to be in just the right position to study low-mass sterile neutrinos as well.

There are three known types of neutrinos: electron neutrinos, muon neutrinos and tau neutrinos. Scientists have caught all three types, but they have never built a detector that could catch a sterile neutrino.

Neutrinos are shape-shifters; as they travel, they change from one type to another. The likelihood that a neutrino has shifted to a new type at any given point depends on its mass and the distance it has traveled.
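
As a rough illustration of that statement (my own sketch, not from the article), the standard two-flavor vacuum oscillation probability makes the dependence on the mass splitting, distance and energy explicit; the matter effects described in the next paragraphs are not included in this simple formula.

```python
import numpy as np

def p_oscillation(L_km, E_GeV, dm2_eV2, sin2_2theta):
    """Two-flavor vacuum oscillation probability (no matter effects)."""
    return sin2_2theta * np.sin(1.267 * dm2_eV2 * L_km / E_GeV) ** 2

# e.g. a 1 TeV atmospheric muon neutrino crossing the Earth's diameter,
# for an illustrative ~1 eV^2 splitting and a small mixing angle
print(p_oscillation(L_km=12742.0, E_GeV=1000.0, dm2_eV2=1.0, sin2_2theta=0.1))
```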

It also depends on what the neutrino has traveled through. Neutrinos very rarely interact with other matter, so they can travel through the entire Earth without hitting any obstacles. But they are affected by all of the electrons in the Earth’s atoms along the way.

“The Earth acts like an amplifier,” says physicist Carlos Argüelles of MIT, who worked on the IceCube analysis.

Traveling through that density of electrons raises the likelihood that a neutrino will change into the predicted sterile neutrino quite significantly—to almost 100 percent, Argüelles says. At a specific energy, the scientists on IceCube should have noticed a mass disappearance of neutrinos as they shifted identities into particles they could not see.

“The position of the dip [in the number of neutrinos found] depends on the mass of sterile neutrinos,” says theorist Joachim Kopp of the Johannes Gutenberg University Mainz. “If they were heavier, the dip would move to a higher energy, a disadvantage for IceCube. At a lower mass, it would move to a lower energy, at which IceCube cannot see neutrinos anymore. IceCube happens to be in a sweet spot.”

And yet, the scientists found no such dip.

This doesn’t mean they can completely rule out the existence of low-mass sterile neutrinos, Jones says. “But it’s also true to say that the likelihood that a sterile neutrino exists is now the lowest it has ever been before.”

The search for the sterile neutrino continues. Kopp says the planned Short Baseline Neutrino program at Fermilab will be perfectly calibrated to investigate the remaining mass region most likely to hold low-mass sterile neutrinos, if they do exist.

The IceCube analysis was based on data taken over the course of a year starting in 2011. The IceCube experiment has since collected five times as much data, and scientists are already working to update their search.

In the end, if these experiments throw cold water on the low-mass sterile neutrino theory, they will still have another question to answer: If sterile neutrinos did not cause the anomaly at Los Alamos, what did?

by Kathryn Jepsen at August 09, 2016 02:38 PM

Jester - Resonaances

Game of Thrones: 750 GeV edition
The 750 GeV diphoton resonance has made a big impact on theoretical particle physics. The number of papers on the topic is already legendary, and they keep coming at a rate of order 10 per week. Given that the Backović model is falsified, there's no longer a theoretical upper limit.  Does this mean we are not dealing with the classical ambulance chasing scenario? The answer may be known in the next few days.

So who's leading this race?  What kind of question is that, you may shout, of course it's Strumia! And you would be wrong, independently of the metric.  For this contest, I will consider two different metrics: the King Beyond the Wall that counts the number of papers on the topic, and the Iron Throne that counts how many times these papers have been cited.

In the first category,  the contest is much fiercer than one might expect: it takes 8 papers to be the leader, and 7 papers may not be enough to even get on the podium!  Among the 3 authors with 7 papers, the final classification is decided by trial by combat, that is, by the citation count.  The result is (drums):

Citations, tja...   The social dynamics of our community encourage referencing all previous work on the topic, rather than just the relevant ones, which in this particular case triggered a period of inflation. One day soon citation numbers will mean as much as authorship in experimental particle physics. But for now the size of the h-factor is still an important measure of virility for theorists. If the citation count rather than the number of papers is the main criterion, the iron throne is taken by a Targaryen contender (trumpets):

This explains why the resonance is usually denoted by the letter S.

Update 09.08.2016. Now that the 750 GeV excess is officially dead, one can give the final classification. The race for the iron throne was tight till the end, but there could only be one winner:

As you can see, in this race the long-term strategy and persistence proved to be more important than pulling off a few early victories.  In the other category there have also been changes in the final stretch: the winner added 3 papers in the period between the unofficial and official announcements of the demise of the 750 GeV resonance. The final standings are:


Congratulations to all the winners.  To all the rest, I wish you more luck and persistence in the next edition, provided it takes place.


by Jester (noreply@blogger.com) at August 09, 2016 12:56 PM

Clifford V. Johnson - Asymptotia

All Aboard…!

The other day, quite recently, I clicked "place your order" on... a toy New York MTA bus. I can't pretend it was for the youngster of the house, it was for me. No, it is not a mid-life crisis (heh... I'm sure others might differ on this point), and I will happily declare that it is not out of nostalgia for my time in the city, especially back in the 90s.


It's for the book. I've an entire story set on a bus in Manhattan and I neglected to location scout a bus when I was last there. I figured I could work from tourist photos and so forth. Turns out that you don't get many good tourist photos of MTA bus interiors, and not the angles I want. Then I discovered various online bus-loving subcultures that go through all the details of every model of NYC bus, with endless shots of the buses in different parts of the city... but still not many good interiors and no good overheads and so forth. (See Transittalk, for example - I now know way more about buses in New York than I ever thought I'd want to know.) Then I accidentally had an Amazon link show up in my [...] Click to continue reading this post

The post All Aboard…! appeared first on Asymptotia.

by Clifford at August 09, 2016 04:50 AM

August 06, 2016

John Baez - Azimuth

Topological Crystals (Part 3)


[Image: k4_crystal]

Last time I explained how to create the ‘maximal abelian cover’ of a connected graph. Now I’ll say more about a systematic procedure for embedding this into a vector space. That will give us a topological crystal, like the one above.

Some remarkably symmetrical patterns arise this way! For example, starting from this graph:

we get this:

Nature uses this pattern for crystals of graphene.

Starting from this graph:

we get this:

Nature uses this for crystals of diamond! Since the construction depends only on the topology of the graph we start with, we call this embedded copy of its maximal abelian cover a topological crystal.

Today I’ll remind you how this construction works. I’ll also outline a proof that it gives an embedding of the maximal abelian cover if and only if the graph has no bridges: that is, edges that disconnect the graph when removed. I’ll skip all the hard steps of the proof, but they can be found here:

• John Baez, Topological crystals.

The homology of graphs

I’ll start with some standard stuff that’s good to know. Let X be a graph. Remember from last time that we’re working in a setup where every edge e goes from a vertex called its source s(e) to a vertex called its target t(e). We write e: x \to y to indicate that e is going from x to y. You can think of the edge as having an arrow on it, and if you turn the arrow around you get the inverse edge, e^{-1}: y \to x. Also, e^{-1} \ne e.

The group of integral 0-chains on X, C_0(X,\mathbb{Z}), is the free abelian group on the set of vertices of X. The group of integral 1-chains on X, C_1(X,\mathbb{Z}), is the quotient of the free abelian group on the set of edges of X by relations e^{-1} = -e for every edge e. The boundary map is the homomorphism

\partial : C_1(X,\mathbb{Z}) \to C_0(X,\mathbb{Z})

such that

\partial e = t(e) - s(e)

for each edge e, and

Z_1(X,\mathbb{Z}) =  \ker \partial

is the group of integral 1-cycles on X.

Remember, a path in a graph is a sequence of edges, the target of each one being the source of the next. Any path \gamma = e_1 \cdots e_n in X determines an integral 1-chain:

c_\gamma = e_1 + \cdots + e_n

For any path \gamma we have

c_{\gamma^{-1}} = -c_{\gamma},

and if \gamma and \delta are composable then

c_{\gamma \delta} = c_\gamma + c_\delta

Last time I explained what it means for two paths to be ‘homologous’. Here’s the quick way to say it. There’s groupoid called the fundamental groupoid of X, where the objects are the vertices of X and the morphisms are freely generated by the edges except for relations saying that the inverse of e: x \to y really is e^{-1}: y \to x. We can abelianize the fundamental groupoid by imposing relations saying that \gamma \delta = \delta \gamma whenever this equation makes sense. Each path \gamma : x \to y gives a morphism which I’ll call [[\gamma]] : x \to y in the abelianized fundamental groupoid. We say two paths \gamma, \gamma' : x \to y are homologous if [[\gamma]] = [[\gamma']].

Here’s a nice thing:

Lemma A. Let X be a graph. Two paths \gamma, \delta : x \to y in X are homologous if and only if they give the same 1-chain: c_\gamma = c_\delta.

Proof. See the paper. You could say they give ‘homologous’ 1-chains, too, but for graphs that’s the same as being equal.   █

We define vector spaces of 0-chains and 1-chains by

C_0(X,\mathbb{R}) = C_0(X,\mathbb{Z}) \otimes \mathbb{R}, \qquad C_1(X,\mathbb{R}) = C_1(X,\mathbb{Z}) \otimes \mathbb{R},

respectively. We extend the boundary map to a linear map

\partial :  C_1(X,\mathbb{R}) \to C_0(X,\mathbb{R})

We let Z_1(X,\mathbb{R}) be the kernel of this linear map, or equivalently,

Z_1(X,\mathbb{R}) = Z_1(X,\mathbb{Z}) \otimes \mathbb{R}  ,

and we call elements of this vector space 1-cycles. Since Z_1(X,\mathbb{Z}) is a free abelian group, it forms a lattice in the space of 1-cycles. Any edge of X can be seen as a 1-chain, and there is a unique inner product on C_1(X,\mathbb{R}) such that edges form an orthonormal basis (with each edge e^{-1} counting as the negative of e.) There is thus an orthogonal projection

\pi : C_1(X,\mathbb{R}) \to Z_1(X,\mathbb{R}) .

This is the key to building topological crystals!
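
Since everything here is finite-dimensional linear algebra, the projection \pi is easy to play with on a computer. The following is a small sketch of my own (not from the paper): it encodes the boundary map of a concrete little graph as a matrix, builds the orthogonal projection onto its kernel Z_1(X,\mathbb{R}), and projects the 1-chains of two paths; the graph chosen, two vertices joined by three parallel edges, is just an illustrative example.

```python
import numpy as np

# Graph: vertices {0, 1}, three parallel edges 0 -> 1; edges are an orthonormal
# basis of C_1(X,R), vertices a basis of C_0(X,R).
num_vertices = 2
edges = [(0, 1), (0, 1), (0, 1)]              # (source, target) of each edge

# Boundary map: d(e) = t(e) - s(e), as a (vertices x edges) matrix.
D = np.zeros((num_vertices, len(edges)))
for k, (s, t) in enumerate(edges):
    D[t, k] += 1.0
    D[s, k] -= 1.0

# Orthogonal projection onto ker(d) = Z_1(X,R):  pi = I - D^+ D.
pi = np.eye(len(edges)) - np.linalg.pinv(D) @ D

# 1-chains of two paths from vertex 0 to vertex 1, one along edge 0, one along edge 1.
# They are not homologous, and indeed they project to different points.
print(pi @ np.array([1.0, 0.0, 0.0]))         # [ 2/3, -1/3, -1/3 ]
print(pi @ np.array([0.0, 1.0, 0.0]))         # [-1/3,  2/3, -1/3 ]
```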

The embedding of atoms

We now come to the main construction, first introduced by Kotani and Sunada. To build a topological crystal, we start with a connected graph X with a chosen basepoint x_0. We define an atom to be a homology class of paths starting at the basepoint, like

[[\alpha]] : x_0 \to x

Last time I showed that these atoms are the vertices of the maximal abelian cover of X. Now let’s embed these atoms in a vector space!

Definition. Let X be a connected graph with a chosen basepoint. Let A be its set of atoms. Define the map

i : A \to Z_1(X,\mathbb{R})

by

i([[ \alpha ]]) = \pi(c_\alpha) .

That i is well-defined follows from Lemma A. The interesting part is this:

Theorem A. The following are equivalent:

(1) The graph X has no bridges.

(2) The map i : A \to Z_1(X,\mathbb{R}) is one-to-one.

Proof. The map i is one-to-one if and only if for any atoms [[ \alpha ]] and [[ \beta ]], i([[ \alpha ]])  = i([[ \beta ]]) implies [[ \alpha ]]= [[ \beta ]]. Note that \gamma = \beta^{-1} \alpha is a path in X with c_\gamma = c_{\alpha} - c_\beta, so

\pi(c_\gamma) = \pi(c_{\alpha} - c_\beta) =   i([[ \alpha ]]) - i([[ \beta ]])

Since \pi(c_\gamma) vanishes if and only if c_\gamma is orthogonal to every 1-cycle, we have

c_{\gamma} \textrm{ is orthogonal to every 1-cycle}   \; \iff \;   i([[ \alpha ]])  = i([[ \beta ]])

On the other hand, Lemma A says

c_\gamma = 0 \; \iff \; [[ \alpha ]]= [[ \beta ]].

Thus, to prove (1)\iff(2), it suffices to show that X has no bridges if and only if every 1-chain c_\gamma orthogonal to every 1-cycle has c_\gamma =0. This is Lemma D below.   █

The following lemmas are the key to the theorem above — and also a deeper one saying that if X has no bridges, we can extend i : A \to Z_1(X,\mathbb{R}) to an embedding of the whole maximal abelian cover of X.

For now, we just need to show that any nonzero 1-chain coming from a path in a bridgeless graph has nonzero inner product with some 1-cycle. The following lemmas, inspired by an idea of Ilya Bogdanov, yield an algorithm for actually constructing such a 1-cycle. This 1-cycle also has other desirable properties, which will come in handy later.

To state these, let a simple path be one in which each vertex appears at most once. Let a simple loop be a loop \gamma : x \to x in which each vertex except x appears at most once, while x appears exactly twice, as the starting point and ending point. Let the support of a 1-chain c, denoted \mathrm{supp}(c), be the set of edges e such that \langle c, e\rangle> 0. This excludes edges with \langle c, e \rangle= 0 , but also those with \langle c , e \rangle < 0, which are inverses of edges in the support. Note that

c = \sum_{e \in \mathrm{supp}(c)} \langle c, e \rangle \, e  .

Thus, \mathrm{supp}(c) is the smallest set of edges such that c can be written as a positive linear combination of edges in this set.

Okay, here are the lemmas!

Lemma B. Let X be any graph and let c be an integral 1-cycle on X. Then for some n we can write

c = c_{\sigma_1} + \cdots +  c_{\sigma_n}

where \sigma_i are simple loops with \mathrm{supp}(c_{\sigma_i}) \subseteq \mathrm{supp}(c).

Proof. See the paper. The proof is an algorithm that builds a simple loop \sigma_1 with \mathrm{supp}(c_{\sigma_1}) \subseteq \mathrm{supp}(c). We subtract this from c, and if the result isn’t zero we repeat the algorithm, continuing to subtract off 1-cycles c_{\sigma_i} until there’s nothing left.   █

Lemma C. Let \gamma: x \to y be a path in a graph X. Then for some n \ge 0 we can write

c_\gamma = c_\delta + c_{\sigma_1} + \cdots +  c_{\sigma_n}

where \delta: x \to y is a simple path and \sigma_i are simple loops with \mathrm{supp}(c_\delta), \mathrm{supp}(c_{\sigma_i}) \subseteq \mathrm{supp}(c_\gamma).

Proof. This relies on the previous lemma, and the proof is similar — but when we can’t subtract off any more c_{\sigma_i}’s we show what’s left is c_\delta for a simple path \delta: x \to y.   █

Lemma D. Let X be a graph. Then the following are equivalent:

(1) X has no bridges.

(2) For any path \gamma in X, if c_\gamma is orthogonal to every 1-cycle then c_\gamma = 0.

Proof. It’s easy to show a bridge e gives a nonzero 1-chain c_e that’s orthogonal to all 1-cycles, so the hard part is showing that for a bridgeless graph, if c_\gamma is orthogonal to every 1-cycle then c_\gamma = 0. The idea is to start with a path for which c_\gamma \ne 0. We hit this path with Lemma C, which lets us replace \gamma by a simple path \delta. The point is that a simple path is a lot easier to deal with than a general path: a general path could wind around crazily, passing over every edge of our graph multiple times.

Then, assuming X has no bridges, we use Ilya Bogdanov’s idea to build a 1-cycle that’s not orthogonal to c_\delta. The basic idea is to take the path \delta : x \to y and write it out as \delta = e_1 \cdots e_n. Since the last edge e_n is not a bridge, there must be a path from y back to x that does not use the edge e_n or its inverse. Combining this path with \delta we can construct a loop, which gives a cycle having nonzero inner product with c_\delta and thus with c_\gamma.

I’m deliberately glossing over some difficulties that can arise, so see the paper for details!   █

Embedding the whole crystal

Okay: so far, we’ve taken a connected bridgeless graph X and embedded its atoms into the space of 1-cycles via a map

i : A \to Z_1(X,\mathbb{R})  .

These atoms are the vertices of the maximal abelian cover \overline{X}. Now we’ll extend i to an embedding of the whole graph \overline{X} — or to be precise, its geometric realization |\overline{X}|. Remember, for us a graph is an abstract combinatorial gadget; its geometric realization is a topological space where the edges become closed intervals.

The idea is that just as i maps each atom to a point in the vector space Z_1(X,\mathbb{R}), j maps each edge of |\overline{X}| to a straight line segment between such points. These line segments serve as the ‘bonds’ of a topological crystal. The only challenge is to show that these bonds do not cross each other.

Theorem B. If X is a connected graph with basepoint, the map i : A \to Z_1(X,\mathbb{R}) extends to a continuous map

j : |\overline{X}| \to Z_1(X,\mathbb{R})

sending each edge of |\overline{X}| to a straight line segment in Z_1(X,\mathbb{R}). If X has no bridges, then j is one-to-one.

Proof. The first part is easy; the second part takes real work! The problem is to show the edges don’t cross. Greg Egan and I couldn’t do it using just Lemma D above. However, there’s a nice argument that goes back and uses Lemma C — read the paper for details.

As usual, history is different than what you read in math papers: David Speyer gave us a nice proof of Lemma D, and that was good enough to prove that atoms are mapped into the space of 1-cycles in a one-to-one way, but we only came up with Lemma C after weeks of struggling to prove the edges don’t cross.   █

Connections to tropical geometry

Tropical geometry sets up a nice analogy between Riemann surfaces and graphs. The Abel–Jacobi map embeds any Riemann surface \Sigma in its Jacobian, which is the torus H_1(\Sigma,\mathbb{R})/H_1(\Sigma,\mathbb{Z}). We can similarly define the Jacobian of a graph X to be H_1(X,\mathbb{R})/H_1(X,\mathbb{Z}). Theorem B yields a way to embed a graph, or more precisely its geometric realization |X|, into its Jacobian. This is the analogue, for graphs, of the Abel–Jacobi map.

After I put this paper on the arXiv, I got an email from Matt Baker saying that he had already proved Theorem A — or to be precise, something that’s clearly equivalent. It’s Theorem 1.8 here:

• Matthew Baker and Serguei Norine, Riemann–Roch and Abel–Jacobi theory on a finite graph.

This says that the vertices of a bridgeless graph X are embedded in its Jacobian by means of the graph-theoretic analogue of the Abel–Jacobi map.

What I really want to know is whether someone’s written up a proof that this map embeds the whole graph, not just its vertices, into its Jacobian in a one-to-one way. That would imply Theorem B. For more on this, try my conversation with David Speyer.

Anyway, there’s a nice connection between topological crystallography and tropical geometry, and not enough communication between the two communities. Once I figure out what the tropical folks have proved, I will revise my paper to take that into account.

Next time I’ll talk about more examples of topological crystals!


by John Baez at August 06, 2016 10:12 AM

Marco Frasca - The Gauge Connection

In the aftermath of ICHEP 2016

ICHEP2016

ATLAS and CMS nuked our illusions about that bump. More than 500 papers were written on it and some of them went through Physical Review Letters. Now, we are contemplating the ruins of that house of cards. This says a lot about the situation in hep these days. It should be emphasized that people at CERN warned that the data were not enough to draw a conclusion, and if they fix the discovery threshold at 5\sigma, a reason must exist. But careless acts are common today if you are a theorist and no input from experiment has been coming for a long time.

It should be said that the LHC confirming the Standard Model and nothing else was always one of the possibilities. We should hope that a larger accelerator can be built after the LHC is decommissioned, as there is a long way to the Planck energy that we do not yet know how to probe.

What remains? I think there is a lot yet. My analysis of the Higgs sector is still there to be checked, as I will explain in a moment, but this is just another way to treat the equations of the Standard Model, not something beyond it. Besides, by the end of the year they will reach 30\ fb^{-1}, almost tripling the current integrated luminosity, and something interesting could still pop out. There are a lot of years of results ahead and there is no need to despair. Just wait. This is one of the most important activities of a theorist. Impatience does not work in physics, and especially not in hep.

About the signal strength, things seem still too far from being settled. I hope to see better figures by the end of the year. ATLAS is off the mark, going well beyond unity for WW, as happened before. CMS claimed 0.3\pm 0.5 for the WW decay, worsening their excellent Run I measurement of 0.72^{+0.20}_{-0.18}. CMS agrees fairly well with my computations but I should warn that the error bar was already too large and now it is even worse. Remember that the signal strength is the ratio of the measured cross section to the one obtained from the Standard Model. The fact that it is smaller does not necessarily mean that we are beyond the Standard Model, but rather that we are solving the Higgs sector in a different way than standard perturbation theory. This solution entails higher excitations of the Higgs field, but they are strongly suppressed and very difficult to observe now. The only telltale mark could be the signal strength of the observed Higgs particle. Finally, the ZZ channel is significantly less sensitive and its error bars are so large that one can still accommodate whatever one likes. The overproduction seen by ATLAS is just a fluctuation that will go away in the future.

The final sentence to this post is what we have largely heard in these days: Standard Model rules.


Filed under: Particle Physics, Physics Tagged: 750 GeV, ATLAS, CERN, CMS, Higgs particle, ICHEP 2016, LHC

by mfrasca at August 06, 2016 09:31 AM

August 05, 2016

Matt Strassler - Of Particular Significance

The 2016 Data Kills The Two-Photon Bump

Results for the bump seen in December have been updated, and indeed, with the new 2016 data — four times as much as was obtained in 2015 — neither ATLAS nor CMS [the two general purpose detectors at the Large Hadron Collider] sees an excess where the bump appeared in 2015. Not even a hint, as we already learned inadvertently from CMS yesterday.

All indications so far are that the bump was a garden-variety statistical fluke, probably (my personal guess! there’s no evidence!) enhanced slightly by minor imperfections in the 2015 measurements. Should we be surprised? No. If you look back at the history of the 1970s and 1980s, or at the recent past, you’ll see that it’s quite common for hints — even strong hints — of new phenomena to disappear with more data. This is especially true for hints based on small amounts of data (and there were not many two photon events in the bump — just a couple of dozen).  There’s a reason why particle physicists have very high standards for statistical significance before they believe they’ve seen something real.  (Many other fields, notably medical research, have much lower standards.  Think about that for a while.)  History has useful lessons, if you’re willing to learn them.

Back in December 2011, a lot of physicists were persuaded that the data shown by ATLAS and CMS was convincing evidence that the Higgs particle had been discovered. It turned out the data was indeed showing the first hint of the Higgs. But their confidence in what the data was telling them at the time — what was called “firm evidence” by some — was dead wrong. I took a lot of flack for viewing that evidence as a 50-50 proposition (70-30 by March 2012, after more evidence was presented). Yet the December 2015 (March 2016) evidence for the bump at 750 GeV was comparable to what we had in December 2011 for the Higgs. Where’d it go?  Clearly such a level of evidence is not so firm as people claimed. I, at least, would not have been surprised if that original Higgs hint had vanished, just as I am not surprised now… though disappointed of course.

Was this all much ado about nothing? I don’t think so. There’s a reason to have fire drills, to run live-fire exercises, to test out emergency management procedures. A lot of new ideas, both in terms of new theories of nature and new approaches to making experimental measurements, were generated by thinking about this bump in the night. The hope for a quick 2016 discovery may be gone, but what we learned will stick around, and make us better at what we do.


Filed under: History of Science, LHC News Tagged: #LHC #Higgs #ATLAS #CMS #diphoton

by Matt Strassler at August 05, 2016 05:18 PM

Quantum Diaries

Many small steps but no giant leap forward

Giant leaps are rare in physics. Research is instead punctuated by countless small advances, and that is what will emerge from the International Conference on High Energy Physics (ICHEP), which opened yesterday in Chicago. Many had hoped for a giant leap, but today the CMS and ATLAS experiments both reported that the promising effect observed at 750 GeV in the 2015 data has vanished. It is true that this kind of thing is not rare in particle physics, given the statistical nature of all the phenomena we observe.

CMS-2016-750GeV

On each figure, the vertical axis gives the number of events found containing a pair of photons whose combined mass appears on the horizontal axis in units of GeV. (Left) The black points represent the experimental data collected and analysed so far by the CMS Collaboration, namely 12.9 fb-1, compared with the 2.7 fb-1 available in 2015. The vertical bar attached to each point represents the experimental error margin. Taking these errors into account, the data are compatible with what is expected from the background, as indicated by the green curve. (Right) A new particle would have shown up as a peak like the red one if it had the properties suggested by the 2015 data at 750 GeV. Clearly, the experimental data (black points) simply reproduce the background. One must therefore conclude that what was glimpsed in the 2015 data was nothing but a statistical fluctuation.

But in this case it was particularly convincing, because the same effect had been observed independently by two teams that work without consulting each other and use different analysis methods and detectors. This triggered a great deal of activity and optimism: to date, 540 scientific papers have been written about this hypothetical particle that never existed, so profound would the implications of its existence have been.

But theorists were not the only ones to harbour such hopes. Many experimentalists believed in it and bet on its existence, one of my colleagues going as far as staking a case of excellent wine.

Even if many physicists were hopeful, or even convinced, of the presence of a new particle, the two experiments nevertheless exercised the greatest caution. In the absence of irrefutable proof of its presence, neither of the two collaborations, ATLAS and CMS, claimed anything. This is characteristic of scientists: one speaks of a discovery only when no doubt remains.

But many physicists, myself included, set aside some of their reservations, not only because the chances that this effect would disappear seemed very slim, but also because it would have been a much bigger discovery than that of the Higgs boson, which generated a great deal of enthusiasm. Everyone suspects that other particles must exist beyond those already known and described by the Standard Model of particle physics. But despite years spent searching for them, we still have nothing to show for it.

Since the Large Hadron Collider (LHC) at CERN began operating at higher energy, having gone from 8 TeV to 13 TeV in 2015, the chances of a major discovery are stronger than ever. More energy gives access to territories never explored before.

So far, the 2015 data have not revealed any new particles or phenomena, but the amount of data collected was really limited. This year, by contrast, the LHC is outdoing itself, having already produced five times more data than last year. The hope is that these data will eventually reveal the first signs of something revolutionary. Dozens of new analyses based on these recent data will be presented at the ICHEP conference until August 10, and I will report on them shortly.

It took 48 years to discover the Higgs boson after it was theoretically postulated, and back then we knew what we were looking for. Today we do not even know what we are looking for, so it could take a little longer. There is something else out there, everyone knows it. But when we will find it is another story.

Pauline Gagnon

To learn more about particle physics and what is at stake at the LHC, see my book: « Qu'est-ce que le boson de Higgs mange en hiver et autres détails essentiels ».

To be notified when new blogs come out, follow me on Twitter: @GagnonPauline, or by e-mail by adding your name to this distribution list.

by Pauline Gagnon at August 05, 2016 03:50 PM

Quantum Diaries

Many small steps but no giant leap

Giant leaps are rare in physics. Scientific research is rather a long process made of countless small steps and this is what will be presented throughout the week at the International Conference on High Energy Physics (ICHEP) in Chicago. While many hoped for a major breakthrough, today, both the CMS and ATLAS experiments reported that the promising effect observed at 750 GeV in last year’s data has vanished. True, this is not uncommon in particle physics given the statistical nature of all phenomena we observe.

CMS-2016-750GeV

On both plots, the vertical axis gives the number of events found containing a pair of photons with a combined mass given in units of GeV on the horizontal axis. (Left plot) The black dots represent all data collected in 2016 and analysed so far by the CMS Collaboration, namely 12.9 fb-1, compared to the 2.7 fb-1 available in 2015. The vertical line associated with each data point represents the experimental error margin. Taking these errors into account, the data are compatible with what is expected from the various backgrounds, as indicated by the green curve. (Right plot) A new particle would have manifested itself as a peak as big as the red one shown here if it had the same features as what had been seen in the 2015 data around 750 GeV. Clearly, the black data points pretty much reproduce the background. Hence, we must conclude that what was seen in the 2015 data was simply due to a statistical fluctuation.
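
A quick back-of-the-envelope calculation (my own, in Python, not taken from the post) shows why the 2016 dataset is so much more decisive than the 2015 one: the expected number of events for any process of cross section sigma scales with the integrated luminosity L as N = sigma x L, so relative Poisson uncertainties shrink like 1/sqrt(N).

import math

L_2015, L_2016 = 2.7, 12.9        # integrated luminosities in fb-1, as quoted above
gain = L_2016 / L_2015
print(round(gain, 1))             # ~4.8: nearly five times more data
print(round(math.sqrt(gain), 1))  # ~2.2: relative statistical errors roughly halved

# A genuine 750 GeV signal would therefore have grown almost fivefold while the
# background fluctuations became relatively smaller, which is why its absence in
# the 2016 data is taken as conclusive.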

What was particularly compelling in this case was that the very same effect had been observed by two independent teams, who worked without consulting each other and used different detectors and analysis methods. This triggered frantic activity and much expectation: to date, 540 theoretical papers have been written on a hypothetical particle that never was, so profound would the implications of such a new particle's existence have been.

But theorists were not the only ones to be so hopeful. Many experimentalists had taken strong bets, one of my colleagues going as far as putting a case of very expensive wine on it.

If many physicists were hopeful or even convinced of the presence of a new particle, both experiments nevertheless had been very cautious. Without unambiguous signs of its presence, neither the ATLAS nor the CMS Collaborations had made claims. This is very typical of scientists: one should not claim anything until it has been established beyond any conceivable doubt.

But many theorists and experimentalists, including myself, threw some of our caution to the wind, not only because the chances that it would vanish seemed so small but also because it would have been a much bigger discovery than that of the Higgs boson, which generated much enthusiasm. As it stands, we all suspect that there are other particles out there, beyond the known ones described by the Standard Model of particle physics. But despite years spent looking for them, we still have nothing to chew on. In 2015, the Large Hadron Collider at CERN raised its operating energy from 8 TeV to the current 13 TeV, making the odds for a discovery stronger than ever: higher energy means access to territories never explored before.

So far, the 2015 data have not revealed any new particles or phenomena, but the amount of data collected was really small. This year, by contrast, the LHC is outperforming itself, having already delivered five times more data than last year. The hope is that these data will eventually reveal the first signs of something revolutionary. Dozens of new analyses based on the recent data will be presented at the ICHEP conference until August 10, and I'll report on some of them later on.

It took 48 years to discover the Higgs boson after it was first theoretically predicted, and back then we knew what to expect. This time, we don't even know what we are looking for, so it could still take a little longer. There is more to be found, we all know it. But when we will find it is another story.

Pauline Gagnon

To find out more about particle physics, check out my book « Who Cares about Particle Physics: making sense of the Higgs boson, the Large Hadron Collider and CERN ».

To be notified of new blogs, follow me on Twitter : @GagnonPauline or sign up on this distribution list

 

by Pauline Gagnon at August 05, 2016 03:49 PM

Jester - Resonaances

After the hangover
The loss of the 750 GeV diphoton resonance is a big blow to the particle physics community. We are currently going through the 5 stages of grief, everyone at their own pace, as can be seen e.g. in this comments section. Nevertheless, it may already be a good moment to revisit the story one last time, so as  to understand what went wrong.

In the recent years, physics beyond the Standard Model has seen 2 other flops of comparable impact: the faster-than-light neutrinos in OPERA, and the CMB tensor fluctuations in BICEP.  Much as the diphoton signal, both of the above triggered a binge of theoretical explanations, followed by a massive hangover. There was one big difference, however: the OPERA and BICEP signals were due to embarrassing errors on the experiments' side. This doesn't seem to be the case for the diphoton bump at the LHC. Some may wonder whether the Standard Model background may have been slightly underestimated,  or whether one experiment may have been biased by the result of the other... But, most likely, the 750 GeV bump was just due to a random fluctuation of the background at this particular energy. Regrettably, the resulting mess cannot be blamed on experimentalists, who were in fact downplaying the anomaly in their official communications. This time it's the theorists who  have some explaining to do.

Why did theorists write 500 papers about a statistical fluctuation? One reason is that it didn't look like one at first sight. Back in December 2015, the local significance of the diphoton bump in ATLAS run-2 data was 3.9 sigma, which means the probability of such a fluctuation was about 1 in 10000. Combining the available run-1 and run-2 diphoton data in ATLAS and CMS, the local significance increased to 4.4 sigma. All in all, it was a very unusual excess, a 1-in-100000 occurrence! Of course, this number should be interpreted with care. The point is that the LHC experiments perform a gazillion different measurements, so they are bound to observe seemingly unlikely outcomes in a small fraction of them. This can be partly taken into account by calculating the global significance, which is the probability of finding a background fluctuation of the observed size anywhere in the diphoton spectrum. The global significance of the 750 GeV bump quoted by ATLAS was only about two sigma, a fact strongly emphasized by the collaboration. However, that number can be misleading too. One problem with the global significance is that, unlike the local one, it cannot be easily combined in the presence of separate measurements of the same observable. For the diphoton final state we have ATLAS and CMS measurements in run-1 and run-2, thus 4 independent datasets, and their robust concordance was crucial in creating the excitement. Note also that what is really relevant here is the probability of a fluctuation of a given size in any of the LHC measurements, and that is not captured by the global significance. For these reasons, I find it more transparent to work with the local significance, remembering that it should not be interpreted as the probability that the Standard Model is incorrect. By these standards, a 4.4 sigma fluctuation in a combined ATLAS and CMS dataset is still a very significant effect which deserves special attention. What we learned the hard way is that such large fluctuations do happen at the LHC... This lesson will certainly be taken into account next time we encounter a significant anomaly.
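
To illustrate the local-versus-global distinction concretely, here is a toy Monte Carlo (my own sketch in Python with invented numbers; it is nothing like the experiments' actual statistical analysis): a flat Poisson background in many mass bins, comparing the chance of a 3 sigma excess in one predetermined bin with the chance of such an excess anywhere in the spectrum.

# Toy look-elsewhere illustration: a flat Poisson background of 20 events in each
# of 50 mass bins, with the per-bin significance estimated by a crude Gaussian
# formula. All numbers here are invented purely for illustration.
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_bins, bkg = 100_000, 50, 20.0

counts = rng.poisson(bkg, size=(n_trials, n_bins))
z = (counts - bkg) / np.sqrt(bkg)             # approximate per-bin significance

p_local  = np.mean(z[:, 0] >= 3.0)            # 3 sigma excess in one chosen bin
p_global = np.mean(z.max(axis=1) >= 3.0)      # 3 sigma excess anywhere in the spectrum

print(f"local p-value : {p_local:.1e}")       # of order 1e-3
print(f"global p-value: {p_global:.1e}")      # of order 1e-1, roughly n_bins times larger

The same mechanism, with a much larger effective number of places to look, is what turned a roughly 4 sigma local excess into a global significance of only about two sigma.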

Another reason why the 750 GeV bump was exciting is that the measurement is rather straightforward.  Indeed, at the LHC we often see anomalies in complicated final states or poorly controlled differential distributions, and we treat those with much skepticism.  But a resonance in the diphoton spectrum is almost the simplest and cleanest observable that one can imagine (only a dilepton or 4-lepton resonance would be cleaner). We already successfully discovered one particle this way - that's how the Higgs boson first showed up in 2011. Thus, we have good reasons to believe that the collaborations control this measurement very well.

Finally, the diphoton bump was so attractive because theoretical explanations were plausible. It was trivial to write down a model fitting the data, there was no need to stretch or fine-tune the parameters, and it was quite natural that the particle first showed up as a diphoton resonance and not in other final states. This is in stark contrast to other recent anomalies, which typically require a great deal of gymnastics to fit into a consistent picture. The only thing to give you pause was the tension with the LHC run-1 diphoton data, but even that became mild after the Moriond update this year.

So we got a huge signal of a new particle in a clean channel with plausible theoretical models to explain it... that was really bad luck. My conclusion may not be shared by everyone, but I don't think the theory community committed major missteps in this case. Given that for 30 years we have been looking for a clue about the fundamental theory beyond the Standard Model, our reaction was not disproportionate once a seemingly reliable one had arrived. Excitement is an inherent part of physics research. And so is disappointment, apparently.

There remains the question of whether we really needed 500 papers... Well, of course not. Still, many of them fill an important gap, many are an interesting read, and I personally learned a lot of exciting physics from them. Actually, I suspect that the fraction of useless papers among the 500 is lower than for regular daily topics. On a more sociological note, these papers exacerbate the problem with our citation culture (mass-grave references), which undermines the citation count as a means of evaluating research impact. But that is a wider issue which I don't know how to address at the moment.

Time to move on. The ICHEP conference is coming next week, with loads of brand new results based on up to 16 inverse femtobarns of 13 TeV LHC data.  Although the rumor is that there is no new exciting  anomaly at this point, it will be interesting to see how much room is left for new physics. The hope lingers on, at least until the end of this year.

In the comments section you're welcome to lash out at the entire BSM community - we made a wrong call so we deserve it. Please, however, avoid personal attacks (unless on me). Alternatively, you can also give us a hug :)

by Jester (noreply@blogger.com) at August 05, 2016 03:14 PM

Symmetrybreaking - Fermilab/SLAC

LHC bump fades with more data

Possible signs of new particle seem to have washed out in an influx of new data.

A curious anomaly seen by two Large Hadron Collider experiments is now looking like a statistical fluctuation.

The anomaly—an unanticipated excess of photon pairs with a combined mass of 750 billion electronvolts—was first reported by both the ATLAS and CMS experiments in December 2015.

Such a bump in the data could indicate the existence of a new particle. The Higgs boson, for instance, materialized in the LHC data as an excess of photon pairs with a combined mass of 125 billion electronvolts. However, with only a handful of data points, the two experiments could not discern whether the bump was a real signal or merely the result of normal statistical variance.

After quintupling their 13-TeV dataset between April and July this year, both experiments report that the bump has greatly diminished and, in some analyses, completely disappeared.

What made this particular bump interesting is that both experiments saw the same anomaly in completely independent data sets, says Wade Fisher, a physicist at Michigan State University.

“It’s like finding your car parked next to an identical copy,” he says. “That’s a very rare experience, but it doesn’t mean that you’ve discovered something new about the world. You’d have to keep track of every time it happened and compare what you observe to what you’d expect to see if your observation means anything.”

Theorists predicted that a particle of that mass could have been a heavier cousin of the Higgs boson or a graviton, the theoretical particle responsible for gravity. While data from more than 1000 trillion collisions have smoothed out this bump, scientists on the ATLAS experiment still cannot completely rule out its existence.

“There’s up fluxes and down fluxes in statistics,” Fisher says. “Up fluctuations can sometimes look like the early signs of a new particle, and down fluctuations can sometimes make the signatures of a particle disappear. We’ll need the full 2016 data set to be more confident about what we’re seeing.”

Scientists on both experiments are currently scrutinizing the huge influx of data to both better understand predicted processes and look for new physics and phenomena.

"New physics can manifest itself in many different ways—we learn more if it surprises us rather than coming in one of the many many theories we're already probing," says Steve Nahn, a CMS researcher working at Fermilab. "So far the canonical Standard Model is holding up quite well, and we haven't seen any surprises, but there's much more data coming from the LHC, so there's much more territory to explore."

by Sarah Charley at August 05, 2016 02:46 PM

Jon Butterworth - Life and Physics

For the record…

Recently I was involved in discussions that led to a meeting of climate-change skeptics moving from UCL premises. As a result, a couple of articles appeared about me on a climate website and on the right-wing “Breitbart” site.

In the more measured article by Christopher Monckton (who has more reason to be ticked off, since he was one of the meeting organisers) I’m called a “useless bureaucrat” and “forgettable”. The other article uses “cockwomble” (a favourite insult of mine), “bullying” and “climate prat”. Obviously it is difficult to respond to such disarming eloquence, but I do want to set the record straight on one thing.

My involvement was by means of an email to an honorary – ie unpaid – research fellow in the department of Physics & Astronomy, of which I am currently head, based on concerns expressed to me by UCL colleagues on the nature of the meeting and the way it was being advertised. The letter I sent is quoted partially, and reproducing the full text might give a better idea of the interaction. It’s below. The portions quoted by Monckton et al are in italics.


Dear Prof [redacted]

Although you have been an honorary research associate with the department since before I became head, I don’t believe we have ever met, which is a shame. I understand you have made contributions to outreach at the observatory on occasion, for which thanks.

It has been brought to my attention that you have booked a room at UCL for an external conference in September for a rather fringe group discussing aspects of climate science. This is apparently an area well beyond your expertise as an astronomer, and this group is also one which many scientists at UCL have had negative interactions. The publicity gives the impression that you are a professor of astronomy at UCL, which is inaccurate, and some of the publicity could be interpreted as implying that UCL, and the department of Physics & Astronomy in particular, are hosting the event, rather than it being an external event booked by you in a personal capacity.

If this event were to go ahead at UCL, it would generate a great deal of strong feeling, indeed it already has, as members of the UCL community are expressing concern to me that we are giving a platform to speakers who deny anthropogenic climate change while flying in the face of accepted scientific methods. I am sure you have no desire to bring UCL into disrepute, or to cause dissension in the UCL community, and I would encourage you to think about moving the event to a different venue, not on UCL premises. If it is going to proceed as planned I must insist that the website and other publicity is amended to make clear that the event has no connection to UCL or this department in particular, and that you are not a UCL Professor.

Best wishes,
Jon


 

After receiving this, the person concerned expressed frustration at the impression given by the meeting publicity, and decided to cancel the room booking. I understand the meeting was successfully rebooked at Conway Hall, which seems like a decent solution to me. As you can see in the full letter, the meeting wasn’t in any sense banned.

Free speech and debate are good things, though the quality that I’ve experienced during this episode hasn’t much impressed me. As far as I’m concerned, people are welcome to have meetings like this to their hearts’ content, so long as they don’t appropriate spurious endorsement from the place in which they have booked a room.

I have since met the honorary research fellow concerned, and while mildly embarrassed by the whole episode (which is why I haven’t mentioned his name here, though it’s easy to find it if you really care), he did not seem at all upset or intimidated, and we had a friendly and interesting discussion about his scientific work and other matters.

Bit of a storm in a teacup, really, I think, though I’m sure James Delingpole was glad of the opportunity to deploy the Eng. Lit. skills of which he seems so proud.

 

 


Filed under: Climate Change, Politics, Science, Science Policy Tagged: UCL

by Jon Butterworth at August 05, 2016 06:15 AM

August 04, 2016

Matt Strassler - Of Particular Significance

A Flash in the Pan Flickers Out

Back in the California Gold Rush, many people panning for gold saw a yellow glint at the bottom of their pans, and thought themselves lucky.  But more often than not, it was pyrite — iron sulfide — fool’s gold…

Back in December 2015, a bunch of particle physicists saw a bump on a plot.  The plot showed the numbers of events with two photons (particles of light) as a function of the “invariant mass” of the photon pair.  (To be precise, they saw a big bump on one ATLAS plot, and a bunch of small bumps in similar plots by CMS and ATLAS [the two general purpose experiments at the Large Hadron Collider].)  What was that bump?  Was it a sign of a new particle?
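
For readers who want the quantity on the horizontal axis spelled out, here is a small sketch (my addition, in Python, with illustrative numbers): for two massless photons with energies E1 and E2 separated by an opening angle theta, the invariant mass is m = sqrt(2 E1 E2 (1 - cos theta)), so a genuine particle decaying to two photons piles up events at a single value of m, producing a bump.

import math

def diphoton_mass(e1, e2, theta):
    """Invariant mass of two massless photons (same units as the energies)."""
    return math.sqrt(2.0 * e1 * e2 * (1.0 - math.cos(theta)))

# Two back-to-back 375 GeV photons reconstruct to a 750 GeV resonance:
print(diphoton_mass(375.0, 375.0, math.pi))   # 750.0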

A similar bump was the first sign of the Higgs boson, though that was far from clear at the time.  What about this bump?

As I wrote in December,

  “Well, to be honest, probably it’s just that: a bump on a plot. But just in case it’s not…”

and I went on to describe what it might be if the bump were more than just a statistical fluke.  A lot of us — theoretical particle physicists like me — had a lot of fun, and learned a lot of physics, by considering what that bump might mean if it were a sign of something real.  (In fact I’ll be giving a talk here at CERN next week entitled “Lessons from a Flash in the Pan,” describing what I learned, or remembered, along the way.)

But updated results from CMS, based on a large amount of new data taken in 2016, have been seen.   (Perhaps these have leaked out early; they were supposed to be presented tomorrow along with those from ATLAS.)  They apparently show that where the bump was before, they now see nothing.  In fact there’s a small dip in the data there.

So — it seems that what we saw in those December plots was a fluke.  It happens.  I’m certainly disappointed, but hardly surprised.  Funny things happen with small amounts of data.

At the ICHEP 2016 conference, which started today, official presentation of the updated ATLAS and CMS two-photon results will come on Friday, but I think we all know the score.  So instead our focus will be on  the many other results (dozens and dozens, I hear) that the experiments will be showing us for the first time.  Already we had a small blizzard of them today.  I’m excited to see what they have to show us … the Standard Model, and naturalness, remain on trial.


Filed under: LHC News, Particle Physics Tagged: atlas, cms, diphoton, LHC

by Matt Strassler at August 04, 2016 09:45 PM

Symmetrybreaking - Fermilab/SLAC

Higgs boson resurfaces in LHC data

The Higgs appeared in the second run of the LHC about twice as fast as it did in the first.

The Higgs boson is peeking out of the new data collected during the second run of the Large Hadron Collider, scientists reported today at the International Conference on High Energy Physics in Chicago.

The Higgs boson is a short-lived particle that transforms into a cascade of more stable particles immediately after it is produced. Because scientists cannot measure the Higgs directly, they look instead at the more stable particles it leaves behind.

In 2012, during the LHC’s first run, scientists discovered the Higgs boson based on its decay into three different types of particles: photons, W bosons and Z bosons. In the data from the second run, which began in 2015, scientists have reconfirmed its decay into photons and Z bosons.

The Standard Model predicts that the Higgs boson can transform into at least eight different pairs of particles. The most common transformation should be the Higgs transforming into bottom quarks; this has not yet been observed.

Particle collisions during the LHC’s second run have been 1.6 times more energetic than those produced during the first run. The higher-energy collisions produce Higgs bosons more than twice as fast. The rediscovery analyses looked at around 1500 trillion collisions recorded during 2015 and 2016 and saw the Higgs re-emerge exactly where expected, with unmistakable significance.
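
As a rough numerical check of the "more than twice as fast" statement (my own arithmetic, not from the article, using approximate gluon-fusion cross sections for a 125 GeV Higgs that should be read only as ballpark figures):

# Approximate gluon-fusion Higgs production cross sections in picobarns; these are
# ballpark values quoted only to illustrate the scaling, not official numbers.
sigma_ggF_8TeV  = 21.0    # approx., 8 TeV
sigma_ggF_13TeV = 49.0    # approx., 13 TeV

print(13 / 8)                              # 1.625: "1.6 times more energetic"
print(sigma_ggF_13TeV / sigma_ggF_8TeV)    # ~2.3: more than twice the production rate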

But finding the Higgs boson is only the beginning. Now that scientists have re-established its existence, they want to use the new data to study its properties more in depth.

“The particle itself is just one of the little elements,” says Ivan Pogrebnyak, a graduate student at Michigan State University who worked on the ATLAS two-photon rediscovery analyses. “The ultimate goal is to understand the laws of nature—not just discover a particle but measure its properties and how it fits inside the whole scheme.”

The Higgs boson helps explain the masses of certain elementary particles and is a cornerstone of the Standard Model of particle physics, the best model scientists have to explain the fundamental interactions of the subatomic universe.

In addition to measuring its predicted properties, physicists want to push beyond the Standard Model and see if the Higgs holds any clues to new physics.

“We can use the Higgs as a key to look beyond the Standard Model,” says Andrea Massironi, a postdoctoral researcher at Northeastern University who presented some of the CMS experiment’s latest Higgs results today at ICHEP. “We’ve only begun to study the Higgs, and don’t know what secrets it might hold.”

by Sarah Charley at August 04, 2016 07:00 PM

August 02, 2016

Symmetrybreaking - Fermilab/SLAC

Q&A: The future of CERN

CERN’s Director General is enthusiastic about the progress and prospects of the LHC research program, but it’s not the only thing on her plate.

Physicist Fabiola Gianotti started her mandate as the fifteenth Director General of CERN on January 1. She recently answered a few questions for Symmetry about what her biggest priorities and challenges are moving forward.

LHC Run 2 is in full swing. What does the scientific program promise for this year?

FG:

This year is going to be very important. Last year we made a big step in energy, a factor of 1.7, and it will be a long time before another such step is made.

This year is going to be the year of ‘luminosity production,’ as we call it. The goal is to deliver to the experiments at least a factor of five to six times more data than last year. With these data the experiments will be able to perform more precise measurements of Standard Model processes and particles, including the Higgs boson. We need to know this very special and relatively newly discovered particle with a much higher precision than today, also because it is a door into new physics. The experiments will also measure rare processes and look for new particles with increased opportunities to discover new physics, if nature is kind enough to have put new physics at the energy scale explored by LHC.

What are the main priorities and objectives in your plan of work for CERN during your mandate?

FG:

The priorities are to maintain and expand CERN’s excellence in all its components. 

Research in fundamental physics is our first mission. We are operating and upgrading the most powerful accelerator in the world. We have a compelling scientific diversity program. And we are preparing for the future. 

Our field requires very complex, high-tech instruments, so another essential component of our activity is to develop the needed cutting-edge technologies, which are transferred to society. They cover a variety of domains, including superconducting magnets, vacuum, cryogenics, electronics, computing. Another important element is training young people—not only tomorrow’s scientists but also school kids and school teachers.  Last but not least, peaceful collaboration, i.e.  maintaining CERN as a place where people from around the world can work together in the name of science.

And the main challenges for the next five years?

FG:

Every day brings new challenges. In my opinion, the most important challenge for our community in the years to come is to prepare the future for CERN and the discipline in Europe, within the worldwide context. Between 2019 and 2020 we will update the European Strategy for Particle Physics and define the roadmap of the field for future years. It will be a very important and intense time for the community. We will have to build on what we’ve learned since the previous ESPP in 2013. A big role will be played by the LHC results—what we find at the LHC and what we don’t find.

Is CERN already making plans for a successor to the LHC?

FG:

We are already preparing for the future. It’s not too early; first discussions of the LHC took place at the beginning of the ’80s, and the LHC started operation in 2010. This project required 25 to 30 years from first ideas through first operation. 

Preparation for the future proceeds along three lines: a vigorous accelerator R&D program; design studies of future high-energy colliders, including CLIC and FCC; and exploration of additional opportunities offered by the CERN accelerator complex—complementary to high-energy colliders. Indeed, the outstanding questions in today’s particle physics are so numerous and difficult that a single approach is not sufficient.

How do you see CERN and its role in the global picture of big science today?

FG:

The open questions in fundamental physics cover a broad range of issues, from dark matter to dark energy, from the matter-antimatter asymmetry to the flavor problem, etc. There is no single project, no single smoking gun that allows us to answer them all. The only way to address them successfully is to deploy the full set of approaches that the field has developed. These include high-energy accelerators, underground detectors looking for dark matter or proton decay, cosmic surveys, neutrino experiments, etc. 

No single country, no single region can build and run all these projects. That’s why particle and astroparticle physics are becoming more and more global. We have to share the facilities in order to optimize the human, technological and financial resources. We have to collaborate, still maintaining a little bit of competition, which is always very healthy and very stimulating.

August 02, 2016 01:00 PM

August 01, 2016

Quantum Diaries

Particles over Politics: The More the Merrier

Last month, Romania became the 22nd Member State of the European Organisation for Nuclear Research, or CERN, home to the world’s most powerful atom-smasher. But the hundred Romanian scientists working on experiments there have already operated under a co-operation agreement with CERN for the last 25 years. So why has Romania decided to commit the money and resources needed to become a full member? Is this just bureaucratic reshuffling or the road to a more fruitful collaboration between scientists?

Image: CERN

On 18th July, Romania became a full member state of CERN. In doing so, it joined twenty-one other countries, which over the years have created one of the largest scientific collaborations in the world. Last year, the two largest experimental groups at CERN, ATLAS and CMS, broke the world record for the total number of authors on a research article (detailing the mass of the Higgs boson).

To meet the requirements for becoming a member, Romania has committed $11 million (USD) towards the CERN budget this year, three times as much as neighbouring member Bulgaria and more than seven times as much as Serbia, which holds Associate Membership and aims to follow in Romania’s footsteps. In return, Romania now holds a place on CERN’s council, having a say in all the major research decisions of the ground-breaking organization where the forces of nature are probed, antimatter is created and Higgs bosons are discovered.

Romania’s accession to the CERN convention marks another milestone in the organisation’s sixty-year history of international participation. In that time it has built bridges between members of nations whose diplomatic and international relations were less than favourable, uniting researchers from across the globe in the goal of understanding the universe at its most fundamental level.

CERN was founded in 1954 with the acceptance of its convention by twelve European nations in a joint effort for nuclear research, in a year when “nuclear research” also included the largest thermonuclear detonation in US history and the USSR deliberately testing the effects of nuclear radiation from a bomb on 45,000 of its own soldiers. Despite the Cold War climate and the widespread use of nuclear physics as a means of creating apocalyptic weapons, CERN’s founding convention, drawn up alongside UNESCO and adhered to by member states today, states:

“The Organization shall provide for collaboration among European States in nuclear research of a pure scientific and fundamental character…The Organization shall have no concern with work for military requirements,”

The provisional Conseil Européen pour la Recherche Nucléaire (European Council for Nuclear Research) was dissolved, and its legacy was carried on by the laboratories built and operated under the convention it had laid down and the name it bore: CERN. Several years later, in 1959, the British director of the Proton Synchrotron division at CERN, John Adams, received a gift of vodka from Soviet scientist Vladimir Nikitin of the Dubna accelerator, just north of Moscow and at the time the most powerful accelerator in the world.

The vodka was to be opened in the event the Proton Synchrotron accelerator at CERN was successfully operated at an energy greater than Dubna’s maximum capacity: 10 GeV. It more than doubled the feat, reaching 24 GeV, and with the vodka dutifully polished off, the bottle was stuffed with a photo of the proton beam readout and sent back to Moscow.

John Adams, holding the empty vodka bottle in celebration of the Proton Synchrotron’s successful start (Image: CERN-HI-5901881-1 CERN Document Server)

Soviet scientists contributed more than vodka to the international effort in particle physics. Nikitin would later go on to work alongside other Soviet and US scientists in a joint effort at Fermilab in 1972. Over the next few decades, ten more member states would join CERN permanently, including Israel, its first non-European member. On top of this, researchers at CERN now come from four associate member nations and four observer states (India, Japan, the USA and Russia), and the organisation holds a score of cooperation agreements with other non-member states.

While certainly the largest collaboration of its kind, CERN is no longer unique in being a collaborative effort in particle physics. Quantum Diaries is host to the blogs of many experiments, each of which comprises a highly diverse and internationally sourced research cohort. The synchrotron lab for the Middle East, SESAME, expected to begin operation next year, will involve both the Palestinian and Israeli authorities, with hopes it “will foster dialogue and better understanding between scientists of all ages with diverse cultural, political and religious backgrounds”. It was co-ordinated, in part, by CERN.

I have avoided speaking personally so far, but one needs to address the elephant in the room. As a British scientist, I speak from a nation where the dust is only just settling on the decision to cut ties with the European Union, against the wishes of the vast majority of researchers. Although our membership of CERN will remain secure, other projects and our relationships with European collaborators face uncertainty.

While I certainly won’t presume to give my view on the matter of a democratic vote, it is encouraging to look back at a fruitful history of unity between nations and to celebrate Romania’s new Member State status as a sign that the particle physics community is still, largely, an integrated and international one. In the short year that I have been at University College London, I have not yet attended any international conferences, yet I have had the pleasure of meeting and learning from visiting researchers from all over the globe. As this year’s International Conference on High Energy Physics kicks off this week (chock-full of 5-σ BSM discovery announcements, no doubt*), there is something comforting in knowing I will be sharing my excitement, frustration and surprise with like-minded graduate students from the world over.

Many thanks to Ashwin Chopra and Daniel Quill of University College London for their corrections and contributions; all mistakes are unreservedly my own.
*This is, obviously, playful satire, except in the case of an actual announcement, in which case it is prophetic foresight.

by Ricky Nathvani at August 01, 2016 03:36 PM
