Particle Physics Planet
August 01, 2015
Peter Coles  In the Dark
Today I’ve been mainly taking part in the 25th Brighton Pride celebrations. The Parade started out 90 minutes late and on a diverted route because of what appears to have been a hoax bomb (in the words of the police, a “suspect package” – no jokes please) but the atmosphere was incredible. Not only was the parade huge, but the streets were lined with thousands and thousands of people. It was all very friendly so my worries that my fear of crowds would resurface were unfounded.
I walked with the Sussex University student society. Hopefully next year there will be an official staff presence!
The Pride Carnival in Preston Park after the Parade wasn’t so interesting for me so I only stayed a couple of hours before returning to Kemptown for the Village Party, which will go on all night and all day tomorrow. I am just taking a break for a cup of tea and a bite to eat before deciding whether to rejoin the party a bit later. I am, however, a bit old for that sort of thing and may decide to listen to the Proms instead.
Anyway, here are a few pictures of the parade and village party…
astrobites  astro-ph reader's digest
Authors: Victoria Scowcroft, Wendy L. Freedman, Barry F. Madore, Andy Monson, S. E. Persson, Jeff Rich, Mark Seibert, Jane R. Rigby
First Author’s Institution: Observatories of the Carnegie Institution of Washington
Status: Accepted for publication in ApJ
Cepheid variable stars have long been famous for the role they play in the cosmic distance ladder. Intrinsic luminosities are generally difficult to measure, but because the luminosities and periods of variation of Cepheids follow a well-known relationship called the Leavitt Law (LL—you might also see this called the Period-Luminosity or PL relation), we can measure their periods and then calculate their intrinsic luminosities. The LL for a variety of wavelength bands is shown in Figure 1. Once we know their intrinsic luminosities, we can determine their distances. In addition, since Cepheids are supergiant stars, and are therefore very luminous, we can also use them to derive distances to objects much farther than we could with parallax.
Usually when we look at distances—for example, to other galaxies—that we’ve derived with Cepheids, we consider all of the Cepheids in each galaxy in aggregate. Though Cepheid periods and luminosities are quite closely related, there is still some dispersion intrinsic to the LL, so we get more accurate distance measures when we consider what the LL looks like for all of the Cepheids in a galaxy rather than just using distances derived from individual stars.
The authors of today’s paper have used Cepheids in the Small Magellanic Cloud (SMC) in the mid-infrared to derive a distance modulus to the galaxy. Their average distance modulus of 18.96 ± 0.01 (stat) ± 0.03 (sys) mag for the SMC, which corresponds to 62 ± 0.3 kpc, is consistent with previous estimates of the distance. However, they have gone a step further and used the Cepheids not only to determine a mean distance modulus to the SMC, but also to study its structure.
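The quoted distance modulus maps directly onto a physical distance via the standard definition μ = 5 log₁₀(d / 10 pc). A minimal sketch of the conversion (the 18.96 mag input is from the paper; the formula is textbook-standard):

```python
def modulus_to_distance_kpc(mu):
    """Convert a distance modulus mu = 5*log10(d / 10 pc) to a distance in kpc."""
    d_pc = 10 ** ((mu + 5.0) / 5.0)  # distance in parsecs
    return d_pc / 1000.0             # parsecs -> kiloparsecs

# Mean SMC distance modulus reported in the paper:
print(modulus_to_distance_kpc(18.96))  # ~62 kpc, matching the quoted distance
```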
Previous observations of Cepheids in the SMC have indicated that the southwest side of the galaxy was farther away from us than the central or northeast portion. The authors of today’s paper noticed that while the intrinsic dispersion of the LL in the mid-infrared (3.6 µm) for the Large Magellanic Cloud (LMC) and Milky Way is about ±0.10 mag, in the SMC, it is 0.16 mag. They focus in particular on this band because it has the lowest intrinsic dispersion (see Figure 1), which also makes it easier to calculate precise distance moduli. This intrinsic dispersion can be traced back to the width of the instability strip—the part of the stellar color-magnitude diagram where Cepheids and other variable stars live. The instability strip looks narrower in the mid-infrared, causing smaller dispersion in the LL at those wavelengths as well.
They attribute the higher dispersion in the LL for the SMC to the geometry of the galaxy—meaning that some of the Cepheids are closer to us than others—this is possible given the SMC’s known large line-of-sight depth. Since extinction can contribute to this spread in the LL as well, they also correct each Cepheid for extinction. However, even after dereddening, they find that there is still a higher than expected dispersion in the Leavitt Law that cannot be attributed to extinction. With this additional dispersion attributed to the geometry of the SMC, they find that the southwest corner of the SMC is about 20 kpc farther away than the northeast corner. Figure 2 shows the individual distance moduli of the SMC Cepheids.
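As a back-of-the-envelope check (my own arithmetic, not the paper's analysis): subtracting the ~0.10 mag intrinsic dispersion in quadrature from the 0.16 mag observed leaves roughly 0.12 mag of geometric scatter, and since dμ = (5/ln 10)(Δd/d), that corresponds to a one-sigma depth of a few kpc at the SMC's distance; the full front-to-back extent between extreme corners can be considerably larger.

```python
import math

# Inputs taken from the text above:
sigma_obs = 0.16   # observed LL dispersion in the SMC at 3.6 micron [mag]
sigma_int = 0.10   # intrinsic LL dispersion (LMC / Milky Way) [mag]
d_smc_kpc = 62.0   # mean SMC distance [kpc]

# Geometric scatter, assuming the contributions add in quadrature:
sigma_geo = math.sqrt(sigma_obs**2 - sigma_int**2)       # ~0.12 mag

# d(mu) = (5 / ln 10) * (dd / d)  =>  dd = d * (ln 10 / 5) * d(mu)
depth_1sigma = d_smc_kpc * math.log(10) / 5 * sigma_geo  # ~3.6 kpc (one sigma)
print(sigma_geo, depth_1sigma)
```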
Finally, the authors also compare the distance moduli that they measured for their Cepheids with theoretical models that seek to describe the mechanism that produced the irregular shape of the SMC. In particular, they look at a model of the mechanism that produces the “wing” of the SMC—a portion of the galaxy that is being drawn towards the LMC. They find that the locations of the young stars (like Cepheids) in the galaxy are in good agreement with where they are predicted to be by the models. They suggest that in the future, such measurements of Cepheids can also be used to inform simulations of galaxy dynamics, contributing to our understanding of the dynamical histories of these galaxies. While we will probably have to wait a bit longer for reliable individual Cepheid distance moduli, it’s exciting to think that we could eventually use these stars not only to figure out how far away another galaxy is, but also to understand its structure.
July 31, 2015
Tommaso Dorigo  Scientificblogging
For the time being, I can offer a couple of very inspiring pictures. CMS recorded a spectacular event featuring two extremely high-energy jets in the first 40 inverse picobarns of data collected and reconstructed by the experiment with all detector components properly working.
Peter Coles  In the Dark
Some time ago I wrote a post on this blog about the 1st Ashes Test between England and Australia at Cardiff which resulted in an England victory. In that piece I celebrated the team spirit of England’s cricketers and some memorable performances with both bat and ball. I also suggested that England had a realistic prospect of regaining the Ashes.
More recently, however, in the light of Australia’s comprehensive victory in the 2nd Ashes Test at Lord’s during which the England bowlers were ineffectual, their batsmen inept and the team spirit nonexistent, I accepted that my earlier post was misleading and that England actually had absolutely no chance of regaining the Ashes.
Today England breezed to an emphatic 8wicket victory over Australia in the 3rd Ashes Test at Edgbaston in the Midlands. The manner of this victory, inside three days, and bouncing back from the crushing defeat in the previous Test, makes it clear that my previous post was wrong and England’s bowlers are far from ineffectual, their batsmen highly capable, and the team not at all lacking in team spirit.
Moreover, with England now leading 2–1 with two matches to play, I now accept that England do indeed have a realistic prospect of regaining the Ashes.
I apologize for my earlier apology and for any inconvenience caused.
I hope this clarifies the situation.
P.S. Geoffrey Boycott is 74 not out.
Sean Carroll  Preposterous Universe
I had some spare minutes the other day, and had been thinking about the fate of spacetime in a quantum universe, so I took to the internet to let my feelings be heard. Only a few minutes, though, so I took advantage of Twitter rather than do a proper blog post. But through the magic of Storify, I can turn the former into the latter!
Obviously the infamous 140-character limit of Twitter doesn’t allow the level of precision and subtlety one would always like to achieve when talking about difficult topics. But restrictions lead to creativity, and the results can actually be a bit more accessible than unfettered prose might have been.
Anyway, spacetime isn’t fundamental, it’s just a useful approximation in certain regimes. Someday we hope to know what it’s an approximation to.
Emily Lakdawalla  The Planetary Society Blog
Ben Still  Neutrino Blog
Now You See Me, Now You Don’t
The possible configurations of the theta-plus: either a pentaquark particle where all quarks are bound together (left), or a molecule made from a bound Baryon and Meson (right).
A LEGO diagram showing the creation of a Pc pentaquark in the decay of a Lambda baryon. 
Ben Still  Neutrino Blog
The pentaquark might be a whole new type of particle containing 4 quarks and 1 antiquark within itself. 
Or the pentaquark might be a bound state of a Baryon and Meson. 
The Exotic Baryon Antidecuplet: an extension of quark symmetries showing the lightest possible pentaquark states. Here I show the states as Baryon-Meson molecules.
The Exotic Baryon Antidecuplet: an extension of quark symmetries showing the lightest possible pentaquark states. Here I show the pentaquark states as 4-quark, 1-antiquark bound states.
Detection of particles used to reconstruct the pentaquark state. Borrowed from here. 
In 2003 the LEPS experiment in Japan published a paper [2] which suggested evidence that a particle with a mass the same as the Θ⁺ (within errors) had been seen within its detectors. Over the next year this claim was followed by some nine other experiments all saying that they too had seen an excess in their data around the predicted Θ⁺ mass. The evidence for this pentaquark seemed compelling, but there were some problems and questions surrounding the data. In some cases the number of background events was underestimated, which exaggerated any excesses there might have been. Some experiments chose specific techniques to enhance data around the predicted mass of the Θ⁺. When considering the results of all ten experiments, the range of masses determined by each, although similar, varied far more than one would expect from the given theory. It was obvious that further experiments were needed, with much more data, if the existence of the Θ⁺ were to be confirmed or refuted.
Emily Lakdawalla  The Planetary Society Blog
ATLAS Experiment
In 1996, Morocco officially became a member of the ATLAS collaboration. The eagerly awaited day had finally arrived, and the first Arab and African country signed a collaborative agreement with CERN to participate in the great scientific adventure of particle physics. This achievement was possible thanks to the efforts of a small group of physicists who recognised the potential benefits of collaborating with large accelerator centres.
Motivated to improve science, technology and innovation, the Moroccan High Energy Physics Cluster (RUPHE) was founded in 1996 to enhance the scientific training of young people and advance pure scientific knowledge. RUPHE includes ATLAS collaborators from University of Hassan II Casablanca, Mohammed V University (Rabat), Mohamed I University (Oujda), Cadi Ayyad University (Marrakech) and the National Energy Centre of Science and Nuclear Techniques (CNESTEN) in Rabat.
Morocco’s participation in ATLAS started even before its membership was approved in 1996. In 1992, Moroccan researchers contributed to the construction of a neutron irradiation station. After that, they continued boosting their contribution by playing a key role in the construction, testing and commissioning of the ATLAS Electromagnetic Calorimeter (ECAL) presampler during the 1998–2003 period. Since then, Moroccan researchers have been working to strengthen the long-standing cooperation with CERN. Currently, there are 27 faculty members and research assistants, including 9 active PhD students.
The research interests focus on the following topics: the search for new physics phenomena in association with top physics, Higgs physics and B physics, including significant participation in detector performance studies. During the LHC’s Run 1, Moroccan researchers contributed to the success of the ATLAS experiment. This success has motivated our researchers to look forward to a very successful Run 2.
In addition, we are involved in the distributed computing effort. During ATLAS data taking periods, user support becomes a challenging task. With many scientists analysing data, user support is becoming crucial to ensure that everyone is able to analyse the collision data distributed among hundreds of computing sites worldwide. The Distributed Analysis Support Team (DAST) is a team of expert shifters who provide the first direct support for all help requests on distributed data analysis. Alden Stradling (University of Texas, Arlington) and I (Mohammed V University) coordinate the overall activity of this team.
In terms of building local expertise, several schools and workshops have been organized. Outstanding worldwide experts have participated, giving lectures on particle physics, nuclear physics, applied physics and grid computing. Most participants are master’s degree or PhD students already working in these fields, or in related fields and seeking a global dimension to their training. Such schools include: “L’Ecole de Physique Avancée au Maghreb 2011” in Taza, “tutorial training on statistics tools for data analysis” and the “Master of High-Energy Physics and Scientific Computing” in Casablanca. High school students from Oujda participated in the International Masterclasses in March 2015, which aimed to encourage them to pursue science and gave them an introduction to what we do in ATLAS and why it is interesting and exciting.
After the success of the ATLAS Liquid Argon Week organized in Marrakech in 2009, the ATLAS Overview Week for 2013 was hosted in Morocco. It was our great pleasure to invite our ATLAS colleagues to this important event in Marrakech. There were many interesting talks and discussions at the event. We took a brief time out to watch the announcement of the 2013 Nobel Prize in Physics. To our delight, it was awarded to François Englert and Peter Higgs for their pioneering work on the electroweak-symmetry-breaking mechanism in 1964. It was a very exciting moment for me.
Lubos Motl  string vacua and pheno
The ATLAS bump at \(2\TeV\) or so – possibly a new gauge boson – is probably the most attractive excess the LHC teams are seeing in their data. However, Pauline Gagnon of ATLAS has ironically pointed out another pair of cute excesses seen by her competitors at CMS:
The bumpy road to discoveries. Here are the two graphs:
Both graphs show the invariant masses of dijets – a dijet spectrum.
The left graph is brand new, coming from the 2015 \(\sqrt{s}=13\TeV\) data. Only 37 inverse picobarns of data have been collected but that was enough to see a very high-energy event, a dijet with the invariant mass of \(m_{\rm inv} = 5\TeV\).
What makes it even more interesting is that the right graph shows some \(\sqrt{s}=8\TeV\) data from 2012. After analyzing 19.7 inverse femtobarns of data, they saw a bump at \(m_{\rm inv} = 5.15\TeV\) which may be the sign of the same particle (or family of new particles) as the new 2015 bump.
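For two (approximately massless) jets, the invariant mass follows from the transverse momenta and the angular separation of the jets. A minimal sketch of the standard formula (the kinematics below are illustrative round numbers, not the actual CMS event):

```python
import math

def dijet_mass(pt1, eta1, phi1, pt2, eta2, phi2):
    """Invariant mass of two massless jets:
    m^2 = 2 * pT1 * pT2 * (cosh(eta1 - eta2) - cos(phi1 - phi2))."""
    return math.sqrt(2 * pt1 * pt2 * (math.cosh(eta1 - eta2) - math.cos(phi1 - phi2)))

# Two back-to-back central jets with pT = 2.5 TeV each:
print(dijet_mass(2.5, 0.0, 0.0, 2.5, 0.0, math.pi))  # -> 5.0 (TeV)
```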
I hadn't known about this fun \(5.15\TeV\) event. It only appears in 8 or so uncited experimental articles – and the first article was written by E. Quark and R.S. Graviton, pretty famous experts. ;) If you search for 5.15 in the paper by Fitzpatrick, Kaplan, Randall, and Wang, you will discover \(5.15\TeV\) in a numerator of a numerical formula for some \(S\) which is hopefully just a coincidence. ;)
One must always realize that small statistical flukes may occur by chance. But one must be doubly careful if they occur at the highest-energy bin – more generally, in the last point(s) in the graph – because those tend to be the most unreliable and noisy ones. In his popular book, Feynman recalled how he and Gell-Mann knew that the experimenters were wrong when they thought that they had falsified the Feynman–Gell-Mann V−A (vector minus axial vector) theory of the weak force:
I went out and found the original article on the experiment that said the neutron-proton coupling is T [tensor], and I was shocked by something. I remembered reading that article once before (back in the days when I read every article in the Physical Review—it was small enough). And I remembered, when I saw this article again, looking at that curve and thinking, “That doesn’t prove anything!”
You see, it depended on one or two points at the very edge of the range of the data, and there’s a principle that a point on the edge of the range of the data—the last point—isn’t very good, because if it was, they’d have another point further along. And I had realized that the whole idea that neutron-proton coupling is T was based on the last point, which wasn’t very good, and therefore it’s not proved. I remember noticing that!
And when I became interested in beta decay, directly, I read all these reports by the “beta-decay experts,” which said it’s T. I never looked at the original data; I only read those reports, like a dope. Had I been a good physicist, when I thought of the original idea back at the Rochester Conference I would have immediately looked up “how strong do we know it’s T?”—that would have been the sensible thing to do. I would have recognized right away that I had already noticed it wasn’t satisfactorily proved.
Since then I never pay any attention to anything by "experts." I calculate everything myself. When people said the quark theory was pretty good, I got two Ph.D.s, Finn Ravndal and Mark Kislinger, to go through the whole works with me, just so I could check that the thing was really giving results that fit fairly well, and that it was a significantly good theory. I'll never make that mistake again, reading the experts' opinions. Of course, you only live one life, and you make all your mistakes, and learn what not to do, and that's the end of you.
Amen to that. But in different situations, the bumps may be more real than in others. ;)
By the way, at least two supersymmetric explanations of LHC excesses were posted to the hep-ph archive today. Well, one of them is a SUSY explanation and the other is a superstring explanation. ;)
by Luboš Motl (noreply@blogger.com) at July 31, 2015 01:07 PM
Lubos Motl  string vacua and pheno
So far, no black hole has destroyed the Earth.
It's possible that the LHC will discover nothing new, at least for years. But it is in no way inevitable. I would say that it's not even "very likely". We have various theoretical reasons to expect one discovery or another. A theory-independent vague argument is that the electroweak scale has no deep reason to be too special. And every time we added an order of magnitude to the energies, we saw something new.
But in this blog post, I would like to recall some excesses – inconclusive but tantalizing upward deviations from the Standard Model predictions – that have been mentioned on this blog. Most of them emerged from ATLAS or CMS analyses at the LHC. Some of them may be confirmed soon.
Please submit your corrections if some of the "hopeful hints" have been killed. And please submit those that I forgot.
The hints below will be approximately sorted from those that I consider most convincing at this moment. The energy at the beginning is the estimated mass of a new particle.
 \(2\TeV\), a new charged \(W'\) boson suggested by an ATLAS \(WZ\) channel, 3.4 sigma locally, 2.5 sigma globally
 \(2.1\TeV\), a new charged \(W_R\) boson suggested by a CMS \(\ell\ell q\bar q\) channel, 2.8 sigma. Might it be the same particle as the one in the previous entry? A possible fast route to left-right-symmetric models and \(SO(10)\) or \(E_6\) grand unification
 \(5\TeV\), a heavy particle decaying to two jets, CMS in both cases, one \(5.15\TeV\) event in 2012 and one \(5\TeV\) in early 2015
 \(79\GeV\), a superpartner-related edge in the dilepton invariant mass, CMS dileptons, 2.6 sigma
 \(90\GeV\), a mass difference, Z-peaked excess, ATLAS, 3 sigma, perhaps a sign of NMSSM or MSSM
 \(650\GeV\), sbottom in R-parity-violating SUSY or leptoquark, CMS in \(\ell q\) decays, 2.4 sigma
 unknown mass, probably due to new Higgses, flavor-violating Higgs decays \(\mu^\pm \tau^\mp\) at CMS, 2.4 sigma (1% of Higgses seem to decay in this weird way)
 \(1.85\TeV\), a triplet or littlest Higgs, CMS' \(WH\) resonance, 2.9 sigma, see also Dorigo's new text
 \(830\GeV\), a sgluon (scalar superpartner of gluon), ATLAS in \(tt\bar t\bar t\) channel, 2.5 sigma
 \(105\GeV\) invariant mass, ATLAS same-sign dimuons \(\mu^\pm \mu^\pm\), several sigma
 unknown mass, an LHCb penguin excess in \(B\)meson decays, 3.7 sigma
 \(105\GeV\), ATLAS' like-sign dimuons, Christmas rumor, 14 events
 unknown mass, an LHCb flavor anomaly
 \(560\GeV\), a CP-odd Higgs boson \(A\), CMS' \(\ell\ell b\bar b\) channel, 2.6–2.9 sigma; see also another pair of CMS excesses at mass \(286\GeV\) with 2.6/1.6 sigma locally/globally and \(662\GeV\) with 2.85/1.9 sigma locally/globally
 \(136.5\GeV\), a CP-even neutral Higgs boson, another one, CMS \(\gamma\gamma\), 2.93 sigma
 \(325\GeV\), another CP-even Higgs boson, CDF's 4 leptons
 \(320\GeV\), another CP-even Higgs boson, perhaps the same as the previous entry, CMS, 2 sigma
 below \(200\GeV\), higgsino, CMS trileptons or tetraleptons, 2.6 sigma; see another higgsino excess text from CMS
 \(1600\)–\(1650\GeV\), effective mass, ATLAS \(\ell b\bar b\,{\rm MET}\)
 unknown mass, CMS' top+higgs+dileptons, 2.5 sigma
 unknown mass, stops or sbottoms or staus, CMS' multileptons with \(\tau\), 2 sigma or so
 unknown mass, ATLAS' 1 lepton and 7 jets, 4 sigma; more multijets and ATLAS' multileptons with more similar links
 \(98\GeV\), another Higgs boson, LEP
 \(300\)–\(600\GeV\), top squark, ATLAS, 2.5 sigma; see also ATLAS' top partners
 \(700\GeV\), a shadron, long-lived particle, CMS' HCSP, 2 sigma
 \(WW\) excess is dead now, I think, much like \(h\to\gamma\gamma\) excess
 unknown mass, multimuon ghost events at CDF, 9 sigma
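Several entries above quote both a local and a global significance; the gap between them is the look-elsewhere effect. A rough sketch of the relation, using one-sided Gaussian p-values and a naive independent-trials approximation (real analyses compute the global value from the search range, so the trial count inferred here is only an estimate):

```python
import math

def sigma_to_p(z):
    """One-sided tail p-value of a z-sigma Gaussian excess."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# Numbers from the first entry in the list: 3.4 sigma local, 2.5 sigma global
p_local = sigma_to_p(3.4)   # ~3.4e-4
p_global = sigma_to_p(2.5)  # ~6.2e-3

# Effective number of independent places the bump could have appeared,
# from p_global = 1 - (1 - p_local)^N:
n_trials = math.log(1 - p_global) / math.log(1 - p_local)
print(p_local, p_global, n_trials)  # n_trials of order 20
```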
The most accurate photographs of the Standard Model's elementary particles provided by CERN so far. The zoo may have to be expanded.
Stay tuned.
by Luboš Motl (noreply@blogger.com) at July 31, 2015 12:43 PM
Peter Coles  In the Dark
Tonight’s a Blue Moon, which happens whenever there are two full moons in a calendar month, although the phrase used to mean the third full moon of a season containing four. A Blue Moon isn’t all that rare an occurrence, actually. In fact there’s one every two or three years on average. But it does at least provide an excuse to post this again…
Incidentally, today marks the halfway mark in my five-year term as Head of the School of Mathematical and Physical Sciences at the University of Sussex. I started on 1st February 2013, so it’s now been exactly two years and six months. It’s all downhill from here!
CERN Bulletin
CERN Bulletin
arXiv blog
Answers on the Web vary from a few thousand meters to 48 kilometers. Now a pair of physicists have carried out an experiment to find out.
July 30, 2015
Christian P. Robert  xi'an's og
As announced at the 60th ISI World Meeting in Rio de Janeiro, my friend, co-author, and former PhD student Judith Rousseau got the first Ethel Newbold Prize! Congrats, Judith! And well-deserved! The prize is awarded by the Bernoulli Society on the following basis:
The Ethel Newbold Prize is to be awarded biannually to an outstanding statistical scientist for a body of work that represents excellence in research in mathematical statistics, and/or excellence in research that links developments in a substantive field to new advances in statistics. In any year in which the award is due, the prize will not be awarded unless the set of all nominations includes candidates from both genders.
and is funded by Wiley. I support very much this (inclusive) approach of “recognizing the importance of women in statistics”, without creating a prize restricted to women nominees (and hence exclusive). Thanks to the members of the Program Committee of the Bernoulli Society for setting that prize and to Nancy Reid in particular.
Ethel Newbold was a British statistician who worked during WWI in the Ministry of Munitions and then became a member of the newly created Medical Research Council, working on medical and industrial studies. She was the first woman to receive the Guy Medal in Silver, in 1928. Just to stress that much remains to be done towards gender balance, the second and last woman to get a Guy Medal in Silver is Sylvia Richardson, in 2009… (In addition, Valerie Isham, Nicky Best, and Fiona Steele got a Guy Medal in Bronze, out of the 71 so far awarded, while no woman ever got a Guy Medal in Gold.) A funny coincidence: Ethel May Newbold was educated at Tunbridge Wells, the place where Bayes was a minister, while Sylvia is now head of the Medical Research Council biostatistics unit in Cambridge.
astrobites  astroph reader's digest
Title: Interloper bias in future large-scale structure surveys
Authors: A. R. Pullen, C. M. Hirata, O. Dore, A. Raccanelli
First Author’s Institution: Department of Physics, Carnegie Mellon University, Pittsburgh, PA
Status: To be submitted to PASJ
We look out into a universe that appears deceivingly twodimensional. Our favorite constellations are often composed of stars that are separated by distances more immense than their proximity to each other suggests. This artificial two-dimensionality of the observed universe has forever been a bane of astronomy, for it takes a lot to squeeze information about the third dimension out of the universe. Deprojecting our 2D sky into a true 3D map by measuring distances to objects is an astronomical enterprise of its own, built up first from inch-long measuring sticks used exclusively for nearby objects, which are replaced by yardsticks as we move further out, to mile markers even further out, and so on. We can use predictably varying stars called classical Cepheids to determine distances up to about 30 Mpc, a little beyond the nearest galaxy cluster, Virgo; Type Ia supernovae, stellar explosions that achieve the same brightness each and every time, no matter when or where they exploded, help us measure distances as much as 30 times further. Each measuring stick in the sequence is calibrated by the sequence of shorter measuring sticks that came before it, a sequence which astronomers have called the “distance ladder.” Thus errors and uncertainties in calibrating one yardstick can propagate up the sequence, much like falling dominoes. We’ve directly measured the distances of only a small fraction of celestial objects; for a vast majority of the objects in the universe, we must turn to our sequence of sticks.
For objects far beyond the gravitational influence of our galactic neighborhood, the measuring stick of choice is the object’s redshift. This is unique to a universe that’s expanding uniformly and homogeneously, causing things further from you to appear to move away from you faster. Much like how the pitch of an emergency siren falls as it moves away from you, the wavelength of the light from an object moving away from you becomes longer and longer, causing it to look redder. The amount an object’s light is “redshifted” depends predictably on the object’s distance—a relation so robust that it has been codified into what’s known as Hubble’s law.
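In the low-redshift limit, Hubble's law is simply d ≈ cz/H₀. A minimal sketch (H₀ = 70 km/s/Mpc is an assumed round value, not a number from the paper):

```python
C_KM_S = 299_792.458  # speed of light [km/s]
H0 = 70.0             # assumed Hubble constant [km/s/Mpc]

def hubble_distance_mpc(z):
    """Low-redshift (z << 1) Hubble-law distance in Mpc."""
    return C_KM_S * z / H0

print(hubble_distance_mpc(0.01))  # ~43 Mpc for a galaxy receding at ~3000 km/s
```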
Hubble’s law has emboldened cosmological cartographers to take up the herculean task of drawing a 3D map of our universe. The feat requires measuring redshifts of a huge sample of galaxies via large spectroscopic surveys. The first such survey, begun in the 1970s, contained a few thousand galaxies. The biggest survey completed to date, the Sloan Digital Sky Survey (SDSS), contains nearly a million galaxies. These maps have revealed that the universe on its largest scales is fascinatingly varied and structured. There are walls of galaxies surrounding vast, empty voids; galaxies are often assembled together to form fractal-like filamentary strands; at nodes where the filaments intersect, one can find the densest and largest clusters of galaxies. The maps also contain clues to the physics and the cosmological parameters that govern the past and future evolution of our universe.
Thus even more ambitious surveys are in the works. Our quest for more galaxies requires us to search for ever fainter galaxies, for which reliable redshifts are difficult to measure. But it’s not impossible. One can look for an easy-to-find, strong spectral feature typical in galaxies and measure how much redder it’s become. It would have been a fairly straightforward task, except for one catch—there’s a handful of strong features that can easily be mistaken for each other. These interloping lines could cause a galaxy to be mistakenly given an incorrect redshift, and thus distance.
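To see how badly a misidentified line skews the inferred redshift, consider a single observed emission line interpreted with the wrong rest wavelength. The lines below (Hα and [O III]) are common strong features I chose for illustration; they are not necessarily the specific pairs the paper analyzes:

```python
# Rest wavelengths in Angstroms (common strong optical lines):
H_ALPHA = 6562.8
O_III   = 5006.8

def inferred_z(lambda_obs, lambda_rest):
    """Redshift implied by an observed line, given an assumed rest wavelength."""
    return lambda_obs / lambda_rest - 1.0

# A galaxy at z = 1 whose H-alpha line we actually detect:
lambda_obs = H_ALPHA * (1.0 + 1.0)  # 13125.6 A

z_correct = inferred_z(lambda_obs, H_ALPHA)  # 1.0
z_wrong   = inferred_z(lambda_obs, O_III)    # ~1.62 if misread as [O III]
print(z_correct, z_wrong)
```

A single wrong identification thus moves the galaxy by a large, systematic amount in the 3D map, which is why even a sub-percent interloper fraction matters.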
The authors of today’s paper thus asked, how much do galaxies with incorrect distances based on a single emission line affect our maps and the physics we infer from them? They looked at how upcoming spectroscopic redshift surveys undertaken with the Prime Focus Spectrograph (PFS) to be installed on the 8.2-meter Subaru Telescope and the Wide-Field InfraRed Survey Telescope (WFIRST) could be affected by interloping galaxies. In particular, the authors studied how the matter power spectrum, an important measure of the amount of mass found at varying cosmological size scales, derived from the two surveys would be affected. They found that if more than 0.2% of the galaxies were interlopers with incorrect distances, they can increase the total error by 10%. If more than 0.5% of the galaxies were interlopers, they can drastically skew the matter power spectrum at small scales. Such effects have consequences for many other cosmological studies, including those concerning dark energy and modified gravity.
Can the interlopers be weeded out somehow? The authors investigate two methods to identify interlopers. One could repeat the emission line analysis but for pairs of strong lines, since each of the strong line pairs that PFS and WFIRST could measure has a unique wavelength separation. Alternatively, one could independently measure the redshift of each galaxy based on the galaxy’s color, derived from a separate photometric survey. The authors tested these two interloper removal methods on a mock sample of galaxies and found that finding strong line pairs alone can help remove most of the interlopers in the PFS survey, while a combination of finding pairs and calculating photometric redshifts must be done together to remove interlopers in the WFIRST survey.
To see a video of the first author A. Pullen explaining this paper, follow this link.
ZapperZ  Physics and Physicists
This report briefly describes the achievement of getting to 13 TeV collision energy at the LHC.
At 10.40 a.m. on 3 June, the LHC operators declared "stable beams" for the first time at a beam energy of 6.5 TeV. It was the signal for the LHC experiments to start taking physics data for Run 2, this time at a collision energy of 13 TeV – nearly double the 7 TeV with which Run 1 began in March 2010.
So far, they haven't been swallowed by a catastrophic black hole that is supposed to destroy our world. Darn it! What's next? Sighting of supersymmetry particles? You must be joking!
Zz.
CERN Bulletin
Machine development weeks are carefully planned in the LHC operation schedule to optimise and further study the performance of the machine. The first machine development session of Run 2 ended on Saturday, 25 July. Despite various hiccoughs, it allowed the operators to make great strides towards improving the long-term performance of the LHC.
The main goals of this first machine development (MD) week were to determine the minimum beam-spot size at the interaction points given existing optics and collimation constraints; to test new beam instrumentation; to evaluate the effectiveness of performing part of the beam-squeezing process during the energy ramp; and to explore the limits on the number of protons per bunch arising from the electromagnetic interactions with the accelerator environment and the other beam.
Unfortunately, a series of events reduced the machine availability for studies to about 50%. The most critical issue was the recurrent trip of a sextupolar corrector circuit – a circuit with 154 small sextupole magnets used to correct errors in the main dipoles – in arc 78 at high energy. This problem resulted in the cancellation of the last test runs at high energy and the MD session stopping some 8 hours earlier than planned. However, the time with beam was effective in terms of the results achieved. A large set of instruments were developed or tested, including high-resolution beam position monitors (DOROS), robust beam current monitors and two systems to examine the frequency content of the beam.
Thanks to the MD studies, the beam sizes at the two high-luminosity interaction points (where the ATLAS and CMS detectors are installed) were reduced by a factor of 1.4. The corresponding machine optics were finely tuned to be ready for high-intensity beams. However, before these optics can be used in operation, further studies are mandatory to understand and validate other important parameters, including the machine aperture, new collimator settings, a reduced crossing angle and, possibly, nonlinear corrections in the quadrupole triplets next to the interaction points. These topics will be addressed in future MD weeks to pave the way towards higher luminosities in Run 2.
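As a back-of-the-envelope check of what that factor of 1.4 buys: instantaneous luminosity scales inversely with the product of the transverse beam sizes at the interaction point, all else being equal. This is a rough scaling sketch, not the full LHC luminosity formula (it ignores crossing-angle and hourglass effects):

```python
# L is proportional to 1 / (sigma_x * sigma_y), so shrinking both
# transverse beam sizes by the same factor boosts luminosity by
# that factor squared.

def luminosity_gain(size_reduction_factor):
    """Relative luminosity gain when both transverse beam sizes
    shrink by the same factor, holding everything else fixed."""
    return size_reduction_factor ** 2

print(f"gain for a factor-1.4 squeeze: ~{luminosity_gain(1.4):.2f}x")
```

So the factor-1.4 reduction reported above corresponds to roughly a doubling of the collision rate, once validated for operation.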
For the first time, operators were able to perform the beam-size squeeze during the energy ramp. This opens up the possibility of saving up to 10 minutes per fill in a slightly more ambitious configuration than that tested last week. Results on higher bunch populations require careful analysis of the collected beam data. These will soon be available in detailed reports to be published as LHC MD notes.
At the end of the MD period, the LHC went into its second scrubbing run, a two-week period that aims to prepare the machine fully for operation with 25-nanosecond bunch spacing, planned for the first weeks of August.
We would like to take this opportunity to thank all the MD teams, system experts, management, operators and physics experiments involved during the MDs for their high flexibility, dedication and endurance.
Symmetrybreaking  Fermilab/SLAC
Physicists discovered one type of Higgs boson in 2012. Now they’re looking for more.
When physicists discovered the Higgs boson in 2012, they declared the Standard Model of particle physics complete; they had finally found the missing piece of the particle puzzle.
And yet, many questions remain about the basic components of the universe, including: Did we find the one and only type of Higgs boson? Or are there more?
A problem of mass
The Higgs mechanism gives mass to some fundamental particles, but not others. It interacts strongly with W and Z bosons, making them massive. But it does not interact with particles of light, leaving them massless.
These interactions don’t just affect the mass of other particles, they also affect the mass of the Higgs. The Higgs can briefly fluctuate into virtual pairs of the particles with which it interacts.
Scientists calculate the mass of the Higgs by multiplying a huge number (related to the maximum energy for which the Standard Model applies) with a number related to those fluctuations. The second number is determined by starting with the effects of fluctuations due to force-carrying particles like the W and Z bosons, and subtracting the effects of fluctuations due to matter particles like quarks.
While the second number cannot be zero, because the Higgs must have some mass, almost any value it takes, even a very small one, makes the mass of the Higgs gigantic.
But it isn’t. It weighs about 125 billion electronvolts; it’s not even the heaviest fundamental particle.
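The mismatch can be put in numbers with a toy estimate. Assuming the Standard Model holds all the way up to the Planck scale (an illustrative choice of cutoff, not a result from the article), the quantum corrections push the squared Higgs mass toward the cutoff, so the observed value requires a cancellation of roughly one part in (cutoff / observed mass) squared:

```python
# Illustrative fine-tuning estimate, not a real loop calculation:
# how precisely must the huge contributions cancel to leave a
# 125 GeV Higgs if the cutoff is near the Planck scale?

m_higgs = 125.0          # GeV, observed Higgs mass
lambda_cutoff = 1.0e19   # GeV, roughly the Planck scale (assumption)

tuning = (lambda_cutoff / m_higgs) ** 2
print(f"required cancellation: about one part in {tuning:.1e}")
```

That one-part-in-10^33-or-so cancellation is the "ice cube in a hot oven" that Tanedo describes below.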
“Having the Higgs boson at 125 GeV is like putting an ice cube into a hot oven and it not melting,” says Flip Tanedo, a theoretical physicist and postdoctoral researcher at the University of California, Irvine.
A lightweight Higgs, though it makes the Standard Model work, doesn’t necessarily make sense for the big picture. If there are multiple Higgses—much heavier ones—the math determining their masses becomes more flexible.
“There’s no reason to rule out multiple Higgs particles,” says Tim Tait, a theoretical physicist and professor at UCI. “There’s nothing in the theory that says there shouldn’t be more than one.”
The two primary theories that predict multiple Higgs particles are Supersymmetry and compositeness.
Supersymmetry
Popular in particle physics circles for tying together all the messy bits of the Standard Model, Supersymmetry predicts a heavier (and whimsically named) partner particle, or “sparticle,” for each of the known fundamental particles. Quarks have squarks and Higgs have Higgsinos.
“When the math is redone, the effects of the particles and their partner particles on the mass of the Higgs cancel each other out and the improbability we see in the Standard Model shrinks and maybe even vanishes,” says Don Lincoln, a physicist at Fermi National Accelerator Laboratory.
The Minimal Supersymmetric Standard Model—the supersymmetric model that most closely aligns with the current Standard Model—predicts four new Higgs particles in addition to the Higgs sparticle, the Higgsino.
While Supersymmetry is maybe the most popular theory for exploring physics beyond the Standard Model, physicists at the LHC haven’t seen any evidence of it yet. If Supersymmetry exists, scientists will need to produce more massive particles to observe it.
“Scientists started looking for Supersymmetry five years ago in the LHC,” says Tanedo. “But we don’t really know where they will find it: 10 TeV? 100 TeV?”
Compositeness
The other popular theory that predicts multiple Higgs bosons is compositeness. The composite Higgs theory proposes that the Higgs boson is not a fundamental particle but is instead made of smaller particles that have not yet been discovered.
“You can think of this like the study of the atom,” says Bogdan Dobrescu, a theoretical physicist at Fermi National Accelerator Laboratory. “As people looked closer and closer, they found the proton and neutron. They looked closer again and found the ‘up’ and ‘down’ quarks that make up the proton and neutron.”
Composite Higgs theories predict that if there are more fundamental parts to the Higgs, it may assume a combination of masses based on the properties of these smaller particles.
The search for composite Higgs bosons has been limited by the scale at which scientists can study given the current energy levels at the LHC.
On the lookout
Physicists will continue their Higgs search with the current run of the LHC.
At 60 percent higher energy, the LHC will produce Higgs bosons more frequently this time around. It will also produce more top quarks, the heaviest particles of the Standard Model. Top quarks interact energetically with the Higgs, making them a favored place to start picking at new physics.
Whether scientists find evidence for Supersymmetry or a composite Higgs (if they find either), that discovery would mean much more than just an additional Higgs.
“For example, finding new Higgs bosons could affect our understanding of how the fundamental forces unify at higher energy,” Tait says.
“Supersymmetry would open up a whole ‘super’ world out there to discover. And a composite Higgs might point to new rules on the fundamental level beyond what we understand today. We would have new pieces of the puzzle to look at.”
Peter Coles  In the Dark
Some great news arrived this morning. The Planning Inspectorate has given approval to the University of Sussex’s Campus Masterplan, which paves the way for some much-needed new developments on the Falmer Campus and a potential £500 million investment in the local economy. As a scientist working at the University I’m particularly delighted with this decision as it will involve much-needed new science buildings which should ease the pressure on our existing estate. The planned developments include new state-of-the-art academic and research facilities, the creation of an estimated 2400 new jobs in the local community and 2500 new student rooms on the campus, while still preserving the famous listed buildings designed by architect Sir Basil Spence when the University was founded back in the 1960s. We’re in for an exciting few years as these new developments take shape, especially a new building for Life Sciences and redevelopment of the East Slope site. The expansion of residential accommodation on campus will take some of the pressure off the housing stock in central Brighton while the other new buildings will provide much-needed replacements and extensions for some older ones that are at the end of their useful life.
Here’s a video fly-through that illustrates the general scale of the development – although the individual buildings shown are just indicative, as detailed designs are still being drawn up and each new building will need further planning permission.
But it is not just as an employee of the University that I am delighted by this news. I also live in Brighton and I honestly believe that the expansion of the University is an extremely good thing for the City, which is already turning into a thriving high-tech economy owing to the presence of so many skilled graduates and spin-out enterprises. There’s a huge amount of work to do in order to turn these plans into reality, but within a couple of years I think we’ll start to see the dividend.
Follow @telescoper
July 29, 2015
Christian P. Robert  xi'an's og
Ingmar Schuster, who visited Paris-Dauphine last Spring (and is soon to return here as a postdoc funded by the Fondation des Sciences Mathématiques de Paris), has arXived last week a paper on gradient importance sampling. In this paper, he builds a sequential importance sampling (or population Monte Carlo) algorithm that exploits the additional information contained in the gradient of the target. The proposal or importance function is essentially the MALA move, mixed across the elements of the previous population. When compared with our original PMC mixture of random walk proposals found in e.g. this paper, each term in the mixture thus involves an extra gradient, with a scale factor that decreases to zero as 1/(t√t). Ingmar compares his proposal with an adaptive Metropolis, an adaptive MALA and an HMC algorithm, for two mixture distributions and the banana target of Haario et al. (1999) we also used in our paper, as well as a logistic regression. In each case, he finds both a smaller squared error and a smaller bias for the same computing time (evaluated as the number of likelihood evaluations). While we discussed this scheme when he visited, I remain intrigued as to why it works so well when compared with the other solutions. One possible explanation is that the use of the gradient drift is more efficient on a population of particles than on a single Markov chain, provided the population covers all modes of importance on the target surface: the “fatal” attraction of the local mode is then much less of an issue…
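For readers who want the flavor of the scheme, here is a minimal sketch of a population Monte Carlo iteration with a gradient-drifted, MALA-style proposal and a drift scale decaying like 1/(t√t). It is a toy on a one-dimensional Gaussian target, not Schuster's actual algorithm; in particular, the importance weights below use each particle's own proposal component rather than the full mixture density, which the real algorithm would use:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target: standard normal. Log-density and its gradient.
def logpi(x):
    return -0.5 * x**2

def grad_logpi(x):
    return -x

N = 500      # particles per generation
T = 20       # generations
eps0 = 0.5   # base drift scale

# Initial population from a broad Gaussian proposal.
x = rng.normal(0.0, 3.0, size=N)
logq = -0.5 * (x / 3.0) ** 2 - np.log(3.0)
w = np.exp(logpi(x) - logq)
w /= w.sum()

for t in range(1, T + 1):
    # Drift scale decays to zero like t**-1.5, i.e. 1/(t sqrt(t)).
    eps = eps0 * t ** -1.5
    # Resample ancestors by weight, then move each particle with a
    # MALA-style step: Langevin drift plus Gaussian noise.
    idx = rng.choice(N, size=N, p=w)
    mu = x[idx] + eps * grad_logpi(x[idx])
    sigma = np.sqrt(2.0 * eps)
    xnew = mu + sigma * rng.normal(size=N)
    # Crude per-particle importance weight: target over own component
    # (normalizing constants cancel after the division by w.sum()).
    logq = -0.5 * ((xnew - mu) / sigma) ** 2 - np.log(sigma)
    w = np.exp(logpi(xnew) - logq)
    w /= w.sum()
    x = xnew

est_mean = float(np.sum(w * x))
print(f"weighted estimate of the target mean: {est_mean:.3f}")
```

Even in this crude form, the gradient drift steers the whole population toward the high-density region while the decaying scale stabilises the later generations.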
Filed under: Books, pictures, Statistics, University life Tagged: adaptive importance sampling, Fondation Sciences Mathématiques de Paris, Langevin MCMC algorithm, Leipzig, population Monte Carlo, sequential Monte Carlo, Université Paris Dauphine
Emily Lakdawalla  The Planetary Society Blog
astrobites  astroph reader's digest
 Title: Radio Crickets: Chirping Jets from Black Hole Binaries Entering their Gravitational Wave Inspiral
 Authors: Girish Kulkarni and Abraham Loeb
 First Author’s Institution: Institute of Astronomy and Kavli Institute of Cosmology, University of Cambridge
 Paper Status: Submitted to MNRAS Letters
This November marks the 100th anniversary of Einstein’s Theory of General Relativity (GR), our modern theory of gravity that describes its true nature and intimate connection with space and time. A century after the formulation of GR, one of the phenomena predicted by this theory has eluded detection: ripples of gravitational energy propagating through spacetime like waves. These gravitational waves have remained elusive for good reason.
Spacetime is very stiff, and even extremely massive objects accelerating through spacetime produce feeble gravitational wave signals (so feeble that when Einstein predicted their existence he believed we would never be able to detect their minuscule effects on spacetime). Coincidentally, the centennial year of GR is also when the upgraded and unbelievably sensitive Advanced Laser Interferometer Gravitational Wave Observatory (aLIGO) will commence science runs. This machine is predicted to make the first direct detections of these ripples in spacetime over the next few years and open up a new window to the Universe through multimessenger astronomy.
Though the first detection of gravitational waves will be more than enough to celebrate, a true goldmine of scientific wealth will come from finding an electromagnetic counterpart of a gravitational wave signal, allowing these astrophysical objects to be accessed through two completely independent forms of information. Today’s paper considers a possible electromagnetic counterpart of what is thought to be the loudest gravitational wave event in the Universe: the merger of two supermassive black holes (SMBHs). These events would be screaming in gravitational wave radiation, and may reach gravitational wave luminosities of about 10^50 watts right before the merger. For comparison, this is about as luminous as all the stars shining in all the galaxies in the observable Universe! Though aLIGO is not sensitive to these frequencies of gravitational waves, pulsar timing arrays and future space-based interferometers like eLISA will be.
SMBH binaries are believed to emerge via the collision of two large galaxies, each of which hosts a massive black hole at its center. After the galaxies collide, the SMBHs lose angular momentum through dynamical friction, creeping close enough to the remnant galaxy’s center to form a gravitationally bound binary. After entering their orbital dance, the black holes continue to lose angular momentum by scattering gas and stars, causing their orbit to shrink (though this phase of binary SMBH evolution is up for debate, since theoretical models have a hard time making the orbits shrink when the orbital separation is on the order of 1 parsec, or about 200,000 astronomical units, an issue known as the final parsec problem). When they reach a separation of about 1/1000 of a parsec (a couple hundred astronomical units, which is pretty close given that the event horizon of a billion-solar-mass black hole placed at the Sun's position would stretch 20 astronomical units, all the way to the orbit of Uranus), gravitational wave emission becomes the key player in angular momentum loss, quickly diminishing the orbital separation until the two SMBHs merge. It is this final phase of orbital evolution that may be probed with future space-based gravitational wave observatories. But alternatively, as today’s paper suggests, we may be able to gain insight into this period of evolution from electromagnetic radiation as well.
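The event-horizon figure quoted in the parenthesis can be checked in a couple of lines with standard physical constants (nothing here comes from the paper itself):

```python
# Schwarzschild radius r_s = 2GM/c^2 of a 10^9 solar-mass black
# hole, expressed in astronomical units.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
AU = 1.496e11        # astronomical unit, m

m_bh = 1e9 * M_SUN
r_s = 2 * G * m_bh / C**2
print(f"r_s = {r_s / AU:.1f} AU")   # roughly the orbit of Uranus
```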
The key to the electromagnetic counterpart presented in today’s paper is that the black holes are able to hold onto an accretion disk and continue accreting gas during this gravitational wave dominated stage of orbital evolution, shown to be possible in recent studies. With accretion disks come jets of highly relativistic particles, and charged particles spiraling in the strong magnetic field of a SMBH emit synchrotron radiation detectable by radio telescopes. As the binary orbits, the jet will trace out a conical surface. This is easily seen by looking at figure 1 and recalling simple vector addition (remember, for an observer very far away, the solid black jet vector, which represents the velocity of the jet neglecting orbital motion, is essentially fixed, while the orbital velocity vector is constantly changing). The red jet vector, which is the combination of both the jet velocity and orbital velocity, therefore precesses about the black jet vector as the black hole orbits. This would be the end of the story if these binaries were not emitting gravitational waves.
Since the system emits gravitational waves during this phase, the orbital separation decreases, causing the orbital speed to increase. Imagine the black orbital velocity vector from figure 1 increasing. The red vector, which is the sum of the orbital and jet velocities, will therefore have an increasing contribution from the orbital velocity, causing it to precess about the black jet vector with an increasing opening angle. The increase in orbital speed and decrease in orbital separation of the binary will also cause the jet to precess faster, winding the jet tighter closer to the source and resulting in a classical “chirping” morphology in the jet (hence the “radio crickets” in the title of this bite). These effects can be seen in figure 2, which simulates the evolution of a SMBH jet during the first 100 years after entering the gravitational wave dominated regime of orbital decay.
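A rough sense of the geometry: the half-opening angle of the precession cone is set by the ratio of orbital speed to jet speed, and the orbital speed grows as the separation shrinks. The sketch below assumes a circular Keplerian orbit around a 10^9 solar-mass black hole and a jet speed of 0.9c; these numbers are illustrative, and relativistic aberration is ignored:

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
C = 2.998e8          # m/s
M_SUN = 1.989e30     # kg
PARSEC = 3.086e16    # m

m_bh = 1e9 * M_SUN   # illustrative 10^9 solar-mass black hole
v_jet = 0.9 * C      # assumed jet speed

for a_pc in (1e-2, 1e-3, 1e-4):   # orbital separations in parsecs
    # Circular-orbit speed at separation a, and the resulting cone
    # half-angle from adding the orbital velocity to the jet velocity.
    v_orb = math.sqrt(G * m_bh / (a_pc * PARSEC))
    theta = math.degrees(math.atan(v_orb / v_jet))
    print(f"a = {a_pc:.0e} pc: v_orb = {v_orb/1e3:.0f} km/s, "
          f"cone half-angle ~ {theta:.1f} deg")
```

As the separation decays from 10^-2 to 10^-4 parsecs, the cone widens from a few degrees to tens of degrees while the precession period shortens, which is exactly the tightening, widening "chirp" described above.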
Long-baseline radio interferometry from telescope arrays such as the Square Kilometre Array, set to have its first light in 2020, will achieve the resolution necessary to observe these subtle milliarcsecond jet features caused by gravitational wave inspiral. These observations would also put a lower bound on the abundance of bright gravitational wave sources detectable by future space-based detectors. Moreover, supplementing gravitational wave observations with electromagnetic observations of compact binary mergers will enable detailed studies of the strong-field regime of GR, where Einstein’s theory of gravity might break down and create problems that need to be solved by future generations of scientists.
arXiv blog
When it comes to robotic flocks, do you control each machine individually or the entire swarm overall? A new programming language allows both.
ZapperZ  Physics and Physicists
Here is another triumph of condensed matter experiment: the first reported discovery of Weyl fermions, long predicted and now found in a tantalum arsenide compound.
Another solution of the Dirac equation – this time for massless particles – was derived in 1929 by the German mathematician Hermann Weyl. For some time it was thought that neutrinos were Weyl fermions, but now it looks almost certain that neutrinos have mass and are therefore not Weyl particles.
Now, a group headed by Zahid Hasan at Princeton University has found evidence that Weyl fermions exist as quasiparticles – collective excitations of electrons – in the semimetal tantalum arsenide (TaAs).
For those who are keeping score, this means that these condensed matter systems have, so far, detected Majorana fermions, and analogous signatures of magnetic monopoles.
And many people still think condensed matter physics is all "applied" and not "fundamental"?
Zz.
Quantum Diaries
Impressive, exciting and eyeopening. This is how I would summarize the European Physics Society (EPS) particle physics conference that is ending today in Vienna.
The participants were treated to an impressive amount of new data. Not only had the Large Hadron Collider (LHC) experiments at CERN finalised most of their analyses on the entire set of data collected prior to the long shutdown of the last two years, but they had also already started analysing the new data. This confirms that everything, from hardware to software, is up and running after extensive upgrades, repairs and improvements.
All the tools for physics analysis – simulations, data acquisition systems, trigger menus, calibration and analysis algorithms – are already performing beautifully at the new collision energy of 13 TeV. The experiments are clearly in a position to take up the analyses where they had left them with the 8 TeV data. True, there are no signs for new physics anywhere yet but LHCb, CMS and ATLAS all have little hints that will soon be elucidated with the new data.
A wealth of new experiments and results were also presented at the conference on dark matter and dark energy. New avenues are also explored to broaden the searches in the hope of accounting for the 95% of the content of the Universe that is still completely unknown. Giant steps have already been taken and major breakthroughs are expected in the very near future. Developments are also expected in the neutrino sector, a prolific research domain that has been most puzzling and confusing for many years.
As stated by Pierre Binetruy, a theorist working on cosmology: “The simultaneous discovery of the Higgs and confirmation of some of the basic features of inflation (the rapid expansion that followed the Big Bang) has opened a new era in the common understanding of cosmology and particle physics”. It is clear that we are on the eve of major advances and discoveries. The next conference is sure to be an event not to be missed.
Pauline Gagnon
To be alerted of new postings, follow me on Twitter: @GagnonPauline or sign up on this mailing list to receive an email notification. You can also visit my website.
Peter Coles  In the Dark
Here’s a nice little promotional video about the Department of Mathematics at the University of Sussex, featuring some of our lovely staff and students along with some nice views of the campus and the city of Brighton. Above all, I think it captures what a friendly place this is to work and study. Enjoy!
Follow @telescoper
The n-Category Café
(guest post by Chris Kapulkin)
I recently posted the following preprint to the arXiv:
 Locally cartesian closed quasicategories from type theory, arXiv:1507.02648.
Btw, if you’re not the kind of person who likes to read mathematical papers, I also gave a talk about the above-mentioned work in Oxford, so you may prefer to watch it instead. (:
I see this work as contributing to the idea/program of HoTT as the internal language of higher categories. In the last few weeks, there has been a lot of talk about it, prompted by somewhat provocative posts on Michael Harris’ blog.
My goal in this post is to survey the state of the art in the area, as I know it. In particular, I am not going to argue that internal languages are a solution to many of the problems of higher category theory or that they are not. Instead, I just want to explain the basic idea of internal languages and what we know about them as far as HoTT and higher category theory are concerned.
Disclaimer. The syntactic rules of dependent type theory look a lot like a multi-sorted essentially algebraic theory. If you think of sorts called types and terms, then you can think of rules like $\Sigma$-types and $\Pi$-types as algebraic operations defined on these sorts. Although the syntactic presentation of type theory does not quite give an algebraic theory (because of complexities such as variable binding), it is possible to formulate dependent type theory as an essentially algebraic theory. However, actually showing that these two presentations are equivalent has proven complicated and it’s a subject of ongoing work. Thus, for the purpose of this post, I will take dependent type theories to be defined in terms of contextual categories (a.k.a. C-systems), which are the models for this algebraic theory (thus leaving aside the Initiality Conjecture). Ultimately, we would certainly like to know that these statements hold for syntactically-presented type theories; but that is a very different question from the $\infty$-categorical aspects I will discuss here.
A final comment before we begin: this post derives greatly from my (many) conversations with Peter Lumsdaine. In particular, the two of us together went through the existing literature to understand precisely what’s known and what’s not. So big thanks to Peter for all his help!
Internal languages of categories
First off, what is the internal language? Without being too technical, let me say that it is typically understood as a correspondence:
On the right hand side of this correspondence, we have a category of categories with some extra structure and functors preserving this structure. On the left hand side, we have certain type theories, which are extensions of a fixed core one, and their interpretations (which are, roughly speaking, maps taking types to types and terms to terms, preserving typing judgements and the constructors of the core theory). Notice that the core theory is the initial object in the category of theories in the above picture.
The functor $\mathrm{Cl}$ takes a theory to its initial model, built directly out of the syntax of the theory: the objects are contexts and the morphisms are (sequences of) terms of the theory (this category is often called the classifying category, hence the notation $\mathrm{Cl}$). The functor $\mathrm{Lang}$ takes a category to the theory whose types are generated in a suitable sense by the objects and whose terms are generated by the morphisms of the category. In particular, constructing $\mathrm{Lang}(C)$ for some category $C$ from the right hand side is the same as establishing a model of the core theory in $C$.
Finally, these functors are supposed to form some kind of an adjoint equivalence (with $\mathrm{Cl} \dashv \mathrm{Lang}$), be it an equivalence of categories, a biequivalence, or an $\infty$-equivalence, depending on whether the two sides of the correspondence are categories, $2$-categories, or $\infty$-categories.
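Schematically, the correspondence can be drawn as a pair of functors between the two sides, with $\mathrm{Cl}$ left adjoint to $\mathrm{Lang}$; this display is only a sketch in the notation of the post:

```latex
\{\text{type theories}\}
  \;\underset{\mathrm{Lang}}{\overset{\mathrm{Cl}}{\rightleftarrows}}\;
  \{\text{structured categories}\},
\qquad
\mathrm{Cl} \dashv \mathrm{Lang}
```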
The cleanest example of this phenomenon is the correspondence between $\lambda$-theories (that is, theories in the simply typed $\lambda$-calculus) and cartesian closed categories:
which you can read about in Part I of the standard text by Jim Lambek and Phil Scott.
Extensional Martin-Löf Type Theory
Unfortunately, as soon as we move to dependent type theory, things get more complicated. Starting with the work of Robert Seely, it has become clear that one should expect Extensional Martin-Löf Type Theory (with dependent products, dependent sums, and extensional identity types) to be the internal language of locally cartesian closed categories:
Seely overlooked, however, an important coherence problem: since types are now allowed to depend on other types, we need to make coherent choices of pullbacks. The reason is that, type-theoretically, substitution (into a type) is a strictly functorial operation, whereas its categorical counterpart, pullback, without making any choices, is functorial only up to isomorphism. (If you find the last sentence slightly too brief, I recommend Peter Lumsdaine’s talk explaining the problem and known solutions.) The fix was later found by Martin Hofmann; but the resulting pair of functors does not form an equivalence of categories, only a biequivalence of $2$-categories, as was proven in 2011 by Pierre Clairambault and Peter Dybjer.
Intensional Martin-Löf Type Theory and locally cartesian closed $\infty$-categories
Next let us consider Intensional Martin-Löf Type Theory with dependent products, dependent sums, and identity types; additionally, we will assume the (definitional) eta rule for dependent functions and function extensionality.
Such a type theory has been, at least informally, conjectured to be the internal language of locally cartesian closed $\infty$-categories, and thus we expect the following correspondence:
where the functors $\mathrm{Cl}$ and $\mathrm{Lang}$ should form an adjunction (an adjoint $(\infty,1)$-equivalence? or maybe even an $(\infty,2)$-equivalence?) between the type-theoretic and categorical sides.
Before I summarize the state of the art, let me briefly describe what the two functors ought to be. Starting with a type theory, one can take its underlying category of contexts and regard it as a category with weak equivalences (where the weak equivalences are the syntactically defined equivalences), to which one can then apply simplicial localization. This gives a well-defined functor from type theories to $\infty$-categories. Of course, it is a priori not clear what the (essential) image of such a functor would be.
Conversely, given a locally cartesian closed $\infty$-category $C$, one can look for a category with weak equivalences (called a presentation of $C$) whose simplicial localization is $C$, and then try to establish the structure of a categorical model of type theory on such a category.
What do we know? The verification that $\mathrm{Cl}$ takes values in locally cartesian closed $\infty$-categories can be found in my paper. The other functor is known only partially. More precisely, if $C$ is a locally presentable locally cartesian closed $\infty$-category, then one can construct $\mathrm{Lang}(C)$. As mentioned above, the construction is done in two steps. The first step, presenting such an $\infty$-category by a type-theoretic model category (which is, in particular, a category with weak equivalences), was given by Denis-Charles Cisinski and Mike Shulman in these blog comments, and independently in Theorem 7.10 of this paper by David Gepner and Joachim Kock. The second step (establishing the structure of a model of type theory) is precisely Example 4.2.5.3 of the local universe model paper by Peter Lumsdaine and Michael Warren.
What don’t we know? First off, how to define $\mathrm{Lang}(C)$ when $C$ is not locally presentable, and whether the existing definition of $\mathrm{Lang}(C)$ for locally presentable quasicategories is even functorial. We also need to understand what the homotopy theory of type theories is (if we’re hoping for an equivalence of homotopy theories, we need to understand the homotopy theory of the left-hand side!). In particular, what are the weak equivalences of type theories? Next in line: what is the relation between $\mathrm{Cl}$ and $\mathrm{Lang}$? Are they adjoint, and if so, can we hope that they will yield an equivalence of the corresponding $\infty$-categories?
Univalence, Higher Inductive Types, and (elementary) $\infty$-toposes
Probably the most interesting part of this program is the connection between Homotopy Type Theory and higher topos theory (HoTT vs HTT?). Conjecturally, we should have a correspondence:
This is, however, not a well-defined problem, as it depends on one’s answer to the following two questions:
What is HoTT? Obviously, it should be a system that extends Intensional Martin-Löf Type Theory and includes at least one, but possibly infinitely many, univalent universes, as well as some Higher Inductive Types; but what exactly may largely depend on the answer to the next question…
What is an elementary $\infty$-topos? While there exist some proposals (for example, the one presented by André Joyal in 2014), this question also awaits a definitive answer. By analogy with the $1$-categorical case, every Grothendieck $\infty$-topos should be an elementary $\infty$-topos, but not the other way round. Moreover, the axioms of an elementary $\infty$-topos should imply (maybe even explicitly include) that it is locally cartesian closed and that it has finite colimits, but should not imply local presentability.
What do we know? As of today, only partial constructions of the functor $\mathrm{Lang}$ exist. More precisely, there are:
Theorem 6.4 of Mike Shulman’s paper contains the construction of $\mathrm{Lang}(C)$ if $C$ is a Grothendieck $\infty$-topos that admits a presentation as simplicial presheaves on an elegant Reedy category, and HoTT is taken to be the extension of Intensional Martin-Löf Type Theory by as many univalent universes à la Tarski as there are inaccessible cardinals greater than the cardinality of the site.
Remark 1.1 of the same paper can be interpreted as saying that if one considers HoTT with weak (univalent) universes instead, then the construction of $\mathrm{Lang}(C)$ works for an arbitrary $\infty$-topos $C$.
A forthcoming paper by Peter Lumsdaine and Mike Shulman will supplement the above two points: for some reasonable range of higher toposes, the resulting type theory $\mathrm{Lang}(C)$ will also be shown to possess certain Higher Inductive Types (e.g. homotopy pushouts and truncations), although the details remain to be seen.
What don’t we know? It still remains to define $\mathrm{Lang}(C)$ outside of the presentable setting, as well as to give the construction of $\mathrm{Cl}$ in this case (or rather, to check that the obvious functor from type theories to $\infty$-categories takes values in higher toposes). The formal relation between these functors (which are yet to be defined) remains wide open.
by shulman (viritrilbia@gmail.com) at July 29, 2015 03:42 AM
Clifford V. Johnson  Asymptotia
July 28, 2015
Christian P. Robert  xi'an's og
[A 2013 post that somewhat got lost in a pile of postponed entries and referee’s reports…]
In this review paper, now published in Statistical Analysis and Data Mining 6, 3 (2013), David Parkinson and Andrew R. Liddle go over the (Bayesian) model selection and model averaging perspectives. Their argument in favour of model averaging is that model selection via Bayes factors may simply be too inconclusive to favour one and only one model. While this is a correct perspective, this is about it for the theoretical background provided therein. The authors then move to the computational aspects, and the first difficulty is their approximation (6) to the evidence
where they average the likelihood × prior terms over simulations from the posterior, which does not provide a valid (either unbiased or convergent) approximation. They surprisingly fail to account for the huge statistical literature on evidence and Bayes factor approximation, including Chen, Shao and Ibrahim (2000), which covers earlier developments like bridge sampling (Gelman and Meng, 1998).
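The failure of this posterior-averaging estimator is easy to see numerically. Here is a minimal sketch in a toy conjugate model (a single observation $x \sim N(\theta,1)$ with prior $\theta \sim N(0,1)$; all numerical choices are mine, for illustration only), where the evidence is available in closed form:

```python
import math
import random

random.seed(0)

def norm_pdf(v, mu, var):
    """Density of N(mu, var) at v."""
    return math.exp(-(v - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

x = 1.0                                # single observation, x ~ N(theta, 1)
true_Z = norm_pdf(x, 0.0, 2.0)         # closed-form evidence: x ~ N(0, 1 + 1)
n = 200_000

# Valid (if inefficient) estimator: average the likelihood over prior draws.
z_prior = sum(norm_pdf(x, random.gauss(0.0, 1.0), 1.0) for _ in range(n)) / n

# Averaging likelihood * prior over *posterior* draws (theta | x ~ N(x/2, 1/2))
# converges, but to E_post[L(theta) * pi(theta)], which is not the evidence.
z_post = sum(
    norm_pdf(x, t, 1.0) * norm_pdf(t, 0.0, 1.0)
    for t in (random.gauss(x / 2, math.sqrt(0.5)) for _ in range(n))
) / n

print(true_Z, z_prior, z_post)  # z_prior matches true_Z; z_post does not
```

The posterior average converges to $\int L(\theta)\pi(\theta)\,p(\theta\mid x)\,\mathrm{d}\theta$ rather than to the evidence, which is why no amount of extra simulation fixes an estimator of this form.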
As is often the case in astrophysics, at least since 2007, the authors’ description of nested sampling drifts away from perceiving it as a regular Monte Carlo technique, with the same convergence speed of $n^{1/2}$ as other Monte Carlo techniques and the same dependence on dimension. It is certainly not the only simulation method where the produced “samples, as well as contributing to the evidence integral, can also be used as posterior samples.” The authors then move to “population Monte Carlo [which] is an adaptive form of importance sampling designed to give a good estimate of the evidence”, a particularly restrictive description of a generic adaptive importance sampling method (Cappé et al., 2004). The approximation of the evidence (9) based on PMC also seems invalid:
it is missing the prior in the numerator. (The switch from θ in Section 3.1 to X in Section 3.4 is confusing.) Further, the sentence “PMC gives an unbiased estimator of the evidence in a very small number of such iterations” is misleading, in that PMC is unbiased at each iteration. Reversible jump is not described at all (and the supposedly higher efficiency of this algorithm is far from guaranteed when facing a small number of models, which is the case here, since the moves between models are governed by a random walk and the acceptance probabilities can be quite low).
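For contrast, keeping the prior in the numerator gives the standard importance-sampling identity $Z = \mathbb{E}_q[L(\theta)\pi(\theta)/q(\theta)]$, which is indeed unbiased at each iteration for any fixed proposal $q$. A minimal sketch, reusing the same toy normal model (the proposal and all numbers are my choices, not the paper's):

```python
import math
import random

random.seed(1)

def norm_pdf(v, mu, var):
    """Density of N(mu, var) at v."""
    return math.exp(-(v - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

x = 1.0                            # observation; model x ~ N(theta, 1), prior N(0, 1)
true_Z = norm_pdf(x, 0.0, 2.0)     # closed-form evidence, for checking

# One "PMC-like" step with a fixed importance proposal q = N(0.5, 1):
# Z_hat = (1/n) * sum of L(theta_i) * prior(theta_i) / q(theta_i), theta_i ~ q.
n = 200_000
q_mu, q_var = 0.5, 1.0
z_hat = sum(
    norm_pdf(x, t, 1.0) * norm_pdf(t, 0.0, 1.0) / norm_pdf(t, q_mu, q_var)
    for t in (random.gauss(q_mu, math.sqrt(q_var)) for _ in range(n))
) / n

print(true_Z, z_hat)  # the two agree up to Monte Carlo error
```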
The second, quite unrelated, part of the paper covers published applications in astrophysics. Unrelated because the three different methods exposed in the first part are not compared on the same dataset. Model averaging is obviously based on a computational device that explores the posteriors of the different models under comparison (or, rather, averaging); however, no recommendation is found in the paper as to how to implement the averaging efficiently, or anything of the kind. In conclusion, I thus find this review somewhat anticlimactic.
Filed under: Books, Statistics, University life Tagged: adaptive importance sampling, Astrophysics, Bayes factor, bridge sampling, computational statistics, evidence, likelihood, model averaging, Monte Carlo technique, population Monte Carlo, statistical analysis and data mining
Symmetrybreaking  Fermilab/SLAC
Our universe could be just one small piece of a bubbling multiverse.
Human history has been a journey toward insignificance.
As we’ve gained more knowledge, we’ve had our planet downgraded from the center of the universe to a chunk of rock orbiting an average star in a galaxy that is one among billions.
So it only makes sense that many physicists now believe that even our universe might be just a small piece of a greater whole. In fact, there may be infinitely many universes, bubbling into existence and growing exponentially. It’s a theory known as the multiverse.
One of the best pieces of evidence for the multiverse was first discovered in 1998, when physicists realized that the universe was expanding at ever increasing speed. They dubbed the force behind this acceleration dark energy. The value of its energy density, also known as the cosmological constant, is bizarrely tiny: 120 orders of magnitude smaller than theory says it should be.
For decades, physicists have sought an explanation for this disparity. The best one they’ve come up with so far, says Yasunori Nomura, a theoretical physicist at the University of California, Berkeley, is that it’s only small in our universe. There may be other universes where the number takes a different value, and it is only here that the rate of expansion is just right to form galaxies and stars and planets where people like us can observe it. “Only if this vacuum energy stayed to a very special value will we exist,” Nomura says. “There are no good other theories to understand why we observe this specific value.”
For further evidence of a multiverse, just look to string theory, which posits that the fundamental laws of physics have their own phases, just like matter can exist as a solid, liquid or gas. If that’s correct, there should be other universes where the laws are in different phases from our own—which would affect seemingly fundamental values that we observe here in our universe, like the cosmological constant. “In that situation you’ll have a patchwork of regions, some in this phase, some in others,” says Matthew Kleban, a theoretical physicist at New York University.
These regions could take the form of bubbles, with new universes popping into existence all the time. One of these bubbles could collide with our own, leaving traces that, if discovered, would prove other universes are out there. We haven't seen one of these collisions yet, but physicists are hopeful that we might in the not so distant future.
If we can’t find evidence of a collision, Kleban says, it may be possible to experimentally induce a phase change—an ultra-high-energy version of coaxing water into vapor by boiling it on the stove. You could effectively prove our universe is not the only one if you could produce phase-transitioned energy, though you would run the risk of it expanding out of control and destroying the Earth. “If those phases do exist—if they can be brought into being by some kind of experiment—then they certainly exist somewhere in the universe,” Kleban says.
No one is yet trying to do this.
There might be a (relatively) simpler way. Einstein’s general theory of relativity implies that our universe may have a “shape.” It could be either positively curved, like a sphere, or negatively curved, like a saddle. A negatively curved universe would be strong evidence of a multiverse, Nomura says. And a positively curved universe would show that there’s something wrong with our current theory of the multiverse, while not necessarily proving there’s only one. (Proving that is a next-to-impossible task. If there are other universes out there that don’t interact with ours in any sense, we can’t prove whether they exist.)
In recent years, physicists have discovered that the universe appears almost entirely flat. But there’s still a possibility that it’s slightly curved in one direction or the other, and Nomura predicts that within the next few decades, measurements of the universe’s shape could be precise enough to detect a slight curvature. That would give physicists new evidence about the nature of the multiverse. “In fact, this evidence will be reasonably strong since we do not know any other theory which may naturally lead to a nonzero curvature at a level observable in the universe,” Nomura says.
If the curvature turned out to be positive, theorists would face some very difficult questions. They would still be left without an explanation for why the expansion rate of the universe is what it is. The phases within string theory would also need reexamining. “We will face difficult problems,” Nomura says. “Our theory of dark energy is gone if it’s the wrong curvature.”
But with the right curvature, a curved universe could reframe how physicists look at values that, at present, appear to be fundamental. If there were different universes with different phases of laws, we might not need to seek fundamental explanations for some of the properties our universe exhibits.
And it would, of course, mean we are tinier still than we ever imagined. “It’s like another step in this kind of existential crisis,” Kleban says. “It would have a huge impact on people’s imaginations.”
astrobites  astroph reader's digest
Title: Gone without a bang: An archival HST survey for disappearing massive stars
Authors: Thomas Reynolds, Morgan Fraser, Gerard Gilmore
First author’s institution: University of Cambridge
Status: Submitted to MNRAS
It’s well known that stars with masses about 10 times that of the Sun will explode in a supernova and leave behind a neutron star. We also know that colossal stars, those more than 40 times as massive as our Sun, will also explode as supernovae and leave behind black holes.
So what happens to stars in between? You might guess that they will also explode in a spectacular supernova, following the pattern of their siblings. As it turns out, many of these stars can die by collapsing into a black hole…without their characteristic supernova. How does this work?
Core-collapse supernovae usually explode when the inner iron core of a massive star has reached its Chandrasekhar mass and cannot support itself against its own gravity. The core then collapses until it reaches the density of an atomic nucleus – or about 5 billion tons for a teaspoon of matter. At this point, the infalling material rebounds outwards, producing a shockwave that blasts the outer layers of the star with the help of neutrinos, leading to a supernova. Things can go awry in this last step. If insufficient energy is supplied to the shock, the shockwave may stall before leaving the star. This lapse allows the black hole formed by the inner core of the star to simply gobble up the star before an explosion can occur. And–poof!–just like that, the star is gone.
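As a back-of-envelope check of the "teaspoon" figure (assuming a nuclear saturation density of roughly 2.3 × 10¹⁷ kg/m³ and a 5 mL teaspoon — both my assumptions, not numbers from the paper):

```python
rho_nuclear = 2.3e17        # kg / m^3, approximate nuclear saturation density
teaspoon = 5.0e-6           # m^3 (5 mL)

mass_kg = rho_nuclear * teaspoon
mass_tons = mass_kg / 907.185   # convert kg to US short tons

print(f"{mass_tons:.1e} tons")  # on the order of a billion tons
```

Depending on the exact density assumed, this lands within an order of magnitude of the billions-of-tons figure quoted above.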
Well, sorta. It’s predicted that there will be a very dim (about 10,000 times fainter than a supernova), red and long-lived transient from the explosion, but we have never seen such an event! This is especially odd because these events shouldn’t be that rare; about 1/3 of all core-collapse events may actually result in a failed supernova. The evidence for these events is the fact that red supergiants greater than about 15 times the mass of our Sun should end their lives as supernovae, but we haven’t found many that have done so. This is known as the “Red Supergiant Problem”. The question then seems obvious: could these red supergiants be disappearing into the night sky as failed supernovae?
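In astronomers' units, "about 10,000 times fainter" translates into a magnitude difference via $\Delta m = 2.5\log_{10}(F_1/F_2)$:

```python
import math

flux_ratio = 1.0e4                      # predicted transient vs. a supernova
delta_m = 2.5 * math.log10(flux_ratio)  # magnitude difference
print(delta_m)  # 10.0: the failed-SN transient is ~10 magnitudes fainter
```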
The authors of today’s paper explore this question by looking for the culprits themselves. The astronomers look for stars which have disappeared using archival data from the Hubble Space Telescope (HST). HST has been orbiting the Earth since 1990, so it has the unique advantage of having plenty of high-quality images of galaxies, such as the Antennae galaxies in Figure 1. The authors look for galaxies which have been observed multiple times by HST over the course of its life, searching for any disappearing stars. In theory, you would only need three HST images in sequence to do this: two images before the failed supernova to ensure that the star was not extremely variable, and a third image to capture its disappearance. By narrowing down the possible galaxies using this criterion, as well as some distance and galaxy-type cuts, the authors are able to find six potential failed supernovae in fifteen galaxies.
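The three-epoch criterion described above can be sketched as a simple filter (the function name, magnitude limit, and variability cut are hypothetical choices of mine, not the authors' actual pipeline):

```python
def is_disappearance_candidate(mags, limit=26.0, max_var=0.5):
    """mags: magnitudes at three successive epochs; None means no detection.

    Hypothetical illustration: limit is an assumed detection limit (mag),
    max_var an assumed cap on epoch-to-epoch variability.
    """
    m1, m2, m3 = mags
    if m1 is None or m2 is None:
        return False                      # need two prior detections
    if abs(m1 - m2) > max_var:
        return False                      # star too variable to trust
    return m3 is None or m3 > limit       # vanished, or below the detection limit

print(is_disappearance_candidate((21.3, 21.4, None)))  # True: stable, then gone
print(is_disappearance_candidate((21.3, 22.6, None)))  # False: strongly variable
```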
Of these six candidates, two are actually bright, variable stars and three others are far too dim to be red supergiants. This leaves a single, potential failed supernova! The lightcurve of this mysterious star is shown in Figure 2.
The authors can’t be entirely sure that this is truly a failed supernova based on the HST data alone; other transients or variable objects can mimic the predicted lightcurve of a failed supernova. One notable possibility is R Coronae Borealis (RCB) stars, which are evolved stars lacking hydrogen. RCB stars can dim by many orders of magnitude, likely due to intense clouds of carbon dust in their atmospheres. If these stars dim during the last few observations, they might seem to have disappeared altogether. Future data on this candidate would certainly help to unveil its true identity.
This is now the second survey to search for failed supernovae, following a survey by Gerke et al. in 2014 which used the ground-based Large Binocular Telescope. Both of these surveys produced a single candidate each, but, unfortunately, neither had sufficient data to build a complete transient lightcurve. In this regard, the mystery surrounding failed supernovae remains unsolved, leaving an extremely exciting open question about the nature of black hole creation when a supernova fails to explode, and about the solution to the Red Supergiant Problem. Lastly, it’s worth emphasizing that this study was done on a small fraction of the thousands of galaxies publicly available in the Hubble Archive – there is nothing stopping you, the reader, from trying to find a failed supernova for yourself!
Quantum Diaries
Yesterday, at the European Physics Society (EPS) Particle Physics conference in Vienna, we moved from parallel sessions to plenary sessions. The task of the speakers is now to summarize the hundreds of results presented so far at the conference and draw the big picture.
For the past two years, the Large Hadron Collider underwent major upgrade work. Experimentalists have used this downtime to look at all collected data from all possible angles (and a few more!). With final calibrations and improved algorithms everywhere, nearly all analyses now include all data collected at 8 TeV. In most cases, months of hard work by hundreds of people only slightly improved the resolution. But these rock-solid results have unfortunately not revealed new discoveries.
That’s the bad news. The good news is that four times more data is expected in the coming year at higher energy, making new phenomena accessible.
Here is one example. Both the CMS and ATLAS experiments are looking for heavy hypothetical particles that would decay into two of the known bosons, namely photons, Z, W or Higgs bosons. In turn, the last three bosons can decay into jets of light particles made of quarks.
A particle decay is very similar to making change for a large coin: the initial coin does not contain the smaller coins but can be exchanged for smaller coins of equal value, like on the diagram below. The four 50-centime pieces could come either from a two-euro coin or from two one-euro coins. Likewise, in our detectors, when we find four jets of particles, they can come from two independently produced Z, W or H bosons, or simply from four quarks produced directly. All this is called the background, while the signal in this case would be a new boson that first decayed into two bosons.
A coin only has one value, but a particle carries both mass and energy. When one makes change for a large coin, its total value is conserved. With particles, we must take into account the mass and the energy of all the decay products to calculate the combined mass of the original particle. One last detail: when the initial decaying particle is much heavier than the two bosons it produces, the jets coming from these bosons will hardly be separated. They will fly alongside each other. In the end, we will not see four jets but rather two broader jets.
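The "combined mass" of two jets is the relativistic invariant mass of the summed four-momenta, $M^2 = (E_1+E_2)^2 - |\vec p_1 + \vec p_2|^2$ in natural units. A minimal sketch with made-up jet four-vectors (not real event data):

```python
import math

def invariant_mass(j1, j2):
    """Four-momenta (E, px, py, pz) in TeV, natural units (c = 1)."""
    E = j1[0] + j2[0]
    px, py, pz = (j1[i] + j2[i] for i in range(1, 4))
    return math.sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))

# Two hypothetical energetic, roughly back-to-back jets:
jet1 = (1.1, 1.0, 0.2, 0.3)
jet2 = (1.2, -1.0, -0.1, 0.5)
print(f"{invariant_mass(jet1, jet2):.2f} TeV")  # 2.15 TeV
```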
If the two broad jets come from two unrelated Z bosons, their total combined mass will be random, just as if we were to sum up the values of the small coins we carry in our pocket. If thousands of people told us the value of their small change, we would get a distribution like the one shown below by the blue line. Most people have only a little change, but some carry a small fortune in coins.
The horizontal axis gives the combined mass value of each event containing two broad jets found by the ATLAS Collaboration. The vertical axis shows (on a logarithmic scale) how many events were found with a particular value. The blue line shows what is expected from various backgrounds and the other colourful lines correspond to a few hypotheses. The black dots represent the real data and would look similar to the blue line if nothing new were there.
A small bump shows up around a mass value of 2 TeV; that is, more events are seen in data than what is predicted. The excess is 3.4 σ. Since there is always a spread in measured values due to experimental errors, such a difference would occur at least once if we were to measure this quantity 1000 times. Hence, it is too early to say this could be the first sign of something new, like a hypothetical boson denoted W’.
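The "once in a thousand measurements" statement can be checked by converting the significance to a Gaussian tail probability (one-sided here; the two-sided convention would roughly double it):

```python
import math

def p_value(sigma):
    """One-sided Gaussian tail probability for an excess of `sigma` sigma."""
    return 0.5 * math.erfc(sigma / math.sqrt(2.0))

p = p_value(3.4)
print(f"p = {p:.1e}, i.e. roughly 1 in {1 / p:.0f} measurements")
```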
The CMS Collaboration also showed a few intriguing events. One is found in the newest data collected at 13 TeV after the restart of the LHC. The two jets’ combined mass is 5 TeV (left figure). The second event comes from the data collected earlier at 8 TeV and has a mass of 5.15 TeV. With 500 times less data at 13 TeV than at 8 TeV, the experiments are already extending the analyses started with the 8 TeV data.
At this stage, it is way too early to tell. This is similar to peering into the distance on a foggy day, at dusk, trying to see if the train is coming. A faint shape is visible, but is it real or just a mirage? No one knows; we must wait for the train to come closer. But not for long, since the LHC is on track. Both experiments should soon have enough new data to be more definitive. And then, hold on to your hat, it’s going to get really exciting.
Pauline Gagnon
To be alerted of new postings, follow me on Twitter: @GagnonPauline or sign up on this mailing list to receive an email notification. You can also visit my website.
Lubos Motl  string vacua and pheno
He must believe that he has become much more effective in answering people's questions. That's why he agreed to answer questions posted at reddit.com/r/science:
Science Ama Series: I am Stephen Hawking, theoretical physicist. Join me to talk about making the future of technology more human, reddit. AMA!

So far, there are over 8,000 comments over there.
If you assume that there is approximately one question in each comment and the Sun will go red giant in less than 8 billion years, you may conclude that Stephen Hawking will have slightly less than one million years to address the average comment. Whether it's enough is yet to be seen.
A somewhat more serious topic than the destruction of mankind by malicious, excessively clever robots (sorry, I am still much more afraid of ordinary machines controlled by stupid people!): Alon E. Faraggi and Marco Guzzi have a new paper providing us with free fermionic heterotic explanations of the \(2\TeV\) excess observed at ATLAS.
It could be due to a new \(Z'\) and/or \(W'\) boson, they say, but they argue that the possible new bosons of this kind offered by their seemingly "special" class of compactifications, the free fermionic heterotic ones, may be divided into seven basic categories:
- \(U(1)_{Z'}\in SO(10)\) of grand unification
- \(U(1)_{Z'}\notin SO(10)\) of grand unification but family-universal
- non-universal \(U(1)_{Z'}\)
- hidden-sector \(U(1)\) symmetries with kinetic mixing
- left-right-symmetric models
- Pati-Salam models
- leptophobic and custodial symmetries
By the way, you may always watch the plenary talks at EPS-HEP 2015 in Vienna (live).
Update
I have looked at the Hawking-Musk Luddite letter a bit more carefully. They're not the only signatories. The letter is against "autonomous weapons". There are thousands or tens of thousands of signatures, including those of Steve Wozniak, Lisa Randall, Frank Wilczek, Max Tegmark, Noam Chomsky, Barbara Grosz (the ex-chair of an anti-Summers task force) and so on and so on.
Autonomous weapons hysteria must be some new global-warming-like hysteria that I must have almost completely missed!
These autonomous weapons are surely dangerous for the targets – which may include innocent targets if something goes wrong – but they're still weapons that can make certain operations more efficient and that are likely to give the civilized parties of various conflicts an edge – because they have a technological edge. Autonomous weapons could be helpful to fight the Islamic State and similar foes, which is why I think that they may be a good idea. (And indeed, I can't get rid of the worry that those leftists oppose autonomous weapons exactly because they could be threatening for forces such as the Islamic State – organizations that these leftists semi-secretly root for.)
America builds an army of robots for the future
One must be careful about what one produces and ensure it can't be abused by the wrong overlords, but otherwise it seems better to me when an autonomous robot, and not a 20-year-old soldier, is sacrificed in a fight with bloody savages. So my letter to the robotics experts is: don't listen to these Luddites and keep on doing your job.
by Luboš Motl (noreply@blogger.com) at July 28, 2015 12:09 PM
July 27, 2015
astrobites  astroph reader's digest
Title: Constraining Big Bang lithium production with recent solar neutrino data
Authors: Marcell P. Takacs, Daniel Bemmerer, Tamas Szucs, Kai Zuber
First Author’s Institution: Helmholtz-Zentrum Dresden-Rossendorf
Notes: in press at Phys. Rev. D
Today’s post was written by Tom McClintock, a third-year graduate student in Physics at the University of Arizona. His research interests include cosmology and large-scale structure. Tom did his undergrad at Amherst College and an MSc in high-performance computing at the University of Edinburgh. In addition to his research, he is in a long-term relationship with ultimate frisbee and Dungeons and Dragons.
Among the tests passed by the standard cosmological model, Big Bang Nucleosynthesis (BBN) may be the most rigorous, in that predictions of light element abundances are consistent with observations over ten orders of magnitude. All of this production occurs within the first fifteen minutes(!) following the Big Bang, and ceases once the weak reactions producing neutrons fall out of equilibrium. However, for over thirty years there has been tension over the abundance of lithium-7 between theoretical BBN calculations and measurements of metal-poor stars, known as the Cosmic Lithium Problem (which astrobites has discussed here and here). The numerical calculations predict an abundance that is over three times that found on the surface of Population II stars. Something has to give.
The authors of today’s paper investigate a nuclear physics solution via the reaction 3He + 4He → 7Be + γ, shortened to 3He(α,γ)7Be. Production of beryllium-7 is important because 7Be eventually decays to 7Li through electron capture. Nuclear reactions are described by reaction rates, which in turn are described by interaction cross sections, which can be measured by experiments. In the case of 3He(α,γ)7Be, any change in the measured cross section affects the theoretical BBN 7Li yield, and thus the compatibility between the standard cosmological model and abundance observations.
In addition, 3He(α,γ)7Be is a critical step in both the pp2 and pp3 branches of the pp-chain of hydrogen burning in the Sun. Both of these branches also produce electron neutrinos, observable on Earth. The authors use new solar neutrino flux data published by the BOREXINO collaboration in order to constrain the 3He(α,γ)7Be reaction rate. From there they recalculate the theoretical 7Li yield and confirm the significant tension between theory and observation.
The Tricky Part
Nuclear reaction cross sections have a temperature-dependent sweet spot, called the Gamow peak, at which the reaction rate is maximized. For this reason, it is much easier for experiments to probe cross sections near the Gamow peak; at lower energies there isn’t enough juice to get the nuclei to smash together, and at higher energies they whiz by each other too fast. Unfortunately, the energy range of interest (0.1 – 0.5 MeV) for BBN temperatures (~5 × 10⁸ K) is too low, and lies just beyond the capabilities of most experiments. Therefore, in order to perform BBN calculations it has been necessary to extrapolate the cross section down to these energies.
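The location of the Gamow peak follows from a standard textbook approximation (Clayton's E₀ ≈ 1.22 (Z₁²Z₂²A T₆²)^(1/3) keV, with A the reduced mass in amu and T₆ the temperature in millions of kelvin); a sketch, with the reduced mass crudely approximated by the mass numbers:

```python
def gamow_peak_keV(Z1, Z2, A_reduced, T6):
    """Most effective energy for a charged-particle reaction (Clayton's
    approximation): E0 ~ 1.22 * (Z1^2 * Z2^2 * A * T6^2)^(1/3) keV,
    with A the reduced mass in amu and T6 the temperature in 1e6 K."""
    return 1.22 * (Z1**2 * Z2**2 * A_reduced * T6**2) ** (1.0 / 3.0)

# Reduced mass of 3He + 4He, approximated by the mass numbers (amu):
A = 3.0 * 4.0 / (3.0 + 4.0)

print(gamow_peak_keV(2, 2, A, 15.7))   # solar core (~1.6e7 K): ~23 keV
print(gamow_peak_keV(2, 2, A, 500.0))  # BBN-era plasma (~5e8 K): ~230 keV
```

The two numbers show the point of the paper at a glance: the solar Gamow peak sits an order of magnitude below the BBN energy window, which is why a solar-neutrino constraint removes the need to extrapolate downward.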
Takacs et al. sidestepped this limitation by utilizing the solar neutrino data to constrain the reaction rate at an energy lower than that of BBN, thereby removing the need for extrapolation.
Method
By assuming a standard solar model (SSM) as well as the standard neutrino oscillation model, the authors determine that the predicted neutrino flux depends on a variety of parameters such as solar luminosity, age, opacity, and nuclear reaction rates. They then use calculations of the sensitivity of the neutrino flux to a variation in the 3He(α,γ)7Be reaction rate in order to write this rate in terms of the observed flux, the expected flux from the SSM, and the best theoretical reaction rate from the SSM. As shown in the figure below, their data point was measured at an energy almost a factor of ten below all previous measurements of the cross section.
Results
The cross section for 3He(α,γ)7Be was lower by about 5% compared to the value previously used in several BBN calculations, and the precision improved by almost a factor of three, mostly due to the elimination of extrapolation. Using this cross section, the authors updated the reaction rate in a public BBN code and found a small increase in the disagreement between the theoretical 7Li abundance and the abundance observed on the surface of metal-poor stars. However, they caution that further work on the SSM may change the error budget for the 3He(α,γ)7Be cross section.
This study both confirms and exacerbates the cosmic lithium problem (albeit slightly), yet it demonstrates how astrophysical processes even in our solar system can serve as probes into fundamental physics. BBN marks the boundary between precision and speculative cosmology, and the lithium problem restricts researchers from pushing this boundary further.
Symmetrybreaking  Fermilab/SLAC
A new result from the LHCb collaboration weakens previous hints at the existence of a new type of W boson.
A measurement released today by the LHCb collaboration dumped some cold water on previous results that suggested an expanded cast of characters mediating the weak force.
The weak force is one of the four fundamental forces, along with the electromagnetic, gravitational and strong forces. The weak force acts on quarks, fundamental building blocks of nature, through particles called W and Z bosons.
Just like a pair of gloves, particles can in principle be left-handed or right-handed. The new result from LHCb presents evidence that the W bosons that mediate the weak force are all left-handed; they interact only with left-handed quarks.
This weakens earlier hints from the Belle and BaBar experiments of the existence of right-handed W bosons.
The LHCb experiment at the Large Hadron Collider examined the decays of a heavy and unstable particle called the Λb, a baryon consisting of an up quark, a down quark and a bottom quark. Weak decays can change a bottom quark into either a charm quark or, about 1 percent of the time, into a lighter up quark. The LHCb experiment measured how often the bottom quark in this particle transformed into an up quark, resulting in a proton, muon and neutrino in the final state.
“We found no evidence for a new right-handed W boson,” says Marina Artuso, a professor of physics at Syracuse University and a scientist working on the LHCb experiment.
If the scientists on LHCb had seen bottom quarks turning into up quarks more often than predicted, it could have meant that a new interaction with right-handed W bosons had been uncovered, Artuso says. “But our measured value agreed with our model’s value, indicating that the right-handed universe may not be there.”
Earlier experiments by the Belle and BaBar collaborations studied transformations of bottom quarks into up quarks in two different ways: in studies of a single, specific type of transformation, and in studies that ideally included all the different ways the transformation occurs.
If nothing were interfering with the process (like, say, a right-handed W boson), then these two types of studies would give the same value of the bottom-to-up transformation parameter. However, that wasn’t the case.
The difference, however, was small enough that it could have come from the calculations used in interpreting the result. Today’s LHCb result makes it seem that right-handed W bosons might not exist after all, at least not in a way that is revealed in these measurements.
Michael Roney, spokesperson for the BaBar experiment, says, "This result not only provides a new, precise measurement of this important Standard Model parameter, but it also rules out one of the interesting theoretical explanations for the discrepancy... which still leaves us with this puzzle to solve."
ATLAS Experiment
To the best of our knowledge, it took the Universe about 13.798 billion years (plus or minus 37 million) to allow funny-looking condensates of mostly oxygen, carbon and hydrogen to ponder their own existence, the fate of the cosmos and all the rest. Some particularly curious specimens became scientists, founded CERN, dug several rings into the ground near Geneva, Switzerland, built the Large Hadron Collider in the biggest ring, and also installed a handful of large detectors along the way. All of that just in order to understand a bit better why we are here in the first place. Well, here we are!
CERN was founded after World War II as a research facility dedicated to peaceful science (in contrast to military research). Germany is one of CERN’s founding members and it is great to be a part of it. Thousands of scientists from over 100 countries are associated with CERN, including some nations that do not have the most relaxed diplomatic relationships with each other. Yet this doesn’t matter at CERN, as we are working hand in hand for the greater good of science and technology.
In the ATLAS collaboration, Germany has institutes from 14 different cities contributing to one of the largest and most complex detectors ever built. My institute, the Kirchhoff-Institut für Physik (KIP) in Heidelberg, was (and is) involved in the development and operation of the trigger mechanism that selects the interesting interactions from the not so interesting ones. Furthermore, we are doing analyses on the data to confirm the Standard Model of Particle Physics or – better yet – to find hints of excess events that point to dark matter particles (although we are still waiting for that…).
But let’s start with the trigger. The interaction rate (that is the rate at which bunches of LHC protons collide within the ATLAS detector) is way too high to save every single event. That is why a selection process is needed to decide which events to save and which to let go. This trigger mechanism is split up into several stages; the first of which handles such high rates that it needs to be implemented using custom hardware, as commercial PCs are not fast enough.
This first stage (also called the level-1 trigger) is what we work on here at KIP. For instance, together with a fellow student, I took care of one of the first timing checks after the long shutdown. This was important, because we wanted to know if the extensive maintenance that started after Run 1 (wherein we had personally installed new hardware) had somehow changed the timing behaviour of the level-1 trigger. Having a correctly timed system is crucial: if you are off by even a few nanoseconds, your trigger starts misbehaving and you might miss Higgs bosons or other interesting events.
In order to determine the timing of our system we used “splash” events. Instead of collisions at the centre of the detector, a “splash” is an energetic spray of a huge number of particles that comes from only one direction (more information on splashes here). They are great for timing the system, because they light up the entire detector like a Christmas tree. Also, they came from the first LHC beam since Run 1 – so it was the first opportunity to see the detector at work. This work was intense and cool. The beam splashes were scheduled over Easter, but we did not care. We gladly spent our holiday together in the ATLAS control room with other highly motivated people who sacrificed their long weekend for science. To see the first beams live in the control room after a long shutdown was a special experience. Extremely enthusiastic!
But of course, timing is not the only thing that has to be done. We also write the firmware for our hardware, code software (for instance, to monitor our system in real time), plan future upgrades (in both hardware and software) and do even more calibration. Each of these items is important for the operation of the detector and also very exciting to work on. I find it cool to know that the stuff I worked on helps keep ATLAS running.
Once we have the data – what do we do with it? Each student at KIP can choose which topic he or she wants to work on, yet the majority of us study processes that are related to electroweak interactions. This part of the Standard Model has become even more interesting after the discovery of the Higgs boson and has potential for the discovery of new physics, for example dark matter. Many models predict that dark matter interacts electroweakly, which is what I am working on. We can search for this in the data by looking for events from which we know that particles escaped the detector without interacting with it (leaving “missing transverse energy”; neutrinos do this too) and then comparing the results to models of electroweak coupling to dark matter. The discovery of dark matter would be awesome. The cosmological evidence for dark matter is convincing (for instance galactic rotation curves or the agreement between observations from the Planck satellite and models such as ΛCDM). It is just a matter of finding it…
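The "missing transverse energy" bookkeeping can be sketched in a few lines: it is the magnitude of the negative vector sum of the visible particles' transverse momenta. A minimal illustration (the (pT, φ) representation and the numbers are my own, not ATLAS code):

```python
import math

def missing_transverse_energy(particles):
    """Missing ET: magnitude of the (negative) vector sum of the visible
    particles' transverse momenta. Each particle is (pT in GeV, phi)."""
    px = sum(pt * math.cos(phi) for pt, phi in particles)
    py = sum(pt * math.sin(phi) for pt, phi in particles)
    return math.hypot(px, py)

# Two visible jets back to back balance each other, leaving ~no missing ET:
print(missing_transverse_energy([(100.0, 0.0), (100.0, math.pi)]))
# A single unbalanced jet implies something invisible recoiled against it:
print(missing_transverse_energy([(100.0, 0.0)]))
```

A large value in an otherwise unbalanced event is exactly the signature one scans for when invisible particles (neutrinos, or perhaps dark matter) carry momentum away.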
Going back to the beginning – literally. I am extremely curious to see what we – those funny-looking condensates of mostly oxygen, carbon and hydrogen – will find out about the Universe, its beginning, end, everything in between, composition, geometry, behaviour and countless other aspects. And CERN, and especially the ATLAS collaboration, is a great environment in which to do so.
arXiv blog
Every sporting event tells a story. Now the first computational analysis of “game stories” suggests that future sports could be designed to prefer certain kinds of stories over others.
“Serious sport is war minus the shooting.” Many athletes will agree with George Orwell’s famous observation. But many fans might add that the best sport is a form of unscripted storytelling: the dominant one-sided thrashing, the back-and-forth struggle, the improbable comeback, and so on.
Quantum Diaries
Most physicists agree: physics is far too exciting to be reserved for scientists alone. And for the first time, the European Physical Society (EPS) devoted an entire session to the subject on Saturday at its particle physics conference under way in Vienna. Several speakers reported on a variety of initiatives aimed at sharing the best of particle physics with the general public.
Most of the activities described target students of all ages, from both developed and developing countries. Kate Shaw, a researcher at the International Centre for Theoretical Physics (ICTP) in Trieste, Italy, stressed how science can help solve various environmental and development problems. The world needs more scientists, Kate said. Investing in education, as well as in technological and cultural institutions, plays a key role in developing a knowledge-based economy. Fundamental research stimulates the applied sciences through innovation, technology and engineering. She also stressed the importance of including all minorities and young people from low-income families.
Kate founded the "Physics without Frontiers" programme at the ICTP and has organized "Masterclasses" (see below) and other activities in Palestine, Egypt, Nepal, Lebanon, Vietnam and Algeria. Not only does she inspire young people to take up science, she also assists them, helping them get into master's and doctoral programmes. Kate received the EPS Outreach Award today "for her work disseminating particle physics in countries that do not have well-established programmes".
A Masterclass is a full day of interactive activities designed for school students. Scientists first describe particle physics and the experiment they work on. A shared meal encourages discussion before the students dive into real analyses with real data. Every year, an international masterclass brings together about 10,000 students from 42 countries. They join scientists from 200 nearby universities or laboratories to perform genuine physics measurements in international collaboration with the other students. Why not take part in a Masterclass?
These students, as well as other groups, can also take part in a virtual visit of a physics experiment. A scientist on site at the laboratory interacts with the group before showing them around the facilities via a live video link.
Looking for an inspiring activity that is simple, cheap and accessible for a special event, a conference or a group? Invite them to a virtual visit at CERN (ATLAS or CMS). In January, for instance, 500 students in Mumbai took advantage of their "visit" to the IceCube experiment, located 12,000 km away at the South Pole, to bombard the scientists with questions.
CERN's Teacher Programme has already welcomed a thousand people. High-school teachers from all over the world are dazzled for several weeks, to make sure they will share their enthusiasm with their students when they return home.
Public lectures and popular science books target a more general audience. Many scientists, myself included, will be happy to come and give a talk near you. Just ask.
Pauline Gagnon
To be alerted of new postings, follow me on Twitter: @GagnonPauline or sign up on this mailing list to receive an email notification. You can also visit my website.
July 25, 2015
arXiv blog
The best of the rest from the Physics arXiv this week.
Robust Performance-Driven 3D Face Tracking in Long-Range Depth Scenes
July 24, 2015
Sean Carroll  Preposterous Universe
The question of how information escapes from evaporating black holes has puzzled physicists for almost forty years now, and while we’ve learned a lot we still don’t seem close to an answer. Increasingly, people who care about such things have been taking more seriously the intricacies of quantum information theory, and learning how to apply that general formalism to the specific issues of black hole information.
Now two students and I have offered a small contribution to this effort. Aidan Chatwin-Davies is a grad student here at Caltech, while Adam Jermyn was an undergraduate who has now gone on to do graduate work at Cambridge. Aidan came up with a simple method for getting out one “quantum bit” (qubit) of information from a black hole, using a strategy similar to “quantum teleportation.” Here’s our paper that just appeared on the arXiv:
How to Recover a Qubit That Has Fallen Into a Black Hole
Aidan Chatwin-Davies, Adam S. Jermyn, Sean M. Carroll

We demonstrate an algorithm for the retrieval of a qubit, encoded in spin angular momentum, that has been dropped into a no-firewall unitary black hole. Retrieval is achieved analogously to quantum teleportation by collecting Hawking radiation and performing measurements on the black hole. Importantly, these methods only require the ability to perform measurements from outside the event horizon and to collect the Hawking radiation emitted after the state of interest is dropped into the black hole.
It’s a very specific — i.e. not very general — method: you have to have done measurements on the black hole ahead of time, and then drop in one qubit, and we show how to get it back out. Sadly it doesn’t work for two qubits (or more), so there’s no obvious way to generalize the procedure. But maybe the imagination of some clever person will be inspired by this particular thought experiment to come up with a way to get out two qubits, and we’ll be off.
I’m happy to host this guest post by Aidan, explaining the general method behind our madness.
If you were to ask someone on the bus which of Stephen Hawking’s contributions to physics he or she thought was most notable, the answer that you would almost certainly get is his prediction that a black hole should glow as if it were an object with some temperature. This glow is made up of thermal radiation which, unsurprisingly, we call Hawking radiation. As the black hole radiates, its mass slowly decreases and the black hole decreases in size. So, if you waited long enough and were careful not to enlarge the black hole by throwing stuff back in, then eventually it would completely evaporate away, leaving behind nothing but a bunch of Hawking radiation.
At first glance, this phenomenon of black hole evaporation challenges a central notion in quantum theory, which is that it should not be possible to destroy information. Suppose, for example, that you were to toss a book, or a handful of atoms in a particular quantum state, into the black hole. As the black hole evaporates into a collection of thermal Hawking particles, what happens to the information that was contained in that book or in the state of (what were formerly) your atoms? One possibility is that the information actually is destroyed, but then we would have to contend with some pretty ugly foundational consequences for quantum theory. Instead, it could be that the information is preserved in the state of the leftover Hawking radiation, albeit highly scrambled and difficult to distinguish from a thermal state. Besides being very pleasing on philosophical grounds, we also have evidence for the latter possibility from the AdS/CFT correspondence. Moreover, if the process of converting a black hole to Hawking radiation conserves information, then a stunning result of Hayden and Preskill says that for sufficiently old black holes, any information that you toss in comes back out almost as fast as possible!
Even so, exactly how information leaks out of a black hole and how one would go about converting a bunch of Hawking radiation to a useful state is quite mysterious. On that note, what we did in a recent piece of work was to propose a protocol whereby, under very modest and special circumstances, you can toss one qubit (a single unit of quantum information) into a black hole and then recover its state, and hence the information that it carried.
More precisely, the protocol describes how to recover a single qubit that is encoded in the spin angular momentum of a particle, i.e., a spin qubit. Spin is a property that any given particle possesses, just like mass or electric charge. For particles that have spin equal to 1/2 (like those that we consider in our protocol), at least classically, you can think of spin as a little arrow which points up or down and says whether the particle is spinning clockwise or counterclockwise about a line drawn through the arrow. In this classical picture, whether the arrow points up or down constitutes one classical bit of information. According to quantum mechanics, however, spin can actually exist in a superposition of being part up and part down; these proportions constitute one qubit of quantum information.
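The "part up and part down" proportions can be made concrete: a qubit is just a normalized pair of complex amplitudes. A tiny sketch (the particular superposition chosen here is arbitrary):

```python
import math

# A spin qubit as two complex amplitudes (alpha for "up", beta for "down");
# the normalization |alpha|^2 + |beta|^2 = 1 must hold, and the pair of
# proportions constitutes one qubit of quantum information.
alpha, beta = 1 / math.sqrt(2), 1j / math.sqrt(2)  # an equal superposition

p_up = abs(alpha) ** 2    # probability of measuring spin up
p_down = abs(beta) ** 2   # probability of measuring spin down
print(p_up, p_down)       # a 50/50 chance of up or down
print(math.isclose(p_up + p_down, 1.0))  # normalization check: True
```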
So, how does one throw a spin qubit into a black hole and get it back out again? Suppose that Alice is sitting outside of a black hole, the properties of which she is monitoring. From the outside, a black hole is characterized by only three properties: its total mass, total charge, and total spin. This latter property is essentially just a much bigger version of the spin of an individual particle and will be important for the protocol.
Next, suppose that Alice accidentally drops a spin qubit into the black hole. First, she doesn’t panic. Instead, she patiently waits and collects one particle of Hawking radiation from the black hole. Crucially, when a Hawking particle is produced by the black hole, a bizarro version of the same particle is also produced, but just behind the black hole’s horizon (boundary) so that it falls into the black hole. This bizarro ingoing particle is the same as the outgoing Hawking particle, but with opposite properties. In particular, its spin state will always be flipped relative to the outgoing Hawking particle. (The outgoing Hawking particle and the ingoing particle are entangled, for those in the know.)
The picture so far is that Alice, who is outside of the black hole, collects a single particle of Hawking radiation whilst the spin qubit that she dropped and the ingoing bizarro Hawking particle fall into the black hole. When the dropped particle and the bizarro particle fall into the black hole, their spins combine with the spin of the black hole—but remember! The bizarro particle’s spin was highly correlated with the spin of the outgoing Hawking particle. As such, the new combined total spin of the black hole becomes highly correlated with the spin of the outgoing Hawking particle, which Alice now holds. So, Alice measures the black hole’s new total spin state. Then, essentially, she can exploit the correlations between her held Hawking particle and the black hole to transfer the old spin state of the particle that she dropped into the hole to the Hawking particle that she now holds. Alice’s lost qubit is thus restored. Furthermore, Alice didn’t even need to know the precise state that her initial particle was in to begin with; the qubit is recovered regardless!
That’s the protocol in a nutshell. If the words “quantum teleportation” mean anything to you, then you can think of the protocol as a variation on the quantum teleportation protocol where the transmitting party is the black hole and measurement is performed in the total angular momentum basis instead of the Bell basis. Of course, this is far from a resolution of the information problem for black holes. However, it is certainly a neat trick which shows, in a special set of circumstances, how to “bounce” a qubit of quantum information off of a black hole.
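The black-hole version cannot be run on a laptop, but the ordinary quantum teleportation protocol it varies is easy to simulate; a minimal numpy sketch, assuming ideal states and measurements (function names and structure are mine):

```python
import numpy as np

def teleport_outcomes(alpha, beta):
    """Simulate one-qubit teleportation: return the receiver's corrected
    (unnormalized) state for each of the four Bell-measurement outcomes.

    Qubit ordering |q0 q1 q2>: q0 holds the state to teleport, while
    q1 and q2 share the Bell pair |Phi+> = (|00> + |11>)/sqrt(2).
    """
    psi = np.array([alpha, beta], dtype=complex)               # to send
    bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # shared pair
    state = np.kron(psi, bell).reshape(4, 2)  # rows: (q0,q1), cols: q2

    s = 1 / np.sqrt(2)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    # Bell states of (q0, q1) and the correction the receiver applies:
    outcomes = [
        (s * np.array([1, 0, 0, 1], dtype=complex), np.eye(2)),  # Phi+ -> I
        (s * np.array([1, 0, 0, -1], dtype=complex), Z),         # Phi- -> Z
        (s * np.array([0, 1, 1, 0], dtype=complex), X),          # Psi+ -> X
        (s * np.array([0, 1, -1, 0], dtype=complex), Z @ X),     # Psi- -> ZX
    ]
    # Project (q0,q1) onto each Bell state, then correct the residual q2.
    return [corr @ (b.conj() @ state) for b, corr in outcomes]

# Every outcome leaves the receiver holding the original qubit (times 1/2,
# since each of the four Bell results occurs with probability 1/4):
for out in teleport_outcomes(0.6, 0.8j):
    print(np.allclose(out, 0.5 * np.array([0.6, 0.8j])))  # True each time
```

The paper's variation swaps the Bell-basis measurement for a measurement in the total angular momentum basis, with the black hole playing the role of the transmitting party.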
arXiv blog
Matching an infrared image of a face to its visible light counterpart is a difficult task, but one that deep neural networks are now coming to grips with.
One problem with infrared surveillance videos or infrared CCTV images is that it is hard to recognize the people in them. Faces look different in the infrared and matching these images to their normal appearance is a significant unsolved challenge.
July 23, 2015
Symmetrybreaking  Fermilab/SLAC
The Japanbased neutrino experiment has seen its first three candidate electron antineutrinos.
Scientists on the T2K neutrino experiment in Japan announced today that they have spotted their first possible electron antineutrinos.
When the T2K experiment first began taking data in January 2010, it studied a beam of neutrinos traveling 295 kilometers from the J-PARC facility in Tokai, on the east coast, to the Super-Kamiokande detector in Kamioka in western Japan. Neutrinos rarely interact with matter, so they can stream straight through the earth from source to detector.
From May 2014 to June 2015, scientists used a different beamline configuration to produce predominantly the antimatter partners of neutrinos, antineutrinos. After scientists eliminated signals that could have come from other particles, three candidate electron antineutrino events remained.
T2K scientists hope to determine if there is a difference in the behavior of neutrinos and antineutrinos.
“That is the holy grail of neutrino physics,” says Chang Kee Jung of State University of New York at Stony Brook, who until recently served as international cospokesperson for the experiment.
If scientists caught neutrinos and their antiparticles acting differently, it could help explain how matter came to dominate over antimatter after the big bang. The big bang should have produced equal amounts of each, which would have annihilated one another completely, leaving nothing to form our universe. And yet, here we are; scientists are looking for a way to explain that.
“In the current paradigm of particle physics, this is the best bet,” Jung says.
Scientists have previously seen differences in the ways that other matter and antimatter particles behave, but the differences have never been enough to explain our universe. Whether neutrinos and antineutrinos act differently is still an open question.
Neutrinos come in three types: electron neutrinos, muon neutrinos and tau neutrinos. As they travel, they morph from one type to another. T2K scientists want to know if there’s a difference between the oscillations of muon neutrinos and muon antineutrinos. A possible upgrade to the Super-Kamiokande detector could help with future data-taking.
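The morphing described here follows a simple formula in the two-flavor approximation. Below is a minimal sketch; the function name and the mixing parameters plugged in are illustrative placeholders, not T2K's measured values:

```python
import math

def oscillation_probability(theta, delta_m2_ev2, length_km, energy_gev):
    """Two-flavor appearance probability P(nu_mu -> nu_e).

    Standard approximation: P = sin^2(2*theta) * sin^2(1.267 * dm^2 * L / E),
    with dm^2 in eV^2, L in km, and E in GeV.
    """
    return (math.sin(2 * theta) ** 2
            * math.sin(1.267 * delta_m2_ev2 * length_km / energy_gev) ** 2)

# Illustrative numbers in the right ballpark for T2K's 295 km baseline and
# sub-GeV beam energy; theta and dm^2 here are placeholders, not fit results.
p = oscillation_probability(theta=0.15, delta_m2_ev2=2.5e-3,
                            length_km=295, energy_gev=0.6)
print(f"P(nu_mu -> nu_e) ~ {p:.3f}")
```

Comparing such probabilities measured separately for neutrinos and antineutrinos is exactly the kind of difference T2K is after.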
One other currently operating experiment can look for this matter-antimatter difference: the NOvA experiment, which studies a beam that originates at Fermilab near Chicago with a detector near the Canadian border in Minnesota.
“This result shows the principle of the experiment is going to work,” says Indiana University physicist Mark Messier, cospokesperson for the NOvA experiment. “With more data, we will be on the path to answering the big questions.”
It might take T2K and NOvA data combined to get scientists closer to the answer, Jung says, and it will likely take until the construction of the even larger DUNE neutrino experiment in South Dakota to get a final verdict.
Ben Still  Neutrino Blog
Antiparticles have opposite properties, such as electric charge. 
Antiquarks and Anticolour
In the last post I mentioned that particles made from quarks must be neutral in strong charge. This can be achieved if each quark is colour charged with a primary colour of light (red, green, or blue) so that the overall colour charge of the particle is white. There is another way to build particles with a neutral, white, overall strong charge, but for this we must talk about antiparticles.
The three generations of fundamental particle also have mirror versions of themselves: the antiparticles. When you look into a mirror, left becomes right but you still look the same size. A similar thing is true in the particle world: mirror antiparticle versions of particles have the same mass, but they see the world in opposite ways in how they feel and interact through the forces of nature. If an electron has a negative electric charge, then its antimatter version, the positron, has a positive electric charge. The anti-electron (positron) was first seen in experiment in 1932 (the same year the neutron was discovered), and since then it has been confirmed that antiparticles do indeed exist for all three generations of particle.
Rainbow spectrum of white light. 
Mixing the three primary colours of light to make the secondary colours and white. 
Magenta: the antigreen. Image "borrowed" from Steve Mould 
The whole set of quarks and antiquarks that are known to exist; they are one half of the building blocks that make up all particles in our visible Universe. 
Just like the Baryons, there is a pattern that Gell-Mann theorised in his Eightfold Way for the possible Meson particles that can be made from up, down, and strange quarks: the Meson Octet (below). Mesons do not survive very long, because particles and antiparticles are not very stable around one another. Generally, when a particle meets its own antiparticle they annihilate one another to produce pure energy. Mesons, as they are constructed from a quark and an antiquark, take the first opportunity available to turn into either pure energy or a number of lighter particles. The middle row of the Meson Octet are particles called pions (π), which play a role in keeping protons and neutrons together in the nucleus and also in the production of neutrino particle beams.
The Meson Octet shows all possible Mesons that can be constructed from the up, down, and strange quarks and the anti-up, anti-down, and anti-strange antiquarks. 
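As a toy check of this quark-antiquark picture, one can enumerate all nine light quark-antiquark pairings and verify that each carries a whole-number electric charge. This is only an illustrative sketch (the `meson_charges` mapping is my own construction, ignoring which combinations mix into which physical mesons):

```python
from fractions import Fraction
from itertools import product

# Quark electric charges in units of the proton charge.
QUARK_CHARGE = {"u": Fraction(2, 3), "d": Fraction(-1, 3), "s": Fraction(-1, 3)}

# A meson pairs a quark with an antiquark; the antiquark contributes the
# opposite of its quark's charge, so the meson charge is q(quark) - q(antiquark).
meson_charges = {}
for q, aq in product(QUARK_CHARGE, repeat=2):
    meson_charges[f"{q} anti-{aq}"] = QUARK_CHARGE[q] - QUARK_CHARGE[aq]

for name, charge in meson_charges.items():
    print(f"{name}: charge {charge}")
```

All nine combinations come out with charge -1, 0, or +1, as observed for real mesons such as the pions.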


Ben Still  Neutrino Blog
The two charges of the electromagnetic force and the three charges of the strong force. 
Rule of three …
Overlapping light 
It doesn't matter which of the quarks has which strong charge, just that there is at least one of each primary colour. 
We can then say that the stable proton is white, as green plus blue plus red light equals white. The same rule applies for all other particles made in a similar way, the group of particles known as Baryons. Almost any combination of three quarks can create a Baryon, as long as the Baryon is white in strong charge. Remember, I am in no way saying that quarks have colour in the traditional sense, because we cannot see quarks in the traditional sense; assigning them a colour is an analogy that fits the way in which the strong force behaves. Below are diagrams showing Murray Gell-Mann's mathematical idea for explaining the experimental data of the time, called the Eightfold Way. These two diagrams show all the ways you can create Baryons from up, down, and strange quark building blocks. The particle made of three strange quarks at the very bottom of the second diagram (the Baryon Decuplet) is the Ω⁻ particle that Gell-Mann predicted to exist, and its discovery helped win him the Nobel Prize in 1969.
The Baryon Octet: The central combination of quarks manifests itself as two distinct types of particle, so there are eight in all, hence the name Octet. 
The Baryon Decuplet: Shows more possible Baryons made using the up, down, and strange quarks. In the 1960s the heaviest known quark was the strange quark. 
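The counting behind the Decuplet can be sketched in a few lines: choosing three quarks from {u, d, s} with repetition allowed gives exactly ten multisets. This is only an illustrative count that ignores spin (which is what separates the Octet from the Decuplet):

```python
from fractions import Fraction
from itertools import combinations_with_replacement

# Quark electric charges in units of the proton charge.
QUARK_CHARGE = {"u": Fraction(2, 3), "d": Fraction(-1, 3), "s": Fraction(-1, 3)}

# Multisets of three quarks drawn from {u, d, s}: C(3+3-1, 3) = 10 of them,
# matching the ten slots of the Baryon Decuplet (spin is ignored here).
baryons = list(combinations_with_replacement("uds", 3))

for combo in baryons:
    charge = sum(QUARK_CHARGE[q] for q in combo)
    print("".join(combo), "has electric charge", charge)
```

The uuu combination carries charge +2 (the Δ⁺⁺) and the sss combination carries charge -1, matching the Ω⁻ at the bottom of the diagram.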
July 22, 2015
Symmetry Breaking  Fermilab/SLAC
The Super-Kamiokande collaboration has approved a project to improve the sensitivity of the Super-K neutrino detector.
Super-Kamiokande, buried under about 1 kilometer of mountain rock in Kamioka, Japan, is one of the largest neutrino detectors on Earth. Its tank is full of 50,000 tons (about 13 million gallons) of ultrapure water, which it uses to search for signs of notoriously difficult-to-catch particles.
Recently members of the Super-K collaboration gave the go-ahead to a plan to make the detector a thousand times more sensitive with the help of a chemical compound called gadolinium sulfate.
Neutrinos are made in a variety of natural processes. They are also produced in nuclear reactors, and scientists can create beams of neutrinos in particle accelerators. These particles are electrically neutral, have little mass and interact only weakly with matter—characteristics that make them extremely difficult to detect even though trillions fly through any given detector each second.
Super-K catches about 30 neutrinos that interact with the hydrogen and oxygen in the water molecules in its tank each day. It keeps its water ultrapure with a filtration system that removes bacteria, ions and gases.
Scientists take extra precautions both to keep the ultrapure water clean and to avoid contact with the highly corrosive liquid.
“Somebody once dropped a hammer into the tank,” says experimentalist Mark Vagins of the University of Tokyo's Kavli Institute for the Physics and Mathematics of the Universe. “It was chrome-plated to look nice and shiny. Eventually we found the chrome and not the hammer.”
When a neutrino interacts in the Super-K detector, it creates other particles that travel through the water faster than light travels in water, creating a blue flash of Cherenkov light. The tank is lined with about 13,000 phototube detectors that can see the light.
Looking for relic neutrinos
On average, several massive stars explode as supernovae every second somewhere in the universe. If theory is correct, all supernovae to have exploded throughout the universe’s 13.8 billion years have thrown out trillions upon trillions of neutrinos. That means the cosmos would glow in a faint background of relic neutrinos—if scientists could just find a way to see even a fraction of those ghostlike particles.
For about half of the year, the Super-K detector is used in the T2K experiment, which produces a beam of neutrinos in Tokai, Japan, some 183 miles (295 kilometers) away, and aims it at Super-K. During the trip to the detector, some of the neutrinos change from one type of neutrino to another. T2K studies that change, which could give scientists hints as to why our universe holds so much more matter than antimatter.
But a T2K beam doesn’t run continuously during that half year. Instead, researchers send a beam pulse every few seconds, and each pulse lasts just a few microseconds. Super-K still detects neutrinos from natural processes while scientists are running T2K.
In 2002, at a neutrino meeting in Munich, Germany, experimentalist Vagins and theorist John Beacom of The Ohio State University began thinking of how they could better use SuperK to spy the universe’s relic supernova neutrinos.
“For at least a few hours we were standing there in the Munich subway station somewhere deep underground, hatching our underground plans,” Beacom says.
To pick out the few signals that come from neutrino events, you have to battle a constant clatter of background noise from other particles. Other incoming cosmic particles such as muons (the electron’s heavier cousin) or even electrons emitted by naturally occurring radioactive substances in rock can produce signals that look like the ones scientists hope to find from neutrinos. No one wants to claim a discovery that later turns out to be a signal from a nearby rock.
Super-K already guards against some of this background noise by being buried underground. But some unwanted particles can get through, and so scientists need ways to separate the signals they want from deceiving background signals.
Vagins and Beacom settled on an idea—and a name for the next stage of the experiment: Gadolinium Antineutrino Detector Zealously Outperforming Old Kamiokande, Super! (GADZOOKS!). They proposed to add 100 tons of the compound gadolinium sulfate—Gd2(SO4)3—to Super-K’s ultrapure water.
When a neutrino interacts with a molecule, it releases a charged lepton (a muon, electron, tau or one of their antiparticles) along with a neutron. Neutrons are thousands of times more likely to interact with the gadolinium sulfate than with another water molecule. So when a neutrino traverses Super-K and interacts with a molecule, its muon, electron, or antiparticle (Super-K can’t see tau particles) will generate a first pulse of light, and the neutron will create a second pulse of light: “two pulses, like a knock-knock,” Beacom says.
By contrast, a background muon or electron will make only one light pulse.
To extract only the neutrino interactions, scientists will use GADZOOKS! to focus on the two-signal events and throw out the single-signal events, reducing the background noise considerably.
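The two-pulse idea can be caricatured as a simple coincidence cut. This is purely a toy sketch, not Super-K's actual analysis code, and the 200-microsecond window is an assumed number chosen only for illustration:

```python
# Toy "knock-knock" coincidence cut: an event is a list of pulse times in
# microseconds, and we keep only events whose prompt pulse is followed by a
# delayed pulse within a capture-time window. The window below is an
# assumption made for this sketch, not a Super-K parameter.
WINDOW_US = 200.0

def passes_coincidence(pulse_times_us):
    """True if any two consecutive pulses fall within the coincidence window."""
    times = sorted(pulse_times_us)
    return any(0 < t2 - t1 <= WINDOW_US
               for t1, t2 in zip(times, times[1:]))

events = [
    [12.0],          # single pulse: background-like, rejected
    [30.0, 55.0],    # two pulses 25 us apart: accepted
    [5.0, 900.0],    # second pulse far outside the window: rejected
]
kept = [e for e in events if passes_coincidence(e)]
print(kept)  # only the [30.0, 55.0] event survives
```

A single-pulse background muon or electron fails the cut, while a neutrino's lepton-then-neutron signature passes.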
The prototype
But you can’t just add 100 tons of a chemical compound to a huge detector without doing some tests first. So Vagins and colleagues built a scaled-down version, which they called Evaluating Gadolinium’s Action on Detector Systems (EGADS). At 0.4 percent the size of Super-K, it uses 240 of the same phototubes and 200 tons (52,000 gallons) of ultrapure water.
Over the past several years, Vagins’ team has worked extensively to show the benefits of their idea. One aspect of their efforts has been to build a filtration system that removes everything from the ultrapure water except for the gadolinium sulfate. They presented their results at a collaboration meeting in late June.
On June 27, the Super-K team officially approved the proposal to add gadolinium sulfate but renamed the project SuperK-Gd. The next steps are to drain Super-K to check for leaks and fix them, replace any burned-out phototubes, and then refill the tank.
But this process must be coordinated with T2K, says Masayuki Nakahata, the Super-K collaboration spokesperson.
Once the tank is refilled with ultrapure water, scientists will add in the 100 tons of gadolinium sulfate. Once the compound is added, the current filtration system could remove it any time researchers would like, Vagins says.
“But I believe that once we get this into Super-K and we see the power of it, it’s going to become indispensable,” he says. “It’s going to be the kind of thing that people wouldn’t want to give up the extra physics once they’re used to it.”
ZapperZ  Physics and Physicists
Zz.
July 21, 2015
Lubos Motl  string vacua and pheno
A random picture of intersecting D-branes
Alternatively, if that bump were real, it could have been a sign of compositeness, a heavy scalar (instead of a spin-one boson), or a triboson pretending to be a diboson. However, on Sunday, six string phenomenologists proposed a much more exciting explanation:
Stringy origin of diboson and dijet excesses at the LHC
The multinational corporation (SUNY, Paris, Munich, Taiwan, Bern, Boston) consisting of Anchordoqui, Antoniadis, Goldberg, Huang, Lüst, and Taylor argues that the bump has the required features to grow into the first package of exclusive collider evidence in favor of string theory – yes, I mean the theory that stinky brainless chimps yell to be disconnected from experiments.
Why would such an ambitious conclusion follow from such a seemingly innocent bump on the road? We need just a little bit of patience to understand this point.
They agree with the defenders of the left-right-symmetric explanation of the bump that the particle that decays in order to manifest itself as the bump is a new spin-one boson, namely a \(Z'\). But its corresponding \(U(1)_a\) symmetry may be anomalous: there may exist a mixed anomaly in the triangle\[
U(1)_a SU(2)_L SU(2)_L
\] with two copies of the regular electroweak \(SU(2)\) gauge group. An anomaly in the gauge group would mean that the field theory is inconsistent. In the characteristic field theory constructions, the right multiplicities and charges of the spectrum are needed to cancel the anomaly. However, string theory has one more trick that may cancel gauge anomalies. It's a trick that actually launched the First Superstring Revolution in 1984.
It's the Green-Schwarz mechanism.
In 1984, Green and Schwarz figured out how the anomaly cancellation works in type I superstring theory with the \(SO(32)\) gauge group – where the anomaly is given by a hexagon diagram in \(d=10\), much like it needs a triangle in \(d=4\) – but the same trick may apply even after compactification. The new spin-one gauge field is told to transform surprisingly nontrivially under a gauge invariance of a seemingly independent field, a two-index field, and the hexagon is then cancelled against a 2+4 tree diagram with the exchange of the two-index field.
In the \(d=4\) case, we may see that this Green-Schwarz mechanism makes the previously anomalous \(U(1)_a\) gauge boson massive – and the "Stückelberg" mass is just an order of magnitude or so lower than the string scale (which they therefore assume to be \(M_s\approx 20\TeV\)). This is normally viewed as an extremely high energy scale, which is why these possibilities don't enter the conventional quantum field theoretical models.
But string theory may also be around the corner – in the case of some stringy braneworld models, particularly the intersecting braneworlds. In these braneworlds, which are very concrete stringy realizations of the "old large dimensions" paradigm, the Standard Model fields live on stacks of branes; they have the form of open strings whose basic duty is to stay attached to a D-brane. Some string modes (particles) live near the intersections of the D-brane stacks because one of their endpoints is attached to one stack and the other to the other stack, and the strings always want to be short, so as not to carry insanely high energy.
To make the story short, the anomaly-producing triangle diagram may also be interpreted as the Feynman diagram for a decay of the new \(Z'\) boson of the \(U(1)_a\) group into two \(SU(2)_L\) gauge bosons. When the latter pair is decomposed into the basis of the usual particles we know, the decays may be\[
\eq{
Z' &\to W^+ W^-,\\
Z' &\to Z^0 Z^0,\\
Z' &\to Z^0 \gamma
}
\] All these three decays are made unavoidable in the Green-Schwarz-mechanism-based models – and the relative branching ratios are pretty much given. Note that \(W^0\equiv W_3\) is a mixture of \(Z^0\) and \(\gamma\), so all three pairs created from \(Z^0\) and \(\gamma\) would be possible, but the Landau-Yang theorem implies that the \(\gamma\gamma\) decay of \(Z'\) is forbidden (the rate is zero) for symmetry reasons.
Their storyline is so predictive that they may tell you that the new coupling constant is \(g_a\approx 0.36\), too.
So if their explanation is right, the bump near \(2\TeV\) will be growing – it may already be growing now: the first Run II results will be announced at EPS-HEP in Vienna, a meeting that starts tomorrow (follow the conference website)! Only about 1 inverse femtobarn of \(13\TeV\) data has been accumulated in 2015 so far – much less than the 20-30/fb at \(8\TeV\) in 2012. And if the authors of the paper discussed here are right, one more thing is true. The decay channel \(Z\gamma\) of the new particle will soon be detected as well – and it will be a smoking gun for low-scale string theory!
No known consistent field theory predicts a nonzero \(Z\gamma\) decay rate of the new massive gauge boson. The string-theoretical Green-Schwarz mechanism mixes what looks like a field-theoretical tree-level diagram with a one-loop diagram. Their being on equal footing implies that the regular QFT-like perturbation theory breaks down and instead, there is a hidden loop inside a vertex of the would-be tree-level diagram. This loop can't be expanded in terms of regular particles in a loop, however: it implies some stringy compositeness of the particles and processes.
A smoking gun. This particular one is a smoking gun of something other than string theory, however.
This sounds too good to be true, but it may be true. I still think it's very unlikely, but these smart authors obviously think it's a totally sensible scenario. It's hard to figure out whether they really impartially believe that these low-scale intersecting braneworlds are likely, or whether their belief mostly boils down to wishful thinking.
If these ideas were right, we could observe megatons of stringy physics with finite-price colliders!
by Luboš Motl (noreply@blogger.com) at July 21, 2015 05:13 PM
Clifford V. Johnson  Asymptotia
Lubos Motl  string vacua and pheno
Identify an unknown decay phenomenon
Again, you will submit a file in which each "test" collision is labeled as either "interesting" or "uninteresting". But in this case, you may actually discover a phenomenon that is believed not to exist at the LHC, according to the state-of-the-art theory (the Standard Model)!
The Higgs contest was all about simulated data. The events looked real but they were not real, and several technicalities were switched off in the simulation, to simplify things. Incredibly enough, here you are going to work with the real data from the relevant detector at the LHC, the LHCb detector: the LHCb collaboration is the co-organizer.
For each test event, you will have to announce a probability \(P_i\) that the event involved the following decay of a tau:\[
\tau^\pm \to \mu^\pm \mu^+ \mu^-
\] The tau lepton decayed to three muons. The charge is conserved but the lepton number is not: among the decay products, the negative muon and the positive muon cancel but there's still another muon – and it was created from a tau. \(L_\mu\) and \(L_\tau\) conservation laws were violated.
At the leading orders of the Standard Model, the probability of such a decay is zero. I believe that the actual predicted rate is nonzero but unmeasurably tiny. New physics would allow this "flavor-violating" process to take place, however.
To show you the unexpected relationships between different TRF blog posts, let me tell you that the blog post right before this one talked about the \(Z'\) boson, and this new spin-one particle could actually cause this "so far nonexistent" process.
In fact, this option appears in the logo of the contest! The \(\tau^\pm\) lepton decays to one \(\mu^\pm\) and a virtual \(Z'\), and the virtual \(Z'\) decays to \(\mu^+\mu^-\). The first vertex violates the flavor numbers, but it's not so shocking for a new heavy particle to couple to leptons in this "non-diagonal" way.
The LHCb contest is harder than the Higgs contest in several respects, such as:
- lower prizes: $7k, $5k, $3k for the winner, silver medal, and bronze medal. It's harder to write difficult programs if you're less financially motivated. But LHCb is smaller than ATLAS so you should have expected that. ;)
- no sharing of scripts: you won't be permitted to share your scripts for this contest, so everyone has to start from "scratch". Sadly, you may still use your programs and experience from other projects, so the machine learning folks will still have a huge advantage, perhaps a bigger one than in the Higgs contest.
- agreement and correlation pre-checks: to make things worse, your submission won't be counted at all if it fails to pass two tests: the agreement test and the correlation test. This feature of the contest, along with the previous one, will make the leaderboard much smaller than in the Higgs contest. The two tests reflect the fact that the dataset is composed of several groups of events – real collisions, simulated realistic ones, and simulated new-physics ones for verification purposes.
- larger files to download: in total, you have to download 400 MB worth of ZIP files that decompress to many gigabytes.
- messy details of the LHC are kept: lots of the technical details that make the real life of experimental physicists hard were kept – although translated to machine-learning-friendly conventions. Also, the evaluation metric is more sophisticated – some weighted area under the curve (depicting the graph relating the number of false positives and the false negatives).
- and I forgot about 3 more complications that have scared me...
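The metric mentioned above is a weighted variant whose exact weights aren't spelled out here, but the plain (unweighted) area under the ROC curve can be sketched as the probability that a randomly chosen signal event outscores a randomly chosen background event:

```python
# Sketch of a plain (unweighted) ROC AUC; the contest's actual metric is a
# weighted variant whose weights aren't described in the text above.
def auc(scores, labels):
    """AUC = P(random signal score > random background score),
    counting ties as half a win. Labels: 1 = signal, 0 = background."""
    signal = [s for s, l in zip(scores, labels) if l == 1]
    background = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((s > b) + 0.5 * (s == b)
               for s in signal for b in background)
    return wins / (len(signal) * len(background))

print(auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]))  # perfect separation -> 1.0
```

A classifier that perfectly separates the two classes scores 1.0; random guessing hovers around 0.5.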
So far, there are only 13 people in the leaderboard and it's plausible that the total number will remain very low throughout the contest. If you write a single script that passes the tests at all, chances are high that you will be immediately placed very high in the leaderboard.
At any rate, you have 2 months left to win this contest and proudly announce it to the world on this blog and in The Wall Street Journal. Your solution may be much more useful than in the Higgs case; technicalities weren't eliminated, so your ideas may be used directly. And what you may discover is a genuinely new, surprising process – but one that may actually already be present in the LHCb data (as the hints of a \(Z'\) and flavor-violating Higgs decays suggest).
Good luck.
Correction: the Higgs money was just $7k, $4k, $2k, so this contest actually has better prizes. The money comes from CERN, Intel, two subdivisions of Yandex (a Russian Google competitor), and universities in Zurich, Warwick, Poland, and Russia.
by Luboš Motl (noreply@blogger.com) at July 21, 2015 01:55 PM
Symmetry Breaking  Fermilab/SLAC
Explore the elementary particles that make up our universe.
The Standard Model is a kind of periodic table of the elements for particle physics. But instead of listing the chemical elements, it lists the fundamental particles that make up the atoms that make up the chemical elements, along with any other particles that cannot be broken down into any smaller pieces.
The complete Standard Model took a long time to build. Physicist J.J. Thomson discovered the electron in 1897, and scientists at the Large Hadron Collider found the final piece of the puzzle, the Higgs boson, in 2012.
Use this interactive model (based on a design by Walter Murch for the documentary Particle Fever) to explore the different particles that make up the building blocks of our universe.
Quarks
Up Quark
Discovered in: 1968
Mass: 2.3 MeV
Discovered at: SLAC
Charge: +2/3
Generation: First
Spin: 1/2
About: Up and down quarks make up protons and neutrons, which make up the nucleus of every atom.
Charm Quark
Discovered in: 1974
Mass: 1.275 GeV
Discovered at: Brookhaven & SLAC
Charge: +2/3
Generation: Second
Spin: 1/2
About: In 1974, two independent research groups conducting experiments at two independent labs discovered the charm quark, the fourth quark to be found. The surprising discovery forced physicists to reconsider how the universe works at the smallest scale.
Top Quark
Discovered in: 1995
Mass: 173.21 GeV
Discovered at: Fermilab
Charge: +2/3
Generation: Third
Spin: 1/2
About: The top quark is the heaviest quark discovered so far. It has about the same weight as a gold atom. But unlike an atom, it is a fundamental, or elementary, particle; as far as we know, it is not made of smaller building blocks.
Down Quark
Discovered in: 1968
Mass: 4.8 MeV
Discovered at: SLAC
Charge: -1/3
Generation: First
Spin: 1/2
About: Nobody knows why, but a down quark is just a little bit heavier than an up quark. If that weren’t the case, the protons inside every atom would decay and the universe would look very different.
Strange Quark
Discovered in: 1947
Mass: 95 MeV
Discovered at: Manchester University
Charge: -1/3
Generation: Second
Spin: 1/2
About: Scientists discovered particles with “strange" properties many years before it became clear that those strange properties were due to the fact that they all contained a new, “strange” kind of quark. Theorist Murray Gell-Mann was awarded the Nobel Prize for introducing the concepts of strangeness and quarks.
Bottom Quark
Discovered in: 1977
Mass: 4.18 GeV
Discovered at: Fermilab
Charge: -1/3
Generation: Third
Spin: 1/2
About: This particle is a heavier cousin of the down and strange quarks. Its discovery confirmed that all elementary building blocks of ordinary matter come in three different versions.
Leptons
Electron
Discovered in: 1897
Mass: 0.511 MeV
Discovered at: Cavendish Laboratory
Charge: -1
Generation: First
Spin: 1/2
About: The electron powers the world. It is the lightest particle with an electric charge and a building block of all atoms. The electron belongs to the family of charged leptons.
Muon
Discovered in: 1937
Mass: 105.66 MeV
Discovered at: Caltech & Harvard
Charge: -1
Generation: Second
Spin: 1/2
About: The muon is a heavier version of the electron. It rains down on us as it is created in collisions of cosmic rays with the Earth’s atmosphere. When it was discovered in 1937, a physicist asked, “Who ordered that?”
Tau
Discovered in: 1976
Mass: 1776.82 MeV
Discovered at: SLAC
Charge: -1
Generation: Third
Spin: 1/2
About: The discovery of this particle in 1976 completely surprised scientists. It was the first discovery of a particle of the so-called third generation. It is the third and heaviest of the charged leptons, heavier than both the electron and the muon.
Electron Neutrino
Discovered in: 1956
Mass: <2 eV
Discovered at: Savannah River Plant
Charge: 0
Generation: First
Spin: 1/2
About: Measurements and calculations in the 1920s led to the prediction of the existence of an elusive particle without electric charge, the neutrino. But it wasn’t until 1956 that scientists observed the signal of an electron neutrino interacting with other particles. Nuclear reactions in the sun and in nuclear power plants produce electron antineutrinos.
Muon Neutrino
Discovered in: 1962
Mass: <0.19 MeV
Discovered at: Brookhaven
Charge: 0
Generation: Second
Spin: 1/2
About: Neutrinos come in three flavors. The muon neutrino was first discovered in 1962. Neutrino beams from accelerators are typically made up of muon neutrinos and muon antineutrinos.
Tau Neutrino
Discovered in: 2000
Mass: <18.2 MeV
Discovered at: Fermilab
Charge: 0
Generation: Third
Spin: 1/2
About: Based on theoretical models and indirect observations, scientists expected to find a third generation of neutrino. But it took until 2000 for scientists to develop the technologies to identify the particle tracks created by tau neutrino interactions.
Bosons
Photon
Discovered in: 1923
Mass: <1×10^-18 eV
Discovered at: Washington University
Charge: 0
Spin: 1
About: The photon is the only elementary particle visible to the human eye—but only if it has the right energy and frequency (color). It transmits the electromagnetic force between charged particles. Physicists and their quantum theories treat the photon as a massless particle; so far even the most sophisticated experiments haven’t found any evidence to the contrary.
Gluon
Discovered in: 1979
Mass: 0
Discovered at: DESY
Charge: 0
Spin: 1
About: The gluon is the glue that holds quarks together to form protons, neutrons and other particles. It mediates the strong nuclear force.
Z Boson
Discovered in: 1983
Mass: 91.1876 GeV
Discovered at: CERN
Charge: 0
Spin: 1
About: The Z boson is the electrically neutral cousin of the W boson and a heavy relative of the photon. Together, these particles explain the electroweak force.
W Boson
Discovered in: 1983
Mass: 80.385 GeV
Discovered at: CERN
Charge: ±1
Spin: 1
About: The W boson is the only force carrier that has an electric charge. It’s essential for weak nuclear reactions: Without it, the sun would not shine.
Higgs Boson
Discovered in: 2012
Mass: 125.7 GeV
Discovered at: CERN
Charge: 0
Spin: 0
About: Discovered in 2012, the Higgs boson was the last missing piece of the Standard Model puzzle. It is a different kind of force carrier from the other elementary forces, and it gives mass to quarks as well as the W and Z bosons. Whether it also gives mass to neutrinos remains to be discovered.
Launch the interactive model »
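As a toy illustration (not the interactive's actual code), the chart can be held as a small data structure and queried; the particle subset and field names here are my own choices:

```python
# A hypothetical toy representation of the chart (field names and the
# particle subset are illustrative, not the interactive's actual data).
PARTICLES = [
    {"name": "up", "kind": "quark", "generation": 1, "charge": +2/3},
    {"name": "down", "kind": "quark", "generation": 1, "charge": -1/3},
    {"name": "electron", "kind": "lepton", "generation": 1, "charge": -1.0},
    {"name": "electron neutrino", "kind": "lepton", "generation": 1, "charge": 0.0},
    {"name": "photon", "kind": "boson", "generation": None, "charge": 0.0},
]

def by_generation(n):
    """Names of the particles assigned to generation n."""
    return [p["name"] for p in PARTICLES if p["generation"] == n]

print(by_generation(1))
```

Extending the list to all seventeen Standard Model particles would let you filter by kind, charge, or discovery year in the same way.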
ZapperZ  Physics and Physicists
I wrote an entry on his work when he won the Nobel Prize a few years ago. His legacy will live on long after him.
Zz.
July 20, 2015
Sean Carroll  Preposterous Universe
I love reading io9, it’s such a fun mixture of science fiction, entertainment, and pure science. So I was happy to respond when their writer George Dvorsky emailed to ask an innocentsounding question: “Why is the scale of the universe so freakishly large?”
You can find the fruits of George’s labors at this io9 post. But my own answer went on at sufficient length that I might as well put it up here as well. Of course, as with any “Why?” question, we need to keep in mind that the answer might simply be “Because that’s the way it is.”
Whenever we seem surprised or confused about some aspect of the universe, it’s because we have some preexisting expectation for what it “should” be like, or what a “natural” universe might be. But the universe doesn’t have a purpose, and there’s nothing more natural than Nature itself — so what we’re really trying to do is figure out what our expectations should be.
The universe is big on human scales, but that doesn’t mean very much. It’s not surprising that humans are small compared to the universe, but big compared to atoms. That feature does have an obvious anthropic explanation — complex structures can only form on in-between scales, not at the very largest or very smallest sizes. Given that living organisms are going to be complex, it’s no surprise that we find ourselves at an in-between size compared to the universe and compared to elementary particles.
What is arguably more interesting is that the universe is so big compared to particle-physics scales. The Planck length, from quantum gravity, is 10^-33 centimeters, and the size of an atom is roughly 10^-8 centimeters. The difference between these two numbers is already puzzling — that’s related to the “hierarchy problem” of particle physics. (The size of atoms is fixed by the length scale set by electroweak interactions, while the Planck length is set by Newton’s constant; the two distances are extremely different, and we’re not sure why.) But the scale of the universe is roughly 10^29 centimeters across, which is enormous by any scale of microphysics. It’s perfectly reasonable to ask why.
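A quick back-of-the-envelope check of these orders of magnitude:

```python
import math

# Back-of-the-envelope ratios using the order-of-magnitude lengths in the text.
universe_cm = 1e29   # rough size of the observable universe
planck_cm = 1e-33    # Planck length
atom_cm = 1e-8       # typical atomic size

print(math.log10(universe_cm / planck_cm))  # ~62 orders of magnitude
print(math.log10(universe_cm / atom_cm))    # ~37 orders of magnitude
```

So the universe dwarfs the Planck length by roughly sixty-two powers of ten, and even the atom by nearly forty.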
Part of the answer is that “typical” configurations of stuff, given the laws of physics as we know them, tend to be very close to empty space. (“Typical” means “high entropy” in this context.) That’s a feature of general relativity, which says that space is dynamical, and can expand and contract. So you give me any particular configuration of matter in space, and I can find a lot more configurations where the same collection of matter is spread out over a much larger volume of space. So if we were to “pick a random collection of stuff” obeying the laws of physics, it would be mostly empty space. Which our universe is, kind of.
Two big problems with that. First, even empty space has a natural length scale, which is set by the cosmological constant (energy of the vacuum). In 1998 we discovered that the cosmological constant is not quite zero, although it’s very small. The length scale that it sets (roughly, the distance over which the curvature of space due to the cosmological constant becomes appreciable) is indeed the size of the universe today — about 10^26 centimeters. (Note that the cosmological constant itself is inversely proportional to this length scale — so the question “Why is the cosmological-constant length scale so large?” is the same as “Why is the cosmological constant so small?”)
This raises two big questions. The first is the “coincidence problem”: the universe is expanding, but the length scale associated with the cosmological constant is a constant, so why are they approximately equal today? The second is simply the “cosmological constant problem”: why is the cosmological-constant length scale so enormously larger than the Planck scale, or even than the atomic scale? It’s safe to say that right now there are no widely accepted answers to either of these questions.
So roughly: the answer to “Why is the universe so big?” is “Because the cosmological constant is so small.” And the answer to “Why is the cosmological constant so small?” is “Nobody knows.”
But there’s yet another wrinkle. Typical configurations of stuff tend to look like empty space. But our universe, while relatively empty, isn’t *that* empty. It has over a hundred billion galaxies, with a hundred billion stars each, and over 10^50 atoms per star. Worse, there are maybe 10^88 particles (mostly photons and neutrinos) within the observable universe. That’s a lot of particles! A much more natural state of the universe would be enormously emptier than that. Indeed, as space expands the density of particles dilutes away — we’re headed toward a much more natural state, which will be much emptier than the universe we see today.
So, given what we know about physics, the real question is “Why are there so many particles in the observable universe?” That’s one angle on the question “Why is the entropy of the observable universe so small?” And of course the density of particles was much higher, and the entropy much lower, at early times. These questions are also ones to which we have no good answers at the moment.
John Baez  Azimuth
Here’s a puzzle from a recent issue of Quanta, an online science magazine:
Puzzle 1: I write down two different numbers that are completely unknown to you, and hold one in my left hand and one in my right. You have absolutely no idea how I generated these two numbers. Which is larger?
You can point to one of my hands, and I will show you the number in it. Then you can decide to either select the number you have seen or switch to the number you have not seen, held in the other hand. Is there a strategy that will give you a greater than 50% chance of choosing the larger number, no matter which two numbers I write down?
At first it seems the answer is no. Whatever number you see, the other number could be larger or smaller. There’s no way to tell. So obviously you can’t get a better than 50% chance of picking the hand with the larger number—even if you’ve seen one of those numbers!
But “obviously” is not a proof. Sometimes “obvious” things are wrong!
It turns out that, amazingly, the answer to the puzzle is yes! You can find a strategy to do better than 50%. But the strategy uses randomness. So, this puzzle is a great illustration of the power of randomness.
If you want to solve it yourself, stop now or read Quanta magazine for some clues—they offered a small prize for the best answer:
• Pradeep Mutalik, Can information rise from randomness?, Quanta, 7 July 2015.
Greg Egan gave a nice solution in the comments to this magazine article, and I’ll reprint it below along with two follow-up puzzles. So don’t look down there unless you want a spoiler.
I should add: the most common mistake among educated readers seems to be assuming that the first player, the one who chooses the two numbers, chooses them according to some probability distribution. Don’t assume that. They are simply arbitrary numbers.
The history of this puzzle
I’d seen this puzzle before—do you know who invented it? On G+, Hans Havermann wrote:
I believe the origin of this puzzle goes back to (at least) John Fox and Gerald Marnie’s 1958 betting game ‘Googol’. Martin Gardner mentioned it in his February 1960 column in Scientific American. Wikipedia mentions it under the heading ‘Secretary problem’. Gardner suggested that a variant of the game was proposed by Arthur Cayley in 1875.
Actually the game of Googol is a generalization of the puzzle that we’ve been discussing. Martin Gardner explained it thus:
Ask someone to take as many slips of paper as he pleases, and on each slip write a different positive number. The numbers may range from small fractions of 1 to a number the size of a googol (1 followed by a hundred 0s) or even larger. These slips are turned face down and shuffled over the top of a table. One at a time you turn the slips face up. The aim is to stop turning when you come to the number that you guess to be the largest of the series. You cannot go back and pick a previously turned slip. If you turn over all the slips, then of course you must pick the last one turned.
So, the puzzle I just showed you is the special case when there are just 2 slips of paper. I seem to recall that Gardner incorrectly dismissed this case as trivial!
There’s been a lot of work on Googol. Julien Berestycki writes:
I heard about this puzzle a few years ago from Sasha Gnedin. He has a very nice paper about this
• Alexander V. Gnedin, A solution to the game of Googol, Annals of Probability (1994), 1588–1595.
One of the many beautiful ideas in this paper is that it asks what is the best strategy for the guy who writes the numbers! It also cites a paper by Gnedin and Berezowskyi (of oligarchic fame).
Egan’s solution
Okay, here is Greg Egan’s solution, paraphrased a bit:
Pick some function f from the real numbers to the interval (0, 1) such that:
• f(x) approaches 0 as x → −∞
• f(x) approaches 1 as x → +∞
• f is monotonically increasing: if x > y then f(x) > f(y)
There are lots of functions like this, for example f(x) = e^x / (e^x + 1).
Next, pick one of the first player’s hands at random. If the number you are shown is a, compute f(a). Then generate a uniformly distributed random number z between 0 and 1. If z is less than or equal to f(a), guess that a is the larger number, but if z is greater than f(a), guess that the larger number is in the other hand.
The probability of guessing correctly can be calculated as the probability of seeing the larger number initially and then, correctly, sticking with it, plus the probability of seeing the smaller number initially and then, correctly, choosing the other hand.
If the two numbers are x and y, with x > y, this is
(1/2) f(x) + (1/2)(1 − f(y)) = 1/2 + (1/2)(f(x) − f(y))
This is strictly greater than 1/2, since x > y, so f(x) > f(y).
So, you have a more than 50% chance of winning! But as you play the game, there’s no way to tell how much more than 50%. If the numbers in the first player’s hands are both very large, or both very small, your chance will be just slightly more than 50%.
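Egan’s strategy is easy to test empirically. Here is a minimal Python sketch (the function names `f` and `play` are mine, not from the post), using the logistic function as the monotone function:

```python
import math
import random

def f(x):
    """A monotonically increasing function from the reals onto (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))  # the logistic function e^x / (e^x + 1)

def play(x, y, rng):
    """One round: the first player holds x and y; return True if Egan's
    randomized strategy ends up choosing the larger number."""
    shown, hidden = (x, y) if rng.random() < 0.5 else (y, x)
    stick = rng.random() <= f(shown)   # stick with probability f(shown)
    chosen = shown if stick else hidden
    return chosen == max(x, y)

rng = random.Random(42)
trials = 200_000
wins = sum(play(1.0, 1.1, rng) for _ in range(trials))
# Theoretical win rate here is 1/2 + (f(1.1) - f(1.0))/2, about 0.51.
print(f"win rate: {wins / trials:.4f}")
```

Even with the two numbers as close as 1.0 and 1.1, the measured win rate sits reliably above one half, exactly as the formula predicts.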
Follow-up puzzles
Here are two more puzzles:
Puzzle 2: Prove that no deterministic strategy can guarantee you have a more than 50% chance of choosing the larger number.
Puzzle 3: There are perfectly specific but ‘algorithmically random’ sequences of bits, which can’t be predicted well by any program. If we use these to generate a uniform algorithmically random number between 0 and 1, and use the strategy Egan describes, will our chance of choosing the larger number be more than 50%, or not?
But watch out—here come Egan’s solutions to those!
Solutions
Egan writes:
Here are my answers to your two puzzles on G+.
Puzzle 2: Prove that no deterministic strategy can guarantee you have a more than 50% chance of choosing the larger number.
Answer: If we adopt a deterministic strategy, that means there is a function g from the real numbers to {0, 1} that tells us whether or not we stick with the number x when we see it. If g(x) = 1 we stick with it, if g(x) = 0 we swap it for the other number.
If the two numbers are x and y, with x > y, then the probability of success will be:
(1/2) g(x) + (1/2)(1 − g(y)) = 1/2 + (1/2)(g(x) − g(y))
This is exactly the same as the formula we obtained before, when we stuck with the number we were shown with probability f of that number, but we have specialised to functions valued in {0, 1}.
We can only guarantee a more than 50% chance of choosing the larger number if g is monotonically increasing everywhere, i.e. g(x) > g(y) whenever x > y. But this is impossible for a function valued in {0, 1}. To prove this, define x0 to be any number in (0, 1) such that g(x0) = 1; such an x0 must exist, otherwise g would be constant on (0, 1) and hence not monotonically increasing. Similarly define x1 to be any number in (1, 2) such that g(x1) = 0. We then have x1 > x0 but g(x1) < g(x0).
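The impossibility result can be checked concretely. A minimal Python sketch (names mine): for any deterministic stick/swap rule, the success probability against numbers x > y is (1/2)g(x) + (1/2)(1 − g(y)), and the first player can always choose numbers that kill any advantage:

```python
def success_probability(g, x, y):
    """P(choose the larger) for stick/swap rule g and numbers x > y,
    each hand shown with probability 1/2: (1/2)g(x) + (1/2)(1 - g(y))."""
    assert x > y
    return 0.5 * g(x) + 0.5 * (1 - g(y))

# A deterministic rule: stick iff the shown number is at least 100.
threshold = lambda n: 1 if n >= 100 else 0

# The first player defeats it by keeping both numbers on the same side:
print(success_probability(threshold, 7, 3))      # 0.5
print(success_probability(threshold, 300, 200))  # 0.5

# A non-monotone rule does even worse against adversarial numbers:
bad = lambda n: 0 if n >= 100 else 1   # g(300) = 0 < g(3) = 1
print(success_probability(bad, 300, 3))          # 0.0
```

The threshold rule is the best a {0, 1}-valued rule can do, and it still only ties; only a rule taking values strictly between 0 and 1 can beat 50% against every pair.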
Puzzle 3: There are perfectly specific but ‘algorithmically random’ sequences of bits, which can’t be predicted well by any program. If we use these to generate a uniform algorithmically random number between 0 and 1, and use the strategy Egan describes, will our chance of choosing the larger number be more than 50%, or not?
Answer: As Philip Gibbs noted, a deterministic pseudorandom number generator is still deterministic. Using a specific sequence of algorithmically random bits b1, b2, b3, … to construct a number z between 0 and 1 means z takes on the specific value:
z0 = b1/2 + b2/4 + b3/8 + …
So rather than sticking with the number x with probability f(x) for our monotonically increasing function f, we end up always sticking with x if z0 ≤ f(x) and always swapping if z0 > f(x). This is just using a function g as in Puzzle 2, with:
g(x) = 1 if f(x) ≥ z0
g(x) = 0 if f(x) < z0
So all the same consequences as in Puzzle 2 apply, and we cannot guarantee a more than 50% chance of choosing the larger number.
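The collapse to a deterministic rule can be made explicit. A hedged Python sketch (the names and the particular value of z0 are mine): once z0 is a fixed number, the “randomized” strategy is just a threshold rule, and a first player who knows z0 can neutralize it:

```python
import math

def f(x):
    """Monotonically increasing function from the reals onto (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

z0 = 0.625  # a specific number, however "randomly" it was generated

# With z0 fixed, the randomized strategy collapses to a deterministic rule:
g = lambda x: 1 if f(x) >= z0 else 0

def success_probability(x, y):
    """P(choose the larger of x > y) under the collapsed rule g."""
    assert x > y
    return 0.5 * g(x) + 0.5 * (1 - g(y))

# Knowing z0, the first player puts both numbers on the same side of the
# threshold (here f(x) = z0 at x = ln(z0/(1 - z0)), about 0.51):
print(success_probability(3.0, 2.0))    # both above the threshold: 0.5
print(success_probability(-1.0, -2.0))  # both below the threshold: 0.5
```

Against the genuinely random strategy the opponent cannot do this, because there is no single threshold to aim both numbers at.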
Puzzle 3 emphasizes the huge gulf between ‘true randomness’, where we only have a probability distribution over the number z, and the situation where we have a specific number z0 generated by any means whatsoever.
We could generate z0 using a pseudorandom number generator, radioactive decay of atoms, an oracle whose randomness is certified by all the Greek gods, or whatever. No matter how randomly z0 is generated, once we have it, we know there exist choices for the first player that will guarantee our defeat!
This may seem weird at first, but if you think about simple games of luck you’ll see it’s completely ordinary. We can have a more than 50% chance of winning such a game even if for any particular play we make the other player has a move that ensures our defeat. That’s just how randomness works.
July 19, 2015
Ben Still  Neutrino Blog
What Are Quarks?
Quarks are building blocks that cannot be broken into smaller things. 
Debris that results from smashing protons into each other was seen in experiments to be a whole lot messier than debris from two electrons colliding head-on. Gell-Mann and others reasoned that this would happen if the proton were not a single entity like the electron but instead, like a bag of groceries, contained multiple particles within itself.
Protons and neutrons are each made from three quark building blocks. 
Next Post: Rule of Three  Why are there not a different number of quarks in protons and other similar particles?
The nCategory Cafe
Just a quick note: you can see lots of talk slides here:
Category Theory 2015, Aveiro, Portugal, June 14-19, 2015.
The Giry monad, tangent categories, Hopf monoids in duoidal categories, model categories, topoi… and much more!
The nCategory Cafe
In my last post I promised to follow up by explaining something about the relationship between homotopy type theory (HoTT) and computer formalization. (I’m getting tired of writing “publicity”, so this will probably be my last post for a while in this vein — for which I expect that some readers will be as grateful as I).
As a potential foundation for mathematics, HoTT/UF is a formal system existing at the same level as set theory (ZFC) and first-order logic: it’s a collection of rules for manipulating syntax, into which we can encode most or all of mathematics. No such formal system requires computer formalization, and conversely any such system can be used for computer formalization. For example, the HoTT Book was intentionally written to make the point that HoTT can be done without a computer, while the Mizar project has formalized huge amounts of mathematics in a ZFC-like system.
Why, then, does HoTT/UF seem so closely connected to computer formalization? Why do the overwhelming majority of publications in HoTT/UF come with computer formalizations, when such is still the exception rather than the rule in mathematics as a whole? And why are so many of the people working on HoTT/UF computer scientists or advocates of computer formalization?
To start with, note that the premise of the third question partially answers the first two. If we take it as a given that many homotopy type theorists care about computer formalization, then it’s only natural that they would be formalizing most of their papers, creating a close connection between the two subjects in people’s minds.
Of course, that forces us to ask why so many homotopy type theorists are into computer formalization. I don’t have a complete answer to that question, but here are a few partial ones.
HoTT/UF is built on type theory, and type theory is closely connected to computers, because it is the foundation of typed functional programming languages like Haskell, ML, and Scala (and, to a lesser extent, less-functional typed programming languages like Java, C++, and so on). Thus, computer proof assistants built on type theory are well-suited to formal proofs of the correctness of software, and thus have received a lot of work from the computer science end. Naturally, therefore, when a new kind of type theory like HoTT comes along, the existing type theorists will be interested in it, and will bring along their predilection for formalization.
HoTT/UF is by default constructive, meaning that we don’t need to assert the law of excluded middle or the axiom of choice unless we want to. Of course, most or all formal systems have a constructive version, but with type theories the constructive version is the “most natural one” due to the Curry-Howard correspondence. Moreover, one of the intriguing things about HoTT/UF is that it allows us to prove certain things constructively that in other systems require LEM or AC. Thus, it naturally attracts attention from constructive mathematicians, many of whom are interested in computable mathematics (i.e. when something exists, can we give an algorithm to find it?), which is only a short step away from computer formalization of proofs.
One could, however, try to make similar arguments from the other side. For instance, HoTT/UF is (at least conjecturally) an internal language for higher topos theory and homotopy theory. Thus, one might expect it to attract an equal influx of higher topos theorists and homotopy theorists, who don’t care about computer formalization. Why hasn’t this happened? My best guess is that at present the traditional 1-topos theorists seem to be largely disjoint from the higher topos theorists. The former care about internal languages, but not so much about higher categories, while for the latter it is reversed; thus, there aren’t many of us in the intersection who care about both and appreciate this aspect of HoTT. But I hope that over time this will change.
Another possible reason why the influx from type theory has been greater is that HoTT/UF is less strangelooking to type theorists (it’s just another type theory) than to the average mathematician. In the HoTT Book we tried to make it as accessible as possible, but there are still a lot of tricky things about type theory that one seemingly has to get used to before being able to appreciate the homotopical version.
Another sociological effect is that Vladimir Voevodsky, who introduced the univalence axiom and is a Fields medalist with “charisma”, is also a very vocal and visible advocate of computer formalization. Indeed, his personal programme that he calls “Univalent Foundations” is to formalize all of mathematics using a HoTTlike type theory.
Finally, many of us believe that HoTT is actually the best formal system extant for computer formalization of mathematics. It shares most of the advantages of type theory, such as the above-mentioned close connection to programming, the avoidance of complicated ZF-encodings for even basic concepts like natural numbers, and the production of small easily-verifiable “certificates” of proof correctness. (The advantages of some type theories that HoTT doesn’t yet share, like a computational interpretation, are work in progress.) But it also rectifies certain infelicitous features of previously existing type theories, by specifying what equality of types means (univalence), including extensionality for functions and truth values, providing well-behaved quotient types (HITs), and so on, making it more comfortable for ordinary mathematicians. (I believe that historically, this was what led Voevodsky to type theory and univalence in the first place.)
There are probably additional reasons why HoTT/UF attracts more people interested in computer formalization. (If you can think of others, please share them in the comments.) However, there is more to it than this, as one can guess from the fact that even people like me, coming from a background of homotopy theory and higher category theory, tend to formalize a lot of our work on HoTT. Of course there is a bit of a “peer pressure” effect: if all the other homotopy type theorists formalize their papers, then it starts to seem expected in the subject. But that’s far from the only reason; here are some “real” ones.
Computer formalization of synthetic homotopy theory (the “uniquely HoTT” part of HoTT/UF) is “easier”, in certain respects, than most computer formalization of mathematics. In particular, it requires less infrastructure and library support, because it is “closer to the metal” of the underlying formal system than is usual for actually “interesting” mathematics. Thus, formalizing it still feels more like “doing mathematics” than like programming, making it more attractive to a mathematician. You really can open up a proof assistant, load up no prewritten libraries at all, and in fairly short order be doing interesting HoTT. (Of course, this doesn’t mean that there is no value in having libraries and in thinking hard about how best to design those libraries, just that the barrier to entry is lower.)
Precisely because, as mentioned above, type theory is hard to grok for a mathematician, there is a significant benefit to using a proof assistant that will automatically tell you when you make a mistake. In fact, messing around with a proof assistant is one of the best ways to learn type theory! I posted about this almost exactly four years ago.
I think the previous point goes double for homotopy type theory, because it is an unfamiliar new world for almost everyone. The types of HoTT/UF behave kind of like spaces in homotopy theory, but they have their own idiosyncrasies that it takes time to develop an intuition for. Playing around with a proof assistant is a great way to develop that intuition. It’s how I did it.
Moreover, because that intuition is unique and recently developed for all of us, we may be less confident in the correctness of our informal arguments than we would be in classical mathematics. Thus, even an established “homotopy type theorist” may be more likely to want the comfort of a formalization.
Finally, there is an additional benefit to doing mathematics with a proof assistant (as opposed to formalizing mathematics that you’ve already done on paper), which I think is particularly pronounced for type theory and homotopy type theory. Namely, the computer always tells you what you need to do next: you don’t need to work it out for yourself. A central part of type theory is inductive types, and a central part of HoTT is higher inductive types; both of which are characterized by an induction principle (or “eliminator”) which says that in order to prove a statement of the form “for all x : W, P(x)”, it suffices to prove some number of other statements involving the predicate P. The most familiar example is induction on the natural numbers, which says that in order to prove “for all n ∈ ℕ, P(n)” it suffices to prove P(0) and “for all n ∈ ℕ, if P(n) then P(n+1)”.
When using proof by induction, you need to isolate P as a predicate on n, specialize to n = 0 to check the base case, write down P(n) as the inductive hypothesis, then replace n by n+1 to find what you have to prove in the induction step. The students in an intro-to-proofs class have trouble with all of these steps, but professional mathematicians have learned to do them automatically. However, for a general inductive or higher inductive type, there might instead be four, six, ten, or more separate statements to prove when applying the induction principle, many of which involve more complicated transformations of P, and it’s common to have to apply several such inductions in a nested way. Thus, when doing HoTT on paper, a substantial amount of time is sometimes spent simply figuring out what has to be proven. But a proof assistant equipped with a unification algorithm can do that for you automatically: you simply say “apply induction for the type W” and it immediately decides what P is and presents you with a list of the remaining goals that have to be proven.
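As a tiny illustration of this workflow, here is how the familiar natural-number case looks in a modern proof assistant (a sketch in Lean 4, which is not a HoTT-specific system; the theorem name is mine). Invoking the induction tactic is exactly the “apply induction” step above: the assistant computes the predicate P and hands back the base and successor goals for you to discharge:

```lean
-- Prove "for all n ∈ ℕ, 0 + n = n" via the induction principle for Nat.
-- After `induction n`, the assistant presents the two remaining goals:
--   P(0):            0 + 0 = 0
--   P(k) → P(k+1):   given ih : 0 + k = k, show 0 + (k + 1) = k + 1
theorem zero_add' (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ k ih => rw [Nat.add_succ, ih]
```

For a higher inductive type the shape is the same, only with more (and stranger) goals in the list.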
To summarize this second list, then, I think it’s fair to say that compared to formalizing traditional mathematics, formalizing HoTT tends to give more benefit at lower cost. However, that cost is still high, especially when you take into account the time spent learning to use a proof assistant, which is often not the most userfriendly of software. This is why I always emphasize that HoTT can perfectly well be done without a computer, and why we wrote the book the way we did.
by shulman (viritrilbia@gmail.com) at July 19, 2015 08:19 AM