# Particle Physics Planet

## March 25, 2017

### Lubos Motl - string vacua and pheno

An isolated Standard Model contradicts nothing we know
Today, the Moriond 2017 particle physics conference ends. CMS in particular has presented its newest results – analyses of some 35 inverse femtobarns of data collected at a total proton–proton energy of $$13\TeV$$.

Almost a decade ago, I made an asymmetric bet against Adam Falkowski, a particle phenomenologist now in Paris. He claimed that supersymmetry wouldn't be found before a deadline and I claimed it could be. If it were found, I would have won $10,000. If it weren't found, I would pay $100. So it was a 100-to-1 bet, basically implying a consensus probability of an early enough supersymmetry discovery of 1%. I accepted the bet because my subjective probability of a SUSY discovery was much higher than 1% and I still think it was reasonable – and an analogous assumption is still reasonable for the next collider.

The deadline was defined a bit arbitrarily – but it was "after the results of at least 30/fb of the data at design energy are collected". The design energy was $$14\TeV$$ and $$8\TeV$$ is clearly lower – the collisions at this lower energy may produce SUSY particles about 10 times less frequently than those at $$14\TeV$$ – but $$14\TeV$$ is close enough to $$13\TeV$$, so it's obvious that the 35/fb at $$13\TeV$$ that we have are basically equivalent to 30/fb at $$14\TeV$$. So right now is the ideal, balanced moment that almost exactly matches the conditions of our bet, I think, and because supersymmetry hasn't been discovered yet, I should pay $100 to Adam. As I have already mentioned, this lost bet is a technicality for me and doesn't change my belief that supersymmetry somewhere in Nature, beneath the Planck scale, is very likely and that SUSY around the corner is always a possibility. I am sure that many of you agree that the opposite result would have been way more interesting – from the financial viewpoint, from the viewpoint of our TRF community, and because of the excitement it would have created among physicists.

So I want to send him his $100 – although, obviously, there's still a possibility that some game-changing paper based on the same dataset will be published in the future. This trivial transfer would have taken place already if Falkowski had a PayPal account or could accept goods from Amazon etc. But he must live in some uncivilized place of the globe that is decoupled from all the technological and financial progress, among the sheep who look for, eat, and use lipsticks that tourists randomly threw away – it is Paris, as I have mentioned – and he prefers the ancient transnational bank transfers over things like PayPal or Amazon that he doesn't want to get involved with.

I could send a bank transfer to France – one of my bank accounts offers that command in its online banking – but I have never tested it. I don't want to send this money in ways that are so untested, or test them with smaller amounts etc. If someone finds it safe and easy to send payments to French bank accounts and can accept a PayPal payment from me, or a $95–$105 plus shipping package from Amazon.com in the U.S. (with products of his or her choice) as compensation, please let me know.

This null result of the post-Higgs-discovery LHC experiments hasn't surprised us. It's not anything we could have been unprepared for. People were extremely prepared for it – although they hadn't wanted it: the outcome has often been referred to as the Nightmare Scenario. The work of the 6,000 people at the LHC since Summer 2012 has almost the same value as the statement "the SM is still OK, move on". A secretary would type this sentence for a salary much cheaper than $10 billion.

The Standard Model was a theory that worked at energies up to $$200\GeV$$ or so – and needed particles of similar masses (the top quark, the massive gauge bosons, the Higgs boson) – and it could have broken down right above $$200\GeV$$ and been extended, supplemented, or replaced with a broader theory. But that didn't happen. The same Standard Model works up to $$1\TeV$$ or a bit higher. Just to be sure, different types of proposed new particles are excluded up to different energies in different scenarios and lots of particles lighter than $$1\TeV$$ may still exist, of course. It's not a sharp contradiction with anything we know about physics that the Standard Model has remained isolated, as James Wells et al. and Jon Butterworth have called it.

Why have many particle phenomenologists preferred to believe that something else should have been discovered along with the Higgs boson or soon after it? The argument basically boils down to one parameter, the Higgs mass $$m_h$$ or the electroweak scale $$v$$ (the vacuum expectation value of the Higgs field). They are much (fifteen orders of magnitude or so) smaller than the fundamental scales of Nature – which are arguably close to the Planck mass $$10^{18}\GeV$$. And the "generic" value of this ratio would be "of order one" in a natural theory.
The scalar bosons' masses should be driven towards the heaviest mass scale in physics, the Planck scale, by any quantum corrections unless there is something that changes the rules of the game and keeps the Higgs boson (along with all the massive elementary particles we know) much lighter. But let's look a bit more closely. The Higgs vev $$v$$ as well as the Higgs mass etc. are basically calculated from the Higgs potential

$$V(h) = \frac{1}{4}\lambda h^4 - \frac{1}{2}\mu^2 h^2.$$

It's the function that looks like the champagne bottle's bottom (in the first world) or Landau's buttocks (in the second world) or the Mexican hat potential (in the third world). And yes, the value of $$\lambda$$ is of order one, in some normalization close to one quarter. (Exercise: find what it is in my conventions.) Set the derivative $$V'(h)$$ to zero and you will find that the minimum of the potential, i.e. the vev, is at $$v^2 = h^2 = \mu^2/\lambda$$. Calculate the second derivative to see that the mass is of order $$\mu$$ again, $$m_h\sim \mu$$. OK, the value of $$\lambda$$ is of order one, as I mentioned, and completely "natural" while it's only the value of the fundamental parameter $$\mu$$ that looks unnaturally small. We may write it as

$$\mu^2 \sim 10^{-30}\, m_{\rm Planck}^2.$$

The coefficient as it actually appears in the Lagrangian, namely the squared (tachyonic) mass term, is 30 orders of magnitude smaller than the fundamental unit with the same units. That's an apparent fine-tuning. However, this small value could result from some instanton-like or otherwise naturally small effect. Also, there could still be new physics or superpartner masses at $$\Lambda=3\TeV$$ or so, in which case it would make sense to write

$$\mu^2 \sim 10^{-3}\, \Lambda^2.$$

OK, so the whole mystery of the "isolated Standard Model" could be just about one number in the Lagrangian that is about $$0.001$$ times its generically predicted value.
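The little derivation above can be checked numerically. The following Python sketch is mine, not from the post; the values $$\lambda \approx 0.129$$ and $$\mu \approx 88.4\GeV$$ are illustrative assumptions chosen so that the closed-form results reproduce the familiar $$v\approx 246\GeV$$ and $$m_h\approx 125\GeV$$:

```python
import math

# Minimize the Mexican-hat potential V(h) = (lam/4) h^4 - (mu^2/2) h^2
# numerically and compare with the closed-form results v^2 = mu^2/lam
# and m_h^2 = V''(v) = 2 mu^2.
lam = 0.129   # quartic coupling (assumed illustrative value)
mu = 88.4     # GeV, tachyonic mass parameter (assumed illustrative value)

def V(h):
    return 0.25 * lam * h**4 - 0.5 * mu**2 * h**2

# closed-form minimum (vev) and Higgs mass
v = mu / math.sqrt(lam)    # V'(v) = 0  =>  v^2 = mu^2 / lam
m_h = math.sqrt(2) * mu    # m_h^2 = V''(v) = 3 lam v^2 - mu^2 = 2 mu^2

# crude numerical check of the minimum by scanning a grid of h values
h_min = min((0.01 * k for k in range(1, 100000)), key=V)
print(f"v (analytic) = {v:.1f} GeV, v (numeric) = {h_min:.1f} GeV, m_h = {m_h:.1f} GeV")
```

The point of the sketch is only that all the low-energy mass scales track the single parameter $$\mu$$ once $$\lambda$$ is of order one.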
The probability that a random number uniformly distributed between $$0$$ and $$1$$ is smaller than $$0.001$$ is $$0.001$$, too. It's unlikely but not insanely unlikely. This small probability corresponds to noise slightly above 3 sigma. So you can say that all the "surprising" isolatedness of the Standard Model is basically just a 3-sigma or so deficit in the squared Higgs mass. It's not a big deal and because the explanations why the value is small may be clever, instanton-based, or otherwise naturally refuting the assumption that the distribution for $$\mu^2$$ is uniform, the deficit could very well be much less than 3 sigma, too. And even if it were 3 sigma or higher, you could also refer to anthropic thinking that – even when used as a co-argument – may make the smallness look much more natural than it would look otherwise. Let me assure you that once $$\mu$$ is comparable to $$100\GeV$$, everything else will be, too. I've told you why the Higgs vev and Higgs mass are comparable to $$\mu$$ for $$\lambda\sim O(1)$$. But the W-boson and Z-boson masses may be seen to be $$v$$ times some gauge couplings which are also of order one, so these masses are also of order $$\mu$$, and the fermion masses are at most $$\mu$$ or so (like the top quark) and generally equal to this constant multiplied by the Yukawa coupling constants (which are smaller than one). Again, if you filed a complaint with Mother Nature about the "miraculously, unacceptably tiny" value of the elementary particle masses etc. relative to the Planck mass, She would ignore your complaint. You're just a bound state of strings that has no right to complain. And I think that She would even be morally justified not to give a damn about your complaints because it may be just a 3-sigma deficit in the value of a single coefficient in the Lagrangian, $$\mu^2$$.
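The "slightly above 3 sigma" figure can be made concrete by converting the tail probability into Gaussian sigmas (a minimal Python sketch of my own, not from the post):

```python
from statistics import NormalDist

# Convert a small one-sided tail probability into the equivalent number of
# Gaussian standard deviations ("sigmas").
def p_to_sigma(p):
    return NormalDist().inv_cdf(1 - p)

sigma_3 = p_to_sigma(1e-3)   # the 0.001 tuning discussed here
sigma_4 = p_to_sigma(1e-4)   # an extra order of magnitude of tuning
print(f"p = 1e-3 -> {sigma_3:.2f} sigma, p = 1e-4 -> {sigma_4:.2f} sigma")
```

A one-sided tail of $$0.001$$ sits at about $$3.09$$ sigma, and even $$10^{-4}$$ is only about $$3.7$$ sigma – the arithmetic behind treating another order of magnitude of tuning as "still no big deal".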
New physics at the $$100\TeV$$ collider is possible and well-motivated but I would only enter a bet on it similar to the SUSY bet against Adam Falkowski. It's in no way guaranteed that there must be new physics. With such a bigger collider, the deficit as "calculated" above could increase to 4 sigma but that's still no big deal, and with some hidden patterns violating the assumption about the uniformity of $$\mu^2$$, the deficit may really be much smaller or completely absent. Low-energy supersymmetry is almost the only kind of new physics that would explain the smallness of $$\mu^2$$ without requiring some "new nearby physics" to explain the smallness of its own new parameters. But those who thought it was "almost certain" that the Standard Model has to be accompanied by some additional new physics – different from SUSY – were implicitly assuming (perhaps without even realizing or at least acknowledging it) that the whole logarithmic axis between the electroweak scale and the Planck scale has to be filled with new scales and segments of physics. The Standard Model must be accompanied by another model, e.g. the Nude Model. But the Nude Model won't ever stay alone, either. So there would have to be another model at slightly higher energies, the Horny Model. But that model also has scalars that are small and unexplained so there has to be the nearby Morality Police Model. And then the Constitutional Court Model, and so on, up to the model of quantum gravity near the Planck scale. I find this scenario with lots of scales possible but rather unnatural according to my general interpretation of the word "natural". If the number of scales (layers of the physical onion) between the everyday life scales and the Planck scale were much greater than one, it would be just another parameter that should naturally be of order one but isn't.
So when you divide the desert between the electroweak and Planck scales into 16 pieces, it's like a pizza cut into 16 pieces. Madam, would you like me to cut the pizza into 8 or 16 pieces? The blonde answered: Only eight: sixteen would be too much for me to eat. Madam, I think that fundamentally, the cutting doesn't really change the overall severity of the problem (or the amount of food). So I have never really thought – and I still don't think – that adding many scales (layers of the physical onion) that are too close to each other on the energy axis makes the physical picture more natural. You know, the gap between the electroweak scale and the Planck scale is a demonstrable fact that we already know. So a deep physical theory has to explain this small number in one way or another. Having an onion with lots of thin layers is just one extreme strategy to explain it. It's not the only one and it's not a particularly natural or elegant one, either, I think. If there is a big desert between the electroweak scale and the Planck scale that is explained by some theory that makes $$\mu^2$$ vanish in the leading approximation (conformal symmetry?) but produces some small $$\mu^2$$ by some naturally small (e.g. instanton) effects, it's fine with me. I still think that low-energy SUSY is better than a fundamental, slightly broken conformal symmetry in the spacetime, but I am not a bigot and there's no solid evidence that the smallness "has to be" explained by one type of explanation or another.

Here you have a logarithmic chart of the masses of the known elementary particles. Note that photons, gluons, and gravitons are fundamentally "exactly massless" as far as all experiments and the theories extracted from them go. On the other side, there is the reduced Planck scale near $$10^{18}\GeV$$. But what about the massive particles? The heaviest ones are near $$100\GeV$$: the top quark, the Higgs boson, and the massive electroweak gauge bosons.
And then you have all the charged fermions with smaller and smaller masses down to the electron at $$5\times 10^{-4}\GeV$$. The widest ratio of neighbors' masses is about 15 in this quasi-continuum. Beneath the electron, there is another gap (the teeth in the diagram mean that six floors are omitted!) and below this gap, you find neutrinos with masses comparable to $$10^{-11}\GeV$$. You see that the masses of the electron and the neutrinos differ by 7 orders of magnitude and there's a not so small desert in between. That's why I think that we should say that a desert of this magnitude – and perhaps even a bigger one – isn't a big deal. But the LHC hasn't actually proven any "really big gap" so far. If the "next heavier" particle above the (so far most massive) top quark were just 15 times heavier – and the ratio of 15 wouldn't be unprecedented, as I have mentioned – the next new particle would be around $$3\TeV$$ in mass. There are lots of scenarios like that which haven't been excluded by the LHC yet! This chart of the particle masses is mixing apples and oranges, a particle physicist would say, because the origin of the masses is very different for the different particles. I have sketched that the masses of all the particles heavier than neutrinos do boil down to the parameter $$\mu$$ after all. But the neutrino masses don't. The Majorana neutrino masses can't be obtained from a renormalizable mass term because that wouldn't be gauge-invariant. They can only be extracted from some non-renormalizable interactions – which include a higher power of the Higgs field $$h$$ that accompanies $$v$$ – and their magnitude is set by some physics at a very high energy scale, e.g. by the "seesaw mechanism". Such mechanisms and perhaps even more clever ones are probably used by Nature at many other places. They may guarantee that any potential contradiction or unnaturalness of small parameters is even more innocent than it looks.
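The "widest neighbour ratio" claim is easy to reproduce. In the sketch below (my own; the masses are rough PDG-style values in GeV, not numbers quoted in the post), the widest gap on the logarithmic axis comes out around 20, the same ballpark as the "about 15" above, with the exact figure depending on which light-quark masses one adopts:

```python
# Sort approximate masses (GeV; ballpark values, assumed for illustration) of
# the massive Standard Model particles above the neutrinos, then find the
# widest ratio between neighbours on the logarithmic mass axis.
masses = {
    "electron": 5.11e-4, "up": 2.2e-3, "down": 4.7e-3, "strange": 9.5e-2,
    "muon": 0.106, "charm": 1.27, "tau": 1.78, "bottom": 4.18,
    "W": 80.4, "Z": 91.2, "Higgs": 125.0, "top": 173.0,
}
ordered = sorted(masses.items(), key=lambda kv: kv[1])
ratios = [(b[0], a[0], b[1] / a[1]) for a, b in zip(ordered, ordered[1:])]
widest = max(ratios, key=lambda r: r[2])
print(f"widest neighbour ratio: {widest[0]}/{widest[1]} ~ {widest[2]:.0f}")
```

With these inputs the widest gap is strange/down; the W/bottom gap is nearly as wide, so the quasi-continuum really has no huge holes.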
To summarize, I don't think that the null results of the 4+ post-Higgs years of the LHC contradict any theories or principles about Nature that we have learned. The discovery of new particles was possible but it was never guaranteed and the belief that it would emerge has always been driven by wishful thinking to one extent or another (or by someone's plan to become famous as quickly as possible). I need to emphasize that this opinion of mine is in no way new. I've believed the same thing for decades. The fact that some parameters are as small as $$0.01$$ or $$0.001$$ isn't a terribly strong hint of anything. After all, the ordinary fine-structure constant is $$1/137.036$$ and we don't think that this small value poses some amazingly difficult hierarchy problem in Nature. The constant $$\alpha\sim 1/137.036$$ is small partly because of the $$1/4\pi$$ that is naturally incorporated in it, partly because of the smallness of the gauge couplings or the ratio (angle) calculated from them, partly because of the RG running that makes electromagnetism intrinsically weaker at long distances, and partly because of some "slightly less than one" values of the fundamental gauge couplings at the GUT scale. There's simply no "shocking", unexplained smallness of $$\alpha$$. The "at least moderate desert" above the Standard Model has become a fact. Standard Model physics has become a bit of a lonely place but the Standard Model is consistent up to much higher energies, plausible explanations for the relative smallness of its one dimensionful parameter exist, and one simply cannot derive any strong contradiction. One cannot draw any big conclusion – e.g. that the world requires the anthropic principle – either. The estimated distance from the Standard Model island to the next islands or continents has simply increased a bit. That's it. It's probably not an exciting enough "story" but it seems to be the truth, anyway.
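The point about the $$1/4\pi$$ hiding inside $$\alpha$$ can be made slightly more quantitative. In Heaviside–Lorentz natural units $$\alpha = e^2/4\pi$$, so the measured $$1/137.036$$ already corresponds to an electromagnetic coupling $$e$$ that is "slightly less than one" (a minimal sketch of my own, not from the post):

```python
import math

# alpha = e^2 / (4*pi) in Heaviside-Lorentz natural units, so the "small"
# 1/137 hides a factor of 1/(4*pi) and a coupling e only mildly below one.
alpha = 1 / 137.036
e = math.sqrt(4 * math.pi * alpha)
print(f"e = {e:.3f}")   # an O(1) coupling, no shocking smallness
```

So the apparently tiny $$1/137$$ unpacks into an order-one coupling once the loop factor is stripped off, which is exactly the kind of decomposition the paragraph above describes.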
Geological metaphor

I actually think that the very analogy with the "islands" in geology implies a similar conclusion, "not a big deal". Imagine that you find yourself living on an island. What's the distance of this island from the closest other landmass – island or continent – divided by the Earth's radius? The naive naturalness consideration would lead you to say that the distance between continents is comparable to the Earth's radius – it's surely true for Europe (or Eurasia) and America or Australia or Antarctica. The islands should be uniformly distributed in the remaining ocean or seas so their distance from the nearest continent should be comparable to the Earth's radius, too. However, you find many more islands that are actually close to continents. For example, Greece has lots of islands from which the illegal immigrants may swim to the European continent. It makes sense that the islands are actually closer to the continents: the waters are shallower near the continental beaches, which makes islands – localized fluctuations of the altitude above zero – more likely. Such a nearby landmass may serve as a candidate for an "explanation" of why your island exists at all: it's some random fluctuation added next to some bigger nearby landmass. However, some islands are very far from the continents or other islands, too. Their distance may be comparable to the Earth's radius but it may also be "in between", e.g. 500 km. How is it possible? Well, it's just possible. I think that the argumentation in the case of geology – why there are islands whose distance from the continents is either very small or intermediate or comparable to the Earth's radius – is qualitatively equivalent to the argumentation why $$\mu^2$$ may be very close to the $$\Lambda^2$$ of some new physics, four orders of magnitude lower, or maybe even 30 orders of magnitude lower. Some islands are there as disconnected pieces of nearby continents. Others are tiny continents by themselves.
Some islands may result from the landing of an asteroid. Yet another group of islands were libertarian paradises paid for and built by Peter Thiel, and so on. The explanations of the "islands of phenomena in particle physics" may be analogously diverse although you shouldn't take my list of causes literally. The belief in a clumping of islands and continents (or other islands) is just a belief, not a very solid argument. And when applied consistently within a theory of everything, it's basically equivalent to the opinion that the big oceans shouldn't exist at all (or, in physics, that deserts are prohibited). The world in which the "islands of physics" would be densely distributed may look more intriguing or exciting to some people but that doesn't make it more likely. And the Universes with big deserts may be not only likely but also very elegant and intriguing, too. Their sexiness is of a different type than the sexiness of archipelagos near continents but it's very real. One must be able to see that some hypotheses are just hypotheses and they are driven by prejudices, not rational arguments, and the belief that new physics "must always be around the corner" was always a prejudice. It's possible but it's also possible that it's false.

## March 24, 2017

### Christian P. Robert - xi'an's og

The Hanging Tree
This is the ~~fifth~~ sixth volume of Ben Aaronovitch’s Rivers of London series. Which features PC Peter Grant from London’s Metropolitan Police specialising in paranormal crime. Joining a line of magicians that was started by Isaac Newton. And with the help of water deities. Although this English magic sleuthing series does not compare with the superlative Jonathan Strange & Mr.
Norrell single book, The Hanging Tree remains highly enjoyable, maybe more for its style and vocabulary than for the detective story itself, which does not sound completely coherent (unless I read it too quickly during the wee hours in Banff last week). And does not bring much about this part of London. Still a pleasure to read as the long term pattern of Aaronovitch’s universe slowly unravels and some characters get more substance and depth.

### Symmetrybreaking - Fermilab/SLAC

A new gem inside the CMS detector
This month scientists embedded sophisticated new instruments in the heart of a Large Hadron Collider experiment. Sometimes big questions require big tools. That’s why a global community of scientists designed and built gigantic detectors to monitor the high-energy particle collisions generated by CERN’s Large Hadron Collider in Geneva, Switzerland. From these collisions, scientists can retrace the footsteps of the Big Bang and search for new properties of nature. The CMS experiment is one such detector. In 2012, it co-discovered the elusive Higgs boson with its sister experiment, ATLAS. Now, scientists want CMS to push beyond the known laws of physics and search for new phenomena that could help answer fundamental questions about our universe. But to do this, the CMS detector needed an upgrade. “Just like any other electronic device, over time parts of our detector wear down,” says Steve Nahn, a researcher at the US Department of Energy’s Fermi National Accelerator Laboratory and the US project manager for the CMS detector upgrades. “We’ve been planning and designing this upgrade since shortly after our experiment first started collecting data in 2010.” The CMS detector is built like a giant onion. It contains layers of instruments that track the trajectory, energy and momentum of particles produced in the LHC’s collisions.
The vast majority of the sensors in the massive detector are packed into its center, within what is called the pixel detector. The CMS pixel detector uses sensors like those inside digital cameras but with a lightning-fast shutter speed: in three dimensions, they take 40 million pictures every second. For the last several years, scientists and engineers at Fermilab and 21 US universities have been assembling and testing a new pixel detector to replace the current one as part of the CMS upgrade, with funding provided by the Department of Energy Office of Science and the National Science Foundation. The pixel detector consists of three sections: the innermost barrel section and two end caps called the forward pixel detectors. The tiered, can-like structure gives scientists a near-complete sphere of coverage around the collision point. Because the three pixel detectors fit on the beam pipe like three bulky bracelets, engineers designed each component as two half-moons, which latch together to form a ring around the beam pipe during the insertion process. Over time, scientists have increased the rate of particle collisions at the LHC. In 2016 alone, the LHC produced about as many collisions as it had during the three years of its first run combined. To be able to differentiate between dozens of simultaneous collisions, CMS needed a brand-new pixel detector. The upgrade packs even more sensors into the heart of the CMS detector. It’s as if CMS graduated from a 66-megapixel camera to a 124-megapixel camera. Each of the two forward pixel detectors is a mosaic of 672 silicon sensors, robust electronics and bundles of cables and optical fibers that feed electricity and instructions in and carry raw data out, according to Marco Verzocchi, a Fermilab researcher on the CMS experiment. The multipart, 6.5-meter-long pixel detector is as delicate as raw spaghetti. Installing the new components into a gap the size of a manhole required more than just finesse.
It required months of planning and extreme coordination. “We practiced this installation on mock-ups of our detector many times,” says Greg Derylo, an engineer at Fermilab. “By the time we got to the actual installation, we knew exactly how we needed to slide this new component into the heart of CMS.” The most difficult part was maneuvering the delicate components around the pre-existing structures inside the CMS experiment. “In total, the full three-part pixel detector consists of six separate segments, which fit together like a three-dimensional cylindrical puzzle around the beam pipe,” says Stephanie Timpone, a Fermilab engineer. “Inserting the pieces in the right positions and right order without touching any of the pre-existing supports and protections was a well-choreographed dance.” For engineers like Timpone and Derylo, installing the pixel detector was the last step of a six-year process. But for the scientists working on the CMS experiment, it was just the beginning. “Now we have to make it work,” says Stefanos Leontsinis, a postdoctoral researcher at the University of Colorado, Boulder. “We’ll spend the next several weeks testing the components and preparing for the LHC restart.”

### Peter Coles - In the Dark

March for Europe
Just a quick post to say that I’ll be travelling from Cardiff to London first thing tomorrow morning in order to take part in this March to Parliament. After Wednesday’s terrorist attack near the Palace of Westminster, there has been some talk – some of it apparently emanating from BrExit supporters wanting to sabotage the event – about cancelling this demonstration against the folly of BrExit, which also celebrates the 60th anniversary of the signing of the Treaty of Rome, but I’m glad to say it is going ahead. I think Wednesday’s events make it even more important that we exercise our democratic rights, including the right to engage in peaceful protest. The march goes ahead with the full support of the Police.
For more details please see the facebook page here. I hope this will be a big one!

## March 23, 2017

### Christian P. Robert - xi'an's og

parameter space for mixture models
“The paper defines a new solution to the problem of defining a suitable parameter space for mixture models.” When I received the table of contents of the incoming Statistics & Computing and saw a paper by V. Maroufy and P. Marriott about the above, I was quite excited about a new approach to mixture parameterisation. Especially after our recent reposting of the weakly informative reparameterisation paper. Alas, after reading the paper, I fail to see the (statistical) point of the whole exercise. Starting from the basic fact that mixtures face many identifiability issues, not only invariance under component permutation but also the possibility of adding spurious components, the authors move to an entirely different galaxy by defining mixtures of so-called local mixtures. Developed by one of the authors. The notion is just incomprehensible to me: the object is a weighted sum of the basic component of the original mixture, e.g., a Normal density, and of k of its derivatives wrt its mean, a sort of parameterised Taylor expansion. Which implies the parameter is unidimensional, incidentally. The weights of this strange mixture are furthermore constrained by the positivity of the resulting mixture, a constraint that seems impossible to satisfy in the Normal case when the number of derivatives is odd. And hard to analyse in any case since possibly negative components do not enjoy an interpretation as a probability density. In exponential families, the local mixture is the original exponential family density multiplied by a polynomial. The current paper moves one step further [from the reasonable] by considering mixtures [in the standard sense] of such objects. Whose components are parameterised by their mean parameter and a collection of weights.
The authors then restrict the mean parameters to belong to a finite and fixed set, whose elements are constrained by a maximum error rate on any compound distribution derived from this exponential family structure. The remainder of the paper discusses the choice of the mean parameters and an EM algorithm to estimate the parameters, with a confusing lower bound on the mixture weights that impacts the estimation of the weights. And no mention made of the positivity constraint. I remain completely bemused by the paper and its purpose: I do not even fathom how this qualifies as a mixture.

### Emily Lakdawalla - The Planetary Society Blog

Here's our exhaustive guide to Trump's 392-word NASA budget
We break down every sentence from Trump's new NASA budget, so you don't have to

### Peter Coles - In the Dark

London looking back
I thought I’d do a quick post as a reaction to yesterday’s terrible events in London, in which four people lost their lives and several are still critically injured. We now know that the attacker was British and that he was known to the intelligence services. He appears to have acted alone and was armed with knives; he drove an ordinary car onto the pavement, hitting a number of people, before crashing the car and managing to stab a police officer to death before he was himself shot and killed. Whatever his motivations were, it looks more likely on the basis of information currently available that these were the actions of a crazed individual than part of an international terrorist conspiracy. We should, however, avoid jumping to conclusions and wait for the investigation to be completed. The first thing I want to do is to express my condolences to the families and friends of those who lost their lives.
My thoughts are also with those who were critically injured and I hope with all my heart that they will all recover speedily and completely. Physical healing will take time, but they will need help, support and time to come to terms with the mental trauma too. The same is true for those who were caught up in this attack and received minor injuries or even just witnessed what happened, because they must have been shocked by the experience. I hope they receive all the help they need at what must be a very difficult time. The second point is that it’s clear that the police and other emergency services acted with great courage and professionalism yesterday. One policeman sadly died, but the swift actions of his colleagues prevented further loss of life. Ambulances, paramedics and members of the public all responded magnificently to care for those injured, and we shall probably find that their response saved many lives too. They deserve all our thanks. Finally, I noticed a number of ill-informed comments on Twitter from the usual gang of Far-Right hate-mongers, especially professional troll Katie Hopkins, claiming that London was “cowed” and “afraid” because of this attack. I don’t believe that for one minute, and I want to explain why. I lived in London for about eight years (between 1990 and 1998). During that time I found myself in relatively close proximity to three major bomb explosions, though fortunately I wasn’t close enough to be actually harmed. I also concluded that my proximity to these events was purely coincidental… The first, in 1993, was the Bishopsgate Bombing. I happened to be looking out of the kitchen window of my flat in Bethnal Green when that bomb went off. I had a clear view across Weavers Fields towards the City of London and saw the explosion happen. I heard it too, several seconds later, loud enough to set off the car alarms in the car park beneath my window.
This picture, from the relevant Wikipedia page, shows the devastation of the area affected by the blast. The other two came in quick succession. First, a large bomb exploded in London Docklands on Friday February 8th 1996, at around 5pm, when our regular weekly Astronomy seminar was just about to finish at Queen Mary College on the Mile End Road. We were only a couple of miles from the blast, but I don’t remember hearing anything and it was only later that I found out what had happened. Then, on the evening of Sunday 18th February 1996, I was in a fairly long queue trying to get into a night club in Covent Garden when there was a loud bang followed by a tinkling sound caused by pieces of glass falling to the ground. It sounded very close, but I was in a narrow street surrounded by tall buildings and it was hard to figure out which direction the sound had come from. It turned out that someone had accidentally detonated a bomb on a bus in Aldwych, apparently en route to plant it somewhere else (probably King’s Cross). What I remember most about that evening was that it took me a very long time to get home. Several blocks around the site of the explosion were cordoned off. I lived in the East End, on the wrong side of the sealed-off area, so I had to find a way around it before heading home. No buses or taxis were to be found so I had to walk all the way. I arrived home in the early hours of the morning. Anyway, my point is that amid these awful terrorist atrocities of the 1990s, people were not “cowed” or “afraid”. Londoners are made of sterner stuff than that. It is true that one’s immediate response when confronted with, e.g., a bomb explosion is to be a bit rattled. I’m sure that was true for many Londoners yesterday. That soon gives way to a determination to get on with your life and not let the bastards win.
The events of the 1990s gave us a London of road blocks, security barriers and many other irritating inconveniences, but they did not bring the city to a standstill, as some have suggested happened yesterday. For the most part it was “business as usual”. I don’t live in London anymore, but I think Londoners are as unlikely to be frightened today as they were back then. And it will take much more than one man to “shut down the city”. As a matter of fact, I think only a coward would suggest otherwise. P.S. I forgot to mention another event, in 2005, when I was at the precise location of a bomb explosion but precisely 24 hours early…

### Emily Lakdawalla - The Planetary Society Blog

A repeat of the space shuttle's bold test flight? NASA considers crew aboard first SLS mission
NASA has only flown astronauts aboard a rocket's first flight once, when John Young and Bob Crippen took space shuttle Columbia on the boldest test flight in history. What are the risks of repeating the feat for SLS?

### Peter Coles - In the Dark

Composed upon Westminster Bridge, September 3 1802, by William Wordsworth
Earth has not anything to show more fair:
Dull would he be of soul who could pass by
A sight so touching in its majesty:
This City now doth, like a garment, wear
The beauty of the morning; silent, bare,
Ships, towers, domes, theatres, and temples lie
Open unto the fields, and to the sky;
All bright and glittering in the smokeless air.
Never did sun more beautifully steep
In his first splendour, valley, rock, or hill;
Ne’er saw I, never felt, a calm so deep!
The river glideth at his own sweet will:
Dear God! the very houses seem asleep;
And all that mighty heart is lying still!
by William Wordsworth (1770-1850)

### Christian P. Robert - xi'an's og

and it only gets worse…
“Trump wants us to associate immigrants with criminality. That is the reason behind a weekly published list of immigrant crimes – the first of which was made public on Monday.
Singling out the crimes of undocumented immigrants has one objective: to make people view them as deviant, dangerous and fundamentally undesirable.” The Guardian, March 22, 2017
“‘I didn’t want this job. I didn’t seek this job,’ Tillerson told the Independent Journal Review (IJR) in an interview (…) ‘My wife told me I’m supposed to do this.’” The Guardian, March 22, 2017
“…under the GOP plan, it estimated that 24 million people of all ages would lose coverage over 10 years (…) Trump’s plan, for instance, would cut $5.8 billion from the National Institutes of Health, an 18 percent drop for the $32 billion agency that funds much of the nation’s research into what causes different diseases and what it will take to treat them.” The New York Times, March 5, 2017
Filed under: Kids, pictures, Travel, University life Tagged: Donald Trump, GOP, ice, immigration, NIH, The Guardian, The New York Times, trumpism, US politics

## March 22, 2017

### Christian P. Robert - xi'an's og

X-Outline of a Theory of Statistical Estimation
While visiting Warwick last week, Jean-Michel Marin pointed out and forwarded me this remarkable paper of Jerzy Neyman, published in 1937, and presented to the Royal Society by Harold Jeffreys.
“Leaving apart on one side the practical difficulty of achieving randomness and the meaning of this word when applied to actual experiments…”
“It may be useful to point out that although we are frequently witnessing controversies in which authors try to defend one or another system of the theory of probability as the only legitimate, I am of the opinion that several such theories may be and actually are legitimate, in spite of their occasionally contradicting one another. Each of these theories is based on some system of postulates, and so long as the postulates forming one particular system do not contradict each other and are sufficient to construct a theory, this is as legitimate as any other.
” This paper is fairly long, in part because Neyman starts by setting out Kolmogorov’s axioms of probability. This is of historical interest but is also needed for Neyman to oppose his notion of probability to Jeffreys’ (which is the same from a formal perspective, I believe!). He actually spends a fair chunk explaining why constants cannot have anything but trivial probability measures, getting ready to state that an a priori distribution has no meaning (p.343) and that in the rare cases it does it is mostly unknown. While reading the paper, I thought that the distinction was more in terms of frequentist or conditional properties of the estimators, Neyman’s arguments paving the way to his definition of a confidence interval, assuming repeatability of the experiment under the same conditions and hence the same parameter value (p.344).
“The advantage of the unbiassed [sic] estimates and the justification of their use lies in the fact that in cases frequently met the probability of their differing very much from the estimated parameters is small.”
“…the maximum likelihood estimates appear to be what could be called the best “almost unbiassed [sic]” estimates.”
It is also quite interesting to read that the principle for insisting on unbiasedness is one of producing small errors, because this is not that often the case, as shown by the complete class theorems of Wald (ten years later). And that maximum likelihood is somewhat relegated to a secondary rank, almost unbiased being understood as consistent. A most amusing part of the paper is when Neyman inverts the credible set into a confidence set, that is, turning what is random into a constant and vice versa. With a justification that the credible interval has zero or one coverage, while the confidence interval has a long-run validity of returning the correct rate of success.
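Neyman’s long-run reading of a confidence interval is easy to check numerically. Here is a minimal sketch of my own (not from the paper): repeating the experiment with the same parameter value, the interval $\bar{x} \pm 1.96\,\sigma/\sqrt{n}$ for a normal mean covers the true value in roughly 95% of repetitions, even though any one realized interval either contains it or not.

```python
import math
import random

def coverage(mu=2.0, sigma=1.0, n=25, reps=20000, z=1.96, seed=42):
    """Long-run frequency with which the interval xbar +/- z*sigma/sqrt(n)
    covers the fixed, true mean mu over repeated experiments."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        xs = [rng.gauss(mu, sigma) for _ in range(n)]
        xbar = sum(xs) / n
        half = z * sigma / math.sqrt(n)
        if xbar - half <= mu <= xbar + half:
            hits += 1
    return hits / reps

print(coverage())  # a frequency close to 0.95
```

Each realized interval has “zero or one” coverage, exactly as Neyman notes; only the procedure has the 95% guarantee.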
What is equally amusing is that the boundaries of a credible interval turn into functions of the sample, hence could be evaluated on a frequentist basis, as done later by Dennis Lindley and others like Welch and Peers, but that Neyman fails to see this and turns the bounds into hard values for a given sample.
“This, however, is not always the case, and in general there are two or more systems of confidence intervals possible corresponding to the same confidence coefficient α, such that for certain sample points, E’, the intervals in one system are shorter than those in the other, while for some other sample points, E”, the reverse is true.”
The resulting construction of a confidence interval is then awfully convoluted when compared with the derivation of an HPD region, going through regions of acceptance that are the dual of a confidence interval (in the sampling space), while apparently [from my hasty read] missing a rule to order them. And rejecting the notion of a confidence interval being possibly empty, which, while being of practical interest, clashes with its frequentist backup. Filed under: Books, Statistics, University life Tagged: Bayesian Analysis, confidence intervals, credible intervals, Dennis Lindley, Harold Jeffreys, inference, Jerzy Neyman, maximum likelihood estimation, unbiasedness, University of Warwick, X-Outline

### Peter Coles - In the Dark

Keep Calm and Carry On
I had just finished my biggest task of the day and stopped to make a cup of tea, when I caught the news of a serious incident on Westminster Bridge in London, in which it seems several lives have been lost. My thoughts are with my friends and colleagues in London at this very scary time and, above all, with those who have been affected directly by this terrible event. I hope everyone will keep as calm as possible and avoid jumping to conclusions about who is responsible, and let the police and security services get on with doing their job.
### The n-Category Cafe

Functional Equations VII: The p-Norms
The $p$-norms have a nice multiplicativity property:
$$\|(A x, A y, A z, B x, B y, B z)\|_p = \|(A, B)\|_p \, \|(x, y, z)\|_p$$
for all $A, B, x, y, z \in \mathbb{R}$ — and similarly, of course, for any numbers of arguments. Guillaume Aubrun and Ion Nechita showed that this condition completely characterizes the $p$-norms. In other words, any system of norms that’s multiplicative in this sense must be equal to $\|\cdot\|_p$ for some $p \in [1, \infty]$. And the amazing thing is, to prove this, they used some nontrivial probability theory. All this is explained in this week’s functional equations notes, which start on page 26 here.

## March 21, 2017

### Peter Coles - In the Dark

R.I.P. Colin Dexter (1930-2017)
I was saddened this afternoon to hear of the death, at the age of 86, of Colin Dexter, the novelist who created the character of Inspector Morse, memorably played in the long-running TV series of the same name by John Thaw. The television series of Inspector Morse came to an end in 2000, with a poignant episode called The Remorseful Day, but has led to two successful spin-offs, Lewis and Endeavour, both of which are still running. Colin Dexter regularly appeared in both Inspector Morse and Lewis, mainly in non-speaking roles, and part of the fun of these programmes was trying to spot him in the background. As a crime writer, Colin Dexter was definitely in the ‘English’ tradition of Agatha Christie, in that his detective stories relied more on cleverly convoluted plots than depth of characterization, but the central character of Morse was a brilliant creation in itself and is rightly celebrated.
Crime fiction is too often undervalued in literary circles, but I find it a fascinating genre and Colin Dexter was a fine exponent. Colin Dexter was also an avid solver of crossword puzzles, a characteristic shared by his Detective Inspector Morse. In fact I met Colin Dexter once, back in 2010, at a lunch to celebrate the 2000th Azed puzzle in the Observer, which I blogged about here. Colin Dexter used to be a regular entrant – and often a winner – in Azed’s monthly clue-setting competition, but I haven’t seen his name among the winners for a while. You can see his outstanding record on the “&lit” archive here. I guess he retired from crosswords just as he had done from writing crime novels. To be honest, he seemed quite frail back in 2010, so I’m not surprised he decided to take it easy in his later years. Incidentally, Colin Dexter took the name ‘Morse’ from his friend Jeremy Morse, another keen cruciverbalist. Sadly he passed away last year, at the age of 87. Jeremy Morse was another frequent winner of the Azed competition and he produced some really cracking clues – you can find them all on the “&lit” archive too. Here’s a little cryptic tribute: Morse inventor developed Nordic Telex (5,6) Now I think I’ll head home to cook my traditional mid-week vegetable curry, have a glass of wine, and see if I can watch a DVD of the last episode of Inspector Morse without crying. R.I.P. Norman Colin Dexter (1930-2017)

### Emily Lakdawalla - The Planetary Society Blog

Unraveling a Martian enigma: The hidden rivers of Arabia Terra
Arabia Terra has always been a bit of a martian enigma. Planetary scientist Joel Davis takes us on a tour of its valley networks and their significance in telling the story of water on Mars.

### Symmetrybreaking - Fermilab/SLAC

High-energy visionary
Meet Hernán Quintana Godoy, a scientist who helped make Chile central to international astronomy.
Professor Hernán Quintana Godoy has a way of taking the long view, peering back into the past through distant stars while looking ahead to the future of astronomy in his home, Chile. For three decades, Quintana has helped shape the landscape of astronomy in Chile, host to some of the largest ground-based observatories in the world. In January he became the first recipient of the Education Prize of the American Astronomical Society from a country other than the United States or Canada. “Training the next generation of astronomers should not be limited to just a few countries,” says Keely Finkelstein, former chair of the AAS Education Prize Committee. “[Quintana] has been a tireless advocate for establishing excellent education and research programs in Chile.” Quintana earned his doctorate from the University of Cambridge in the United Kingdom in 1973. The same year, a military junta headed by General Augusto Pinochet took power in a coup d’état. Quintana came home and secured a teaching position at the University of Chile. At the time, Chilean researchers mainly focused on the fundamentals of astronomy—measuring the radiation from stars and calculating the coordinates of celestial objects. By contrast, Quintana’s dissertation on high-energy phenomena seemed downright radical. A year and a half after taking his new job, Quintana was granted a leave of absence to complete a post-doc abroad. Writing from the United States, Quintana published an article encouraging Chile to take better advantage of its existing international observatories. He urged the government to provide more funding and to create an environment that would encourage foreign-educated astronomers to return home to Chile after their postgraduate studies. The article did not go over well with the administration at his university. “I wrote it for a magazine that was clearly against Pinochet,” Quintana says. “The magazine cover was a black page with a big ‘NO’ in red” related to an upcoming referendum. 
The University of Chile dissolved Quintana’s teaching position. Quintana became a wandering postdoc and research associate in Europe, the US and Canada. It wasn’t until 1981 that Quintana returned to teach at the Physics Institute at the Pontifical Catholic University of Chile. He continued to push the envelope at PUC. He created elective courses on general astronomy, extragalactic astrophysics and cluster dynamics. He revived and directed a small astronomy group. He encouraged students to expand their horizons by hiring both Chilean and foreign teachers and sending students to study abroad. “Because of him I took advantage of most of the big observatories in Chile and had an international perspective of research from the very beginning of my career,” says Amelia Ramirez, who studied with Quintana in 1983. A specialist in interacting elliptical galaxies, she is now head of Research and Development at the University of La Serena. In the mid-1980s Quintana became the scriptwriter for a set of distance-learning astronomy classes produced by the educational division of his university’s public TV channel, TELEDUC. He challenged his viewers to take on advanced topics—and they responded.
(Illustration by Corinne Mucha)
“I even introduced two episodes on relativity theory,” Quintana says. “This shocked them. The reception was so good that I wrote a whole book on the subject.” The station partnered with universities and institutions across Chile to provide viewers the opportunity to earn a diploma by taking a written test based on the televised material. More than 5000 people enrolled during the four-year broadcasting period. “What stands out [about Quintana] is his strategic vision and his creativity to materialize projects,” says Alejandro Clocchiatti, a professor at PUC who worked with Quintana for 20 years. “All he does is with dedication and enthusiasm, even if things don’t go according to plan.
He’s got an unbeatable optimism.” Over the years, Quintana has had a hand in planning the locations of multiple new telescopes in Chile. In 1994 he guided an expedition to identify the location of the Atacama Large Millimeter Array, a collection of 66 high-precision antennae. In 1998, PUC finally responded to decades of advocacy by Quintana and his colleagues and opened a new major in astronomy. Gradually more universities followed suit. Quintana retired three years ago. He is optimistic about the future of Chilean astronomy. It has grown from a collection of 25 professors and their students in the late ’90s to a community of more than 800 students, teachers and researchers. He says he is looking forward to the new discoveries forthcoming instruments will bring. The European Extremely Large Telescope, under construction on Cerro Armazones in the Atacama Desert of northern Chile, is expected to produce images 16 times sharper than Hubble’s. The southern facilities of the Cherenkov Telescope Array, a planned collection of 99 telescopes in Chile, will complement a northern array to complete the world’s most sensitive high-energy gamma-ray observatory. Both arrangements will peer into super-massive black holes, the atmospheres of extra-solar planets, and the origin of relativistic cosmic particles. “Everything in our universe is constantly changing,” Quintana says. “We are all heirs of that structural evolution.”

### The n-Category Cafe

On the Operads of J. P. May
Guest post by Simon Cho
We continue the Kan Extension Seminar II with Max Kelly’s On the operads of J. P. May. As we will see, the main message of the paper is that (symmetric) operads enriched in a suitably nice category $\mathcal{V}$ arise naturally as monoids for a “substitution product” in the monoidal category $[\mathbf{P}, \mathcal{V}]$ (where $\mathbf{P}$ is a category that keeps track of the symmetry).
Before we begin, I want to thank the organizers and participants of the Kan Extension Seminar (II) for the opportunity to read and discuss these nice papers with them. Some time ago, in her excellent post about Hyland and Power’s paper, Evangelia described what Lawvere theories are about. We might think of Lawvere theories as a way to frame algebraic structure by stratifying the different components of an algebraic structure into roughly three ascending levels of specificity: the product structure, the specific algebraic operations (meaning, other than projections, etc.), and the models of that algebraic structure. These structures are manifested categorically through (respectively) the category $\aleph_0^{\text{op}}$ of finite sets and (the duals of) maps between them, a category $\mathcal{L}$ with finite products that has the same objects as $\aleph_0$, and some other category $\mathcal{C}$ with finite products. Then a Lawvere theory is just a strict product preserving functor $I: \aleph_0^{\text{op}} \rightarrow \mathcal{L}$, and a model or interpretation of a Lawvere theory is a (non-strict) product preserving functor $M: \mathcal{L} \rightarrow \mathcal{C}$. Thus $\aleph_0^{\text{op}}$ specifies the bare product structure (with the attendant projections, etc.)
which gives us a notion of what it means to be “$n$-ary” for some given $n$; $I$ then transfers this notion of arity to the category $\mathcal{L}$, whose shape describes the specific algebraic structure in question (think of the diagrams one uses to categorically define the group axioms, for example); $M$ then gives a particular manifestation of the algebraic structure $\mathcal{L}$ on an object $M \circ I(1) \in \mathcal{C}$. The reason I bring this up is that I like to think of operads as what results when we make the following change of perspective on Lawvere theories: whereas models of Lawvere theories are essentially given by specifying a “ground set of elements” $A \in \mathcal{C}$ and taking as the $n$-ary operations morphisms $A^n \rightarrow A$, we now consider a hypothetical category whose ($n$-indexed) objects themselves are the homsets $\mathcal{C}(A^n, A)$, along with some machinery that keeps track of what happens when we permute the argument slots.

#### Cosmos structure on $[\mathbf{P}, \mathcal{V}]$

More precisely, consider the category $\mathbf{P}$ with objects the natural numbers, and morphisms $\mathbf{P}(m,n)$ given by $\mathbf{P}(n,n) = \Sigma_n$ (the symmetric group on $n$ letters) and $\mathbf{P}(m,n) = \emptyset$ for $m \neq n$. Let $\mathcal{V}$ be a cosmos, that is, a complete and cocomplete symmetric monoidal closed category with identity $I$ and internal hom $[-,-]$. Fix $A \in \mathcal{V}$.
The assignment $n \mapsto [A^{\otimes n}, A]$ defines a functor $\mathbf{P} \rightarrow \mathcal{V}$ (where functoriality in $\mathbf{P}$ comes from the symmetry of the tensor product in $\mathcal{V}$). This turns out to be a typical example of a $\mathcal{V}$-operad, which we call the “endomorphism operad” on $A$. In order to actually define what an operad is, we need to lay some groundwork. (A point of notation: we will henceforth denote $A^{\otimes n}$ by $A^n$.) We’ll need the fact that the functor $\mathcal{V}(I, -): \mathcal{V} \rightarrow \textbf{Sets}$ has a left adjoint $F$ given by $FX = \coprod_X I$. $F$ takes products to tensor products (since it’s a left adjoint and the tensor product in $\mathcal{V}$ distributes over coproducts), and in fact we can assume that it does so strictly. Henceforth for $X \in \textbf{Sets}$ and $A \in \mathcal{V}$ we write $X \otimes A$ to actually mean $FX \otimes A$.
We then get a cosmos structure on $\mathcal{F} = [\mathbf{P}, \mathcal{V}]$, given by Day convolution: for $T, S \in \mathcal{F}$ we have
$$T \otimes S = \int^{m,n} \mathbf{P}(m+n, -) \otimes Tm \otimes Sn$$
Since we are thinking of a given $T \in \mathcal{F}$ as a collection of operations (indexed by arity) on which we can act by permuting the argument slots, we can think of $(T \otimes S)k$ as a collection of the $k$-ary operations that we obtain by freely permuting $m$ argument slots of type $T$ and $n$ argument slots of type $S$ (where $m, n$ range over all pairs such that $m+n = k$), modulo respecting the previously given actions of $\Sigma_m$ (resp. $\Sigma_n$) on $Tm$ (resp. $Sn$). The identity is then given by $\mathbf{P}(0,-) \otimes I$.
Associativity and symmetry of the cosmos structure.
Now let $T, S, R \in \mathcal{F}$. If we unpack the definition, draw out some diagrams, and apply some abstract nonsense, we find that
$$T \otimes (S \otimes R) \simeq (T \otimes S) \otimes R \simeq \int^{m,n,k} \mathbf{P}(m+n+k, -) \otimes Tm \otimes Sn \otimes Rk$$
which we can again assume are actually equalities. Before we address the symmetry of this monoidal structure, we make a technical point. $\mathbf{P}$ itself has a symmetric monoidal structure, given by addition.
Thus for $n_1, \dots, n_m \in \mathbf{P}$ we have $n_1 + \cdots + n_m \in \mathbf{P}$. There is evidently an action of $\Sigma_m$ on this term, which we require to be in the “wrong” direction, so that $\xi \in \Sigma_m$ induces $\langle \xi \rangle: n_{\xi 1} + \cdots + n_{\xi m} \rightarrow n_1 + \cdots + n_m$ rather than the other way around. (However, for the symmetry of the monoidal structure on $\mathcal{V}$, given a product $A_1 \otimes \cdots \otimes A_m$ we require that the action of $\Sigma_m$ on this term is in the “correct” direction, i.e. $\xi \in \Sigma_m$ induces $\langle \xi \rangle: A_1 \otimes \cdots \otimes A_m \rightarrow A_{\xi 1} \otimes \cdots \otimes A_{\xi m}$.)
We thus have:
$$\begin{matrix} T_1 \otimes \cdots \otimes T_m &=& \int^{n_1, \dots, n_m} \mathbf{P}(n_1 + \cdots + n_m, -) \otimes T_1 n_1 \otimes \cdots \otimes T_m n_m \\ {\langle \xi \rangle} \Big\downarrow && \Big\downarrow {\mathbf{P}(\langle \xi \rangle, -) \otimes \langle \xi \rangle} \\ T_{\xi 1} \otimes \cdots \otimes T_{\xi m} &=& \int^{n_1, \dots, n_m} \mathbf{P}(n_{\xi 1} + \cdots + n_{\xi m}, -) \otimes T_{\xi 1} n_{\xi 1} \otimes \cdots \otimes T_{\xi m} n_{\xi m} \end{matrix}$$
Now $\langle \xi \rangle: n_{\xi 1} + \cdots + n_{\xi m} \rightarrow n_1 + \cdots + n_m$ extends to an action $\langle \xi \rangle: T_1 \otimes \cdots \otimes T_m \rightarrow T_{\xi 1} \otimes \cdots \otimes T_{\xi m}$ as we saw previously. Therefore we now have a functor $\mathbf{P}^{\text{op}} \times \mathcal{F} \rightarrow \mathcal{F}$ given by $(m, T) \mapsto T^m$, a fact which we will later use.
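As a toy Set-level illustration of this action (my own sketch, not from Kelly's paper), one can model each tensor factor as a block of elements and let $\xi$ rearrange whole blocks; acting twice then composes contravariantly, echoing the “wrong direction” remark above.

```python
def act(xi, blocks):
    """Model the action <xi> of a permutation xi in Sigma_m on an m-fold
    tensor T_1 (x) ... (x) T_m: send the tuple of blocks (T_1, ..., T_m)
    to (T_{xi 1}, ..., T_{xi m}), keeping each block intact.
    xi is encoded 0-based as the list [xi(1), ..., xi(m)]."""
    return [blocks[i] for i in xi]

blocks = [[1, 2], [3], [4, 5, 6]]   # three factors of different sizes
xi = [2, 0, 1]
print(act(xi, blocks))              # [[4, 5, 6], [1, 2], [3]]

# Acting by rho after xi agrees with acting once by the composite xi . rho:
# a *right* action, i.e. the "wrong direction" of the text.
rho = [1, 2, 0]
assert act(rho, act(xi, blocks)) == act([xi[i] for i in rho], blocks)
```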
$\mathcal{F}$ as a $\mathcal{V}$-category.
There is a way in which we can regard $\mathcal{V}$ as a full coreflective subcategory of $\mathcal{F}$: consider the functor $\phi: \mathcal{F} \rightarrow \mathcal{V}$ given by $\phi T = T0$. This has a right adjoint $\psi: \mathcal{V} \rightarrow \mathcal{F}$ given by $\psi A = \mathbf{P}(0, -) \otimes A$. The inclusion $\psi$ preserves all of the relevant monoidal structure, so we are justified in considering $A \in \mathcal{V}$ as either an object of $\mathcal{V}$ or of $\mathcal{F}$ (via the inclusion $\psi$). With this notation we can write, for $A \in \mathcal{V}$ and $T, S \in \mathcal{F}$:
$$\mathcal{F}(A \otimes T, S) \simeq \mathcal{V}(A, [T,S])$$
If $T, S \in \mathcal{F}$ then their $\mathcal{F}$-valued hom is given by $[[T,S]]$, where for $k \in \mathbf{P}$ we have
$$[[T,S]]k = \int_n [Tn, S(n+k)]$$
and their $\mathcal{V}$-valued hom, which makes $\mathcal{F}$ into a $\mathcal{V}$-category, is given by
$$[T,S] = \phi [[T,S]] = \int_n [Tn, Sn]$$

#### The substitution product

Let us return to our motivating example of the endomorphism operad (which we denote by
$\{A, A\}$) on $A$, for a fixed $A \in \mathcal{V}$. For now it’s just an object $\{A, A\} \in \mathcal{F}$; but it contains more structure than we’re currently using. Namely, for each $m, n_1, \dots, n_m \in \mathbf{P}$ we can give a morphism
$$[A^m, A] \otimes \left( [A^{n_1}, A] \otimes \cdots \otimes [A^{n_m}, A] \right) \rightarrow [A^{n_1 + \cdots + n_m}, A]$$
coming from evaluation (see the section below about the little $n$-disks operad for details). We would like a general framework for expressing such a notion of composing operations.
Definition of an operad.
Recall from the previous section that, for given $T \in \mathcal{F}$, we can consider $n \mapsto T^n$ as a functor $\mathbf{P}^{\text{op}} \rightarrow \mathcal{F}$. We can thus define a (non-symmetric!) product $T \circ S = \int^n Tn \otimes S^n$. It is easy to check that if $S \in \mathcal{V}$ then in fact $T \circ S \in \mathcal{V}$, so that $\circ$ can be considered as a functor either of type $\mathcal{F} \times \mathcal{F} \rightarrow \mathcal{F}$ or of type $\mathcal{F} \times \mathcal{V} \rightarrow \mathcal{V}$.
The clarity with which Kelly’s paper demonstrates the various important properties of this substitution product would be difficult for me to improve upon, so I simply list here the punchlines, and refer the reader to the original paper for their proofs:

- For $T, S \in \mathcal{F}$ and $n \in \mathbf{P}$, we have $(T \circ S)^n \simeq T^n \circ S$, which is natural in $T, S, n$. Using this and a Fubini-style argument we get associativity of $\circ$.
- $J = \mathbf{P}(1, -) \otimes I$ is the identity for $\circ$.
- For $S \in \mathcal{F}$, $- \circ S: \mathcal{F} \rightarrow \mathcal{F}$ has the right adjoint $\{S, -\}$ given by $\{S, R\}m = [S^m, R]$. Moreover, if $A \in \mathcal{V}$ then we in fact have $\mathcal{V}(T \circ A, B) \simeq \mathcal{F}(T, \{A, B\})$.

We can now define an operad as a monoid for $\circ$, i.e. some $T \in \mathcal{F}$ equipped with $\mu: T \circ T \rightarrow T$ and $\eta: J \rightarrow T$ satisfying the monoid axioms. Operad morphisms are morphisms $T \rightarrow T'$ that respect $\mu$ and $\eta$.

$\{A,A\}$ as an operad. Once again we turn back to the example of $\{A, A\} \in \mathcal{F}$.
Note that our choice to denote the endomorphism operad $(n \mapsto [A^n, A])$ by $\{A, A\}$ agrees with the construction of $\{A, -\}$ as the right adjoint to $- \circ A$. There is an evident evaluation map $\{A, A\} \circ A \xrightarrow{e} A$, so that we have the composition $\{A, A\} \circ \{A, A\} \circ A \xrightarrow{1 \circ e} \{A, A\} \circ A \xrightarrow{e} A$, which by adjunction gives us $\mu: \{A, A\} \circ \{A, A\} \rightarrow \{A, A\}$, which we take as our monoid multiplication. Similarly $J \circ A \simeq A$ corresponds by adjunction to $\eta: J \rightarrow \{A, A\}$. We thus have that $\{A, A\}$ is an operad. In fact it is the “universal” operad, in the following sense: every operad $T \in \mathcal{F}$ gives a monad $T \circ -$ on $\mathcal{F}$, or on $\mathcal{V}$ via restriction. Given $A \in \mathcal{F}$, algebra structures $h': T \circ A \rightarrow A$ for the monad $T \circ -$ on $A$ correspond precisely to operad morphisms $h: T \rightarrow \{A, A\}$. In this case we say that $h$ gives an algebra structure on $A$ for the operad $T$.
#### The little $n$-disks operad

There are some other aspects of operads that the paper looks at, but for this post I will abuse artistic license to talk about something else that isn’t exactly in the paper (although it is indirectly referenced): May’s little $n$-disks operad. For a great introduction to the following material I recommend Emily Riehl’s notes on Kathryn Hess’s two-part (I, II) talk on operads in algebraic topology. Let $\mathcal{V} = (\mathbf{Top}_{\text{nice}}, \times, \{*\})$, where $\mathbf{Top}_{\text{nice}}$ is one’s favorite cartesian closed category of topological spaces, with $\times$ the appropriate product in this category. Fix some $n \in \mathbb{N}$. For $k \in \mathbf{P}$, we let $d_n(k) = \text{sEmb}(\coprod_k D^n, D^n)$, the space of standard embeddings of $k$ copies of the closed unit $n$-disk in $\mathbb{R}^n$ into the closed unit $n$-disk in $\mathbb{R}^n$. By the space of standard embeddings we mean the subspace of the mapping space consisting of the maps which restrict on each summand to affine maps $x \mapsto \lambda x + c$ with $0 \leq \lambda \leq 1$.
Given $\xi \in \mathbf{P}(k, k)$ we have the evident action $\langle \xi \rangle: \text{sEmb}(\coprod_k D^n, D^n) \rightarrow \text{sEmb}(\coprod_{\xi k} D^n, D^n)$, which gives us a functor $d_n: \mathbf{P} \rightarrow \mathbf{Top}_{\text{nice}}$, so $d_n \in \mathcal{F}$. Fix some $k, l \in \mathbf{P}$; then $d_n^k(l) = \int^{m_1, \dots, m_k} \mathbf{P}(m_1 + \cdots + m_k, l) \otimes d_n(m_1) \otimes \cdots \otimes d_n(m_k)$, which we can roughly think of as all the different ways we can partition a total of $l$ disks into $k$ blocks, with the $i^{\text{th}}$ block having $m_i$ disks, and then map each block of $m_i$ disks into a single disk, all the while being able to permute the $l$ disks amongst themselves (without necessarily having to respect the partitions). We then get $\mu: d_n \circ d_n \rightarrow d_n$ by composing the disk embeddings.
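Before the precise construction, it may help to see the $n = 1$ case (“little intervals”) in code. This is a toy sketch of my own, not something from the post: an element of $d_1(k)$ is a $k$-tuple of affine maps $x \mapsto \lambda x + c$, and $\mu$ is literally composition of affine maps (disjointness of the images is not checked in this sketch).

```python
# The little 1-disks ("little intervals") operad, schematically.
# An element of d_1(k) is a list of k pairs (lam, c), each encoding an
# affine embedding x -> lam*x + c of [-1, 1] into itself; operadic
# composition places each inner configuration inside one outer interval.

def affine_compose(outer, inner):
    """(lam1*x + c1) after (lam2*x + c2) = lam1*lam2*x + lam1*c2 + c1."""
    lam1, c1 = outer
    lam2, c2 = inner
    return (lam1 * lam2, lam1 * c2 + c1)

def operad_compose(outer, inners):
    """mu: d_1(k) x (d_1(m_1) x ... x d_1(m_k)) -> d_1(m_1 + ... + m_k)."""
    assert len(outer) == len(inners)
    result = []
    for big, small_list in zip(outer, inners):
        result.extend(affine_compose(big, small) for small in small_list)
    return result

# Two little intervals; the first contains two smaller ones, the second one:
outer = [(0.5, -0.5), (0.25, 0.75)]            # an element of d_1(2)
inner = [[(0.5, -0.5), (0.5, 0.5)],            # an element of d_1(2)
         [(1.0, 0.0)]]                         # an element of d_1(1)
composite = operad_compose(outer, inner)       # an element of d_1(3)
```

The same bookkeeping, with $n$-dimensional rescaled-and-translated disks in place of intervals, is what the coend formula for $d_n^k(l)$ organizes.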
More precisely, for each $l$ we get a morphism $\mu_l: (d_n(k) \otimes d_n^k)l \simeq d_n(k) \otimes (d_n^k(l)) \rightarrow d_n(l)$ from the following considerations: first we note that

$$\begin{aligned} d_n(k) \otimes d_n(m_1) \otimes \cdots \otimes d_n(m_k) &= \text{sEmb}\Big(\coprod_k D^n, D^n\Big) \times \Big(\prod_{1 \leq i \leq k} \text{sEmb}\Big(\coprod_{m_i} D^n, D^n\Big)\Big)\\ &\simeq \text{sEmb}(D^n, D^n)^k \times \Big(\prod_{1 \leq i \leq k} \text{sEmb}\Big(\coprod_{m_i} D^n, D^n\Big)\Big)\\ &\simeq \prod_{1 \leq i \leq k} \Big(\text{sEmb}\Big(\coprod_{m_i} D^n, D^n\Big) \times \text{sEmb}(D^n, D^n)\Big). \end{aligned}$$

Now for each $i$ there is a map $\text{sEmb}(\coprod_{m_i} D^n, D^n) \times \text{sEmb}(D^n, D^n) \rightarrow \text{sEmb}(\coprod_{m_i} D^n, D^n)$ induced from iterated evaluation by adjunction. Then by the above, this gives a morphism

$$\begin{aligned} d_n(k) \otimes d_n(m_1) \otimes \cdots \otimes d_n(m_k) &\rightarrow \prod_{1 \leq i \leq k} \text{sEmb}\Big(\coprod_{m_i} D^n, D^n\Big)\\ &\simeq \text{sEmb}\Big(\coprod_{m_1 + \cdots + m_k} D^n, D^n\Big)\\ &= d_n(m_1 + \cdots + m_k). \end{aligned}$$

A big reason that the little $n$-disks operad is relevant to algebraic topology is that there is a big theorem stating that a space is weakly equivalent to an $n$-fold loop space if and only if it’s an algebra for $d_n$. One direction is straightforward: consider a space $A$ and its $n$-fold loop space $\Omega^n A$.
Given an element of $d_n(k)$ and $k$ choices of “little maps” $(D^n, \partial D^n) \rightarrow (A, \ast)$, we can stitch together these little maps into one large map $(D^n, \partial D^n) \rightarrow (A, \ast)$ according to the instructions specified by the chosen element of $d_n(k)$ (where we map everything in the complement of the $k$ little disks to the basepoint in $A$). Doing this for each $k$, we get an operad morphism $d_n \rightarrow \{\Omega^n A, \Omega^n A\}$. The other direction is much harder, and Maru gave an absolutely fantastic sketch of the basic story in our group discussions, which I hope she will post in the comments; I refrain from including it in the body of this post, partially for reasons of length and partially because I would just end up repeating verbatim what she said in the discussion.

### Clifford V. Johnson - Asymptotia

News from the Front, XIII: Holographic Heat Engines for Fun and Profit

I put a set of new results out on to the arxiv recently. They were fun to work out. They represent some of my continued fascination with holographic heat engines, those things I came up with back in 2014 that I think I've written about here before (here and here). For various reasons (that I've explained in various papers) I like to think of them as an answer waiting for the right question, and I've been refining my understanding of them in various projects, trying to get clues to what the question or questions might be. As I've said elsewhere, I seem to have got into the habit of using 21st Century techniques to tackle problems of a 19th Century flavour! The title of the paper is "Approaching the Carnot limit at finite power: An exact solution".
As you may know, the Carnot engine, whose efficiency is the best a heat engine can do (for specified temperatures of exchange with the hot and cold reservoirs), is itself not a useful practical engine. It is a perfectly reversible engine and as such takes infinite time to run a cycle. A zero-power engine is not of much practical use. So you might wonder how close a real engine can come to the Carnot efficiency... the answer should be that it can come arbitrarily close, but most engines don't, and so people who care about this sort of thing spend a lot of time thinking about how to design special engines that can come close. And there are various arguments you can make for how to do it in various special systems and so forth. It's all very interesting and there's been some important work done. What I realized recently is that my old friends the holographic heat engines are a very good tool for tackling this problem. Part of the reason is that the underlying working substance that I've been using is a black hole (or, if you prefer, is defined by a black hole), and such things are often captured as exact [...]

The post News from the Front, XIII: Holographic Heat Engines for Fun and Profit appeared first on Asymptotia.

## March 20, 2017

### Emily Lakdawalla - The Planetary Society Blog

Signed, sealed but not delivered: LightSail 2 awaits ship date

Following a pre-ship review at Planetary Society headquarters, LightSail 2 is ready to be integrated with its Prox-1 partner spacecraft. The final shipping schedule, however, has yet to be determined.

## March 19, 2017

### Jacques Distler - Musings

Responsibility

Many years ago, when I was an assistant professor at Princeton, there was a cocktail party at Curt Callan’s house to mark the beginning of the semester. There, I found myself in the kitchen, chatting with Sacha Polyakov.
I asked him what he was going to be teaching that semester, and he replied that he was very nervous because — for the first time in his life — he would be teaching an undergraduate course. After my initial surprise that he had gotten this far in life without ever having taught an undergraduate course, I asked which course it was. He said it was the advanced undergraduate Mechanics course (chaos, etc.) and we agreed that would be a fun subject to teach. We chatted some more, and then he said that, on reflection, he probably shouldn’t be quite so worried. After all, it wasn’t as if he was going to teach Quantum Field Theory, “That’s a subject I’d feel responsible for.” This remark stuck with me, but it never seemed quite so poignant until this semester, when I find myself teaching the undergraduate particle physics course. The textbooks (and I mean all of them) start off by “explaining” that relativistic quantum mechanics (e.g. replacing the Schrödinger equation with Klein-Gordon) makes no sense (negative probabilities and all that …). And they then proceed to use it anyway (supplemented by some Feynman rules pulled out of thin air). This drives me up the #@%^ing wall. It is precisely wrong. There is a perfectly consistent quantum mechanical theory of free particles. The problem arises when you want to introduce interactions. In Special Relativity, there is no interaction-at-a-distance; all forces are necessarily mediated by fields. Those fields fluctuate and, when you want to study the quantum theory, you end up having to quantize them. But the free particle is just fine. Of course it has to be: free field theory is just the theory of an (indefinite number of) free particles. So it better be true that the quantum theory of a single relativistic free particle makes sense. So what is that theory?

1. It has a Hilbert space, $\mathcal{H}$, of states.
To make the action of Lorentz transformations as simple as possible, it behoves us to use a Lorentz-invariant inner product on that Hilbert space. This is most easily done in the momentum representation: $\langle\chi|\phi\rangle = \int \frac{d^3\vec{k}}{(2\pi)^3\, 2\sqrt{\vec{k}^2+m^2}}\, \chi(\vec{k})^* \phi(\vec{k})$

2. As usual, the time-evolution is given by a Schrödinger equation

(1) $i\partial_t |\psi\rangle = H_0 |\psi\rangle$

where $H_0 = \sqrt{\vec{p}^2+m^2}$. Now, you might object that it is hard to make sense of a pseudo-differential operator like $H_0$. Perhaps. But it’s not any harder than making sense of $U(t) = e^{-i \vec{p}^2 t/2m}$, which we routinely pretend to do in elementary quantum mechanics. In both cases, we use the fact that, in the momentum representation, the operator $\vec{p}$ is represented as multiplication by $\vec{k}$. I could go on, but let me leave the rest of the development of the theory as a series of questions.

1.
The self-adjoint operator, $\vec{x}$, satisfies $[x^i, p_j] = i \delta^i_j$. Thus it can be written in the form $x^i = i\left(\frac{\partial}{\partial k_i} + f_i(\vec{k})\right)$ for some real function $f_i$. What is $f_i(\vec{k})$?

2. Define $J^0(\vec{r})$ to be the probability density. That is, when the particle is in state $|\phi\rangle$, the probability for finding it in some Borel subset $S \subset \mathbb{R}^3$ is given by $\text{Prob}(S) = \int_S d^3\vec{r}\, J^0(\vec{r})$. Obviously, $J^0(\vec{r})$ must take the form $J^0(\vec{r}) = \int \frac{d^3\vec{k}\, d^3\vec{k}'}{(2\pi)^6\, 4\sqrt{\vec{k}^2+m^2}\sqrt{\vec{k}'^2+m^2}}\, g(\vec{k},\vec{k}')\, e^{i(\vec{k}-\vec{k}')\cdot\vec{r}}\, \phi(\vec{k})\, \phi(\vec{k}')^*$. Find $g(\vec{k},\vec{k}')$. (Hint: you need to diagonalize the operator $\vec{x}$ that you found in problem 1.)

3. The conservation of probability says $0 = \partial_t J^0 + \partial_i J^i$. Use the Schrödinger equation (1) to find $J^i(\vec{r})$.

4. Under Lorentz transformations, $H_0$ and $\vec{p}$ transform as the components of a 4-vector. For a boost in the $z$-direction, of rapidity $\lambda$, we should have

$\begin{aligned} U_\lambda \sqrt{\vec{p}^2+m^2}\, U_\lambda^{-1} &= \cosh(\lambda) \sqrt{\vec{p}^2+m^2} + \sinh(\lambda)\, p_3\\ U_\lambda p_1 U_\lambda^{-1} &= p_1\\ U_\lambda p_2 U_\lambda^{-1} &= p_2\\ U_\lambda p_3 U_\lambda^{-1} &= \sinh(\lambda) \sqrt{\vec{p}^2+m^2} + \cosh(\lambda)\, p_3 \end{aligned}$

and we should be able to write $U_\lambda = e^{i\lambda B}$ for some self-adjoint operator,
$B$. What is $B$? (N.B.: by contrast, the $x^i$, introduced above, do not transform in a simple way under Lorentz transformations.) The Hilbert space of a free scalar field is now $\bigoplus_{n=0}^{\infty} \text{Sym}^n \mathcal{H}$. That’s perhaps not the easiest way to get there. But it is a way …

#### Update:

Yike! Well, that went south pretty fast. For the first time (ever, I think) I’m closing comments on this one, and calling it a day. To summarize, for those who still care:

1. There is a decomposition of the Hilbert space of a free scalar field as $\mathcal{H}_\phi = \bigoplus_{n=0}^{\infty} \mathcal{H}_n$, where $\mathcal{H}_n = \text{Sym}^n \mathcal{H}$ and $\mathcal{H}$ is the 1-particle Hilbert space described above (also known as the spin-$0$, mass-$m$, irreducible unitary representation of Poincaré).

2. The Hamiltonian of the free scalar field is the direct sum of the induced Hamiltonians on $\mathcal{H}_n$, induced from the Hamiltonian, $H = \sqrt{\vec{p}^2+m^2}$, on $\mathcal{H}$. In particular, it (along with the other Poincaré generators) is block-diagonal with respect to this decomposition.

3. There are other interesting observables which are also block-diagonal with respect to this decomposition (i.e., don’t change the particle number), and hence we can discuss their restriction to $\mathcal{H}_n$.

Gotta keep reminding myself why I decided to forswear blogging…

## March 18, 2017

### Clifford V. Johnson - Asymptotia

BBC CrowdScience SXSW Panel!

They recorded one of the panels I was on at SXSW as a 30 minute episode of the BBC World Service programme CrowdScience!
The subject was science and the movies, and it was a lot of fun, with some illuminating exchanges. I had some fantastic co-panellists: Dr. Mae Jemison (the astronaut, doctor, and chemical engineer), Professor Polina Anikeeva (she researches materials science and engineering at MIT), and Rick Loverd (director of the Science and Entertainment Exchange), and we had an excellent host, Marnie Chesterton. It has aired now, but in case you missed it, here is a link to the site where you can listen to our discussion. The post BBC CrowdScience SXSW Panel! appeared first on Asymptotia.

### ZapperZ - Physics and Physicists

Minutephysics's "How To Teleport Schrodinger's Cat"

It used to be that Minute Physics videos were roughly.... a minute long. But that is no longer true. Here, he tackles quantum entanglement by trying an illustration of teleporting the infamous Schrodinger's Cat. I'm sorry, but how many of you managed to follow this? I think I'll stick to my "Quantum Entanglement for Dummies". :) Zz.

### Lubos Motl - string vacua and pheno

Particles' wave functions always spread superluminally

It's been almost a week since we discussed Jacques Distler's confusion about some basics of quantum field theory. He posts several blog posts a year, a quantum field theory course is probably the only one he teaches, and he was "driven up the wall" by a point that almost every good introductory textbook makes at the very beginning. I expected that within a day or two, he would post a detailed text with the derivations saying "Oops, I've been silly [for 50 years]". It just didn't happen. He still insists that the one-particle truncation of a quantum field theory is perfectly consistent and causal. In particular, he repeated many times in his blog post (search for the word "superluminal") that the relativistically modified Schrödinger's equation for one particle (with a square root) guarantees that the wave packets never spread faster than the speed of light. Oops, it's just too bad.
By these comments, Jacques says that he is ignorant about many things that I (and my instructors) considered basics of quantum field theory since I was an undergraduate, such as:

1. The special theory of relativity and quantum mechanics are consistent, but their combination is constraining and has some unavoidable consequences – some basic general properties of quantum field theories.
2. Consistent relativistic quantum mechanical theories guarantee that objects capable of emitting a particle are necessarily able to absorb it as well, and vice versa.
3. For particles that are charged in any way, the existence of antiparticles becomes an unavoidable consequence of relativity and quantum mechanics.
4. Probabilities of processes (e.g. cross sections) that involve these antiparticles are guaranteed to be linked to probabilities involving the original particles via crossing symmetry or its generalizations.
5. The pair production of particles and antiparticles becomes certain when energy $$E\gg m$$ is available or when fields are squeezed at distances $$\ell \ll 1/m$$ (much) shorter than the Compton wavelength.
6. Only observables constructed from quantum fields may be attributed to regions of the Minkowski spacetime so that they're independent from each other at spacelike separations (because they commute or anticommute).
7. Wave functions that are functions of "positions of particles" unavoidably allow propagation that exceeds the speed of light and there can't be any equation that bans it. The causal propagation only applies to quantum fields (the observables), not to wave functions of particles' positions.
8. Equivalently, almost all trajectories of particles that contribute to the Feynman path integral are superluminal and non-differentiable almost everywhere, and this fact can't be avoided by any relativistic version of the mathematical expressions.
Causality is only obtained by a combination of emission and absorption, contributions from particles and antiparticles, and at the level of quantum fields (observables). It's a lot of basic stuff that Jacques should know but instead, he doesn't know it and these insights drive him up the wall. Let's look at those things. The most well-defined disagreement is about the "relativistically corrected" Schrödinger equation$i\hbar\frac{\partial}{\partial t} \psi = c \sqrt{m^2c^2-\hbar^2\Delta}\, \psi + V(x)\, \psi$ You see that it's like the usual one-particle equation except that the non-relativistic formula for the kinetic energy, $$E=|\vec p|^2/2m$$, is replaced by the relativistic one, $$E=\sqrt{|\vec p|^2+m^2}$$, with the same Laplacian (times $$-\hbar^2$$) substituted for $$|\vec p|^2$$. Jacques believes that when you substitute a localized wave packet for $$\psi(x,y,z)$$ at $$t=0$$ and you wait for time $$t'$$, it will only spread to the ball of radius $$t'$$ away from the original region: it will never propagate superluminally. Search for "superluminally" in his blog post and comments. Oops, it's wrong and embarrassingly wrong. I think that the simplest way to see why he's wrong is to realize that the equation above still has the usual non-relativistic limit. As long as you guarantee that $$|\vec p| \ll m$$ in the $$c=\hbar=1$$ units, the evolution of the wave packets must be well approximated by non-relativistic physics and the non-relativistic Schrödinger equation. Consider an actual electron moving around a nucleus. In the hydrogen atom, the motion is basically non-relativistic. Consider an initial localized wave packet for the electron that has a uniform phase, is much larger than the Compton wavelength $$\hbar/mc\approx 2.4\times 10^{-12}\,{\rm m}$$ (it's simply $$1/m$$ in the $$c=\hbar=1$$ units) but still smaller than the radius of the atom. For example, the radius of the packet is $$10^{-11}$$ meters. Outside a sphere of this radius, the wave function is zero.
Will this wave packet spread superluminally? You bet. By construction, the average speed is about an order of magnitude lower than the speed of light, which is reasonably non-relativistic. So with a 1% accuracy (squared speed), and aside from the irrelevant phase linked to the additional additive shift $$E_0=mc^2$$ to the energy, the wave packet will spread as if it followed the non-relativistic Schrödinger equation$i\hbar\frac{\partial}{\partial t} \psi = -\hbar^2\frac{\Delta}{2m} \psi + V(x)\, \psi$ Let's set $$V(x)=0$$. OK, how do the wave packets spread according to the ordinary Schrödinger equation? Let's ask Ron Maimon – any good autodidact can answer such questions. Well, it's simple: the Schrödinger equation is just a diffusion (or heat) equation where the main parameter is imaginary. If $$m$$ above were imaginary, $$m=i\mu$$, then the solution to the diffusion equation would be$\rho(x,t)\equiv \psi(x,t) = \frac{\sqrt{\mu}}{\sqrt{2\pi t}} \exp(-\mu x^2/t)$ The width of the Gaussian packet goes like $$\Delta x\sim \sqrt{t/\mu}$$. It's very simple. If you know the graph of the square root, you must know that the speed is initially very high. The speed $$dx/dt$$ scales like the derivative of the square root of time, i.e. as $$1/\sqrt{t\mu}$$. For times shorter than $$1/\mu$$, the speed with which the wave packet spreads unavoidably exceeds the speed of light. It's kosher that we're looking at timescales shorter than the "Compton time scale" of the electron. We only assumed that the spatial size of the wave packet is longer than the Compton wavelength. Whether an analogous scaling is obeyed by the dependence on time depends on the equation itself and the answer is clearly No. The asymmetric treatment of space and time in the equation (the square root is only used for the spatial derivatives) may be partly blamed for that asymmetry. Just to be sure, all the scalings are the same for the value of $$\mu=-im$$ that is imaginary.
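This scaling is easy to check on a computer. The sketch below is my own (1D, $$\hbar=c=m=1$$, FFT-based free evolution): take a sharply localized Gaussian, close to the delta-function limit of the heat-kernel argument above, and measure how fast its width grows under the ordinary Schrödinger equation.

```python
import numpy as np

# Numeric sketch (mine, not from the post; hbar = c = m = 1): a Gaussian
# packet localized well below the Compton wavelength 1/m spreads, under
# the ordinary free Schrodinger equation, at a speed far above c = 1.

m = 1.0
N, L = 4096, 40.0
dx = L / N
x = (np.arange(N) - N // 2) * dx
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

sigma0 = 0.05                                   # initial width << 1/m
psi = np.exp(-x**2 / (4 * sigma0**2)).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)     # normalize

def width(w):
    """Standard deviation of the position probability density."""
    rho = np.abs(w)**2 * dx
    mean = np.sum(x * rho)
    return np.sqrt(np.sum((x - mean)**2 * rho))

t = 0.1
phase = np.exp(-1j * k**2 / (2 * m) * t)        # free evolution, momentum rep
psi_t = np.fft.ifft(phase * np.fft.fft(psi))

v_spread = (width(psi_t) - width(psi)) / t      # comes out around 10 times c
```

For a Gaussian of initial width $$\Delta x_0$$ the exact result is $$\Delta x(t)=\Delta x_0\sqrt{1+(t/2m\Delta x_0^2)^2}$$, so the late-time spreading speed approaches $$1/2m\Delta x_0$$, which exceeds $$c$$ precisely when the packet starts out smaller than half the Compton wavelength.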
If you don't feel sure that our non-relativistic approximation was adequate for the question, I can give you a stronger weapon: the exact solution of the equation (Schrödinger's equation with the square root). What is it? Well, it's nothing else than the retarded Green's function – as taught in the context of the quantum Klein-Gordon field. Look e.g. at page 7 of these lectures by Gonsalves in Buffalo. The retarded function is the matrix element of the evolution operator for the one-particle Hilbert space$G_{\rm ret}(x-x') = \langle x,y,z| \exp(H(t-t')/i) |x',y',z'\rangle.$ When the particle is initially (a delta function) at the position $$(x',y',z')$$ at time $$t'$$ and you wait for time $$t-t'$$, i.e. you evolve it by the square-root-based Hamiltonian up to the moment $$t$$, and you ask what will be the amplitude at the position $$(x,y,z)$$, the answer is nothing else than the retarded Green's function of the difference between the two four-vectors. Can the retarded Green's functions be analytically calculated? As long as you include Bessel functions among your "analytically allowed tools", the answer is Yes. If we set the four-vector $$x'=0$$ to zero, the retarded Green's function is simply$G_{\rm ret}(x) = \theta(t) \left[ \frac{ \delta( x^\mu x_\mu ) }{2\pi} - \frac{m}{4\pi}J_1 (mx^\mu x_\mu ) \right]$ For small and large timelike or spacelike separation, the Bessel function of the first kind used in the expression is asymptotically an odd function of its argument and behaves as (the sign is OK for positive arguments)$J_n(z) \sim \left\{ \begin{array}{cc} \frac{1}{n!} \left( \frac{z}{2} \right)^n&{\rm for}\,\, |z|\ll 1 \\ \sqrt{\frac{2}{\pi z}} \cos\left( z- \frac{(2n+1)\pi}{4} \right) & {\rm for}\,\,|z|\gg 1 \end{array} \right.$ But another lesson of the calculation is that the Green's function is nonzero even for $$x^\mu x_\mu$$ negative, i.e.
spacelike separation – although it decreases roughly as $$\exp(-m|x|)$$ over there if you redefine the normalization by the factor of $$2E$$ in the momentum space (which is a non-local transformation in the position space). See the last displayed equation on page 2 of Gonsalves: Relativistic Causality: Quantum mechanics of a single relativistic free point particle is inconsistent with the principle of relativity that signals cannot travel faster than the speed of light. The probability amplitude for a particle of mass $$m$$ to travel from position $${\bf r}_0$$ to $${\bf r}$$ in a time interval $$t$$ is$U(t) = \bra{{\bf r}} e^{-iHt} \ket{{\bf r}_0} = \bra{{\bf r}} e^{-i\sqrt{{\bf p}^2+m^2}t} \ket{{\bf r}_0}\sim\\ \sim \exp(-m\sqrt{{\bf r}^2-t^2}),\quad {\rm for}\,\,{\rm spacelike}\,\, {\bf r}^2\gt t^2$ Gonsalves also quotes "particle creation and annihilation" and "spin-statistics connection" as the other two unavoidable consequences of a consistent union of quantum mechanics and special relativity. He refers you to Chapter 2 of Peskin-Schroeder to learn these things from a well-known source. OK, you might ask, what's the right modification of the wave equation for one particle that guarantees that the wave packet never spreads luminally? There is none. The condition that the packet never spreads superluminally would violate the uncertainty principle, a fundamental postulate of quantum mechanics. Why is it so? I can give you a simple idea. If you compress the particle to a small region, $$\Delta x \ll 1/m$$, much smaller than the Compton wavelength, the uncertainty principle unavoidably says $$\Delta p \gg m$$, so the motion is ultrarelativistic.
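Both the quoted $$J_1$$ asymptotics and the spacelike $$\exp(-m|x|)$$ falloff can be sketched with scipy. For the falloff I use the equal-time two-point function $$D(r)=\frac{m}{4\pi^2 r}K_1(mr)$$ familiar from chapter 2 of Peskin-Schroeder; $$m=1$$ and the sample radii are arbitrary choices made here:

```python
import numpy as np
from scipy.special import j1, k1

# --- Asymptotics of J_1, the n = 1 case of the formula above ---
z_small = 1e-3
approx_small = z_small / 2                        # (1/n!)(z/2)^n with n = 1
z_large = 50.0
approx_large = np.sqrt(2 / (np.pi * z_large)) * np.cos(z_large - 3 * np.pi / 4)
print(j1(z_small), approx_small)                  # agree up to ~z^3 corrections
print(j1(z_large), approx_large)                  # agree up to O(1/z) corrections

# --- Spacelike falloff of the equal-time two-point function ---
# D(r) = m K_1(m r) / (4 pi^2 r), which decays like exp(-m r) up to prefactors.
m = 1.0
def D(r):
    return m * k1(m * r) / (4 * np.pi**2 * r)

rate = np.log(D(10.0) / D(20.0)) / 10.0           # log-decay rate over 10 Compton lengths
print(rate)                                       # close to m = 1
```

The measured decay rate comes out slightly above $$m$$ because of the power-law prefactors, which is the "roughly as $$\exp(-m|x|)$$" of the text.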
You could think that $$\Delta p\gg m$$ or $$p\gg m$$ is still consistent with $$v\leq 1$$ but the evolved wave packets are unavoidably far from those that minimize the product of uncertainties and as the Bessel mathematics above shows, the piece in the spacelike region just can't exactly vanish, basically due to the non-local character of the operators. Similar derivations could be made with the help of the Feynman path integral. The typical trajectories contributing to the Feynman propagator are superluminal and non-differentiable almost everywhere and this fact does hold even in the calculation of the propagators in quantum field theory, a relativistic theory. As I discussed in a blog post in 2012, the superluminal or non-differentiable nature of generic paths in the path integral is needed for Feynman's formalism to be compatible with the uncertainty principle. Recall that we have solved a paradox: the calculation of $$xp-px$$ in the path integral should amount to the insertion of the classical integrand $$xp-px$$ to the path integral but this classical insertion is zero. The paradox was resolved thanks to the generic paths' being non-differentiable: the time ordering of $$x(t)$$ and $$p(t\pm \epsilon)$$ mattered. So does quantum field theory prevent you from sending signals to spacelike-separated regions? And how is it achieved? Yes, quantum field theory perfectly prohibits any propagation of signals superluminally or over spacelike separations. It does so by using the quantum fields. Quantum fields such as $$\Phi(x,y,z,t)$$ and functions of them and their derivatives are associated with spacetime points and they commute or anticommute with each other when the separation is spacelike. The zero commutator means that you may measure them simultaneously – that the decision to measure one doesn't influence the other or that the order of the two measurements is inconsequential. 
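The non-differentiable, "infinitely fast at short resolution" character of generic paths mentioned above is the usual Brownian scaling $$\Delta x\sim\sqrt{\Delta t}$$. A Monte Carlo sketch (the diffusion constant and sample sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_speed(dt, n=200_000):
    # Brownian increments dx ~ Normal(0, sqrt(dt)); "speed" read off at resolution dt.
    dx = rng.normal(0.0, np.sqrt(dt), n)
    return np.mean(np.abs(dx)) / dt

# Refining the time step 100x raises the apparent speed ~10x:
# |dx|/dt ~ 1/sqrt(dt) diverges as dt -> 0, so the paths have no finite velocity.
v_coarse = mean_speed(1e-2)
v_fine = mean_speed(1e-4)
print(v_coarse, v_fine, v_fine / v_coarse)
```

The same $$\sqrt{\Delta t}$$ scaling is what makes the ordering of $$x(t)$$ and $$p(t\pm\epsilon)$$ matter in the path integral.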
Just to be sure, the vanishing commutator doesn't mean that these spacelike-separated measurements are never correlated. They may be correlated but correlation doesn't mean causation. They're only correlated if the correlation (mathematically described as entanglement within quantum mechanics) follows from the previous contact of the two subsystems that have evolved or moved to the spacelike-separated points. The point is that the outcomes themselves may be correlated but the human decisions – e.g. which polarization is measured on one photon – do not influence the statistics for the other photon itself at all. The existence of the "collapse" associated with the first measurement doesn't change the odds for the second measurement – although if you know the result into which the first measurement "collapsed", you must refine your predictions for the outcome of the second measurement because a correlation/entanglement could have been present. OK, how does this vanishing of the spacelike-separated commutators agree with the fact that the packets spread superluminally? On page 27 of Peskin-Schroeder, you may see that the "commutator Green's function" is a difference between two ordinary Green's functions and because those two are equal in the spacelike region, the value just cancels in the spacelike region. But again, the Fourier transform of the ordinary propagator such as $$1/(p^2-m^2+i\epsilon)$$ does not vanish in the spacelike regions of the 4-vector $$x^\mu$$. It cannot vanish because this position space propagator knows about the correlation of fields at two points of space. And the fields in nearby, spacelike-separated points are correlated, of course (very likely to be almost equal), especially if they are closer than the Compton wavelength. You may view this correlation as a result of the escaping of high-momentum or high-energy quanta to infinity.
Only low-momentum or low-energy quanta are left in the vacuum and its low-energy excitations – and because of the Fourier relationship of $$x$$ and $$p$$, this absence of high-energy quanta means that the quantum fields can't depend on the spatial coordinates too much. You know, the message is that the ban on superluminal signals is compatible with quantum mechanics but the creation and annihilation of particles must be unavoidably allowed when you reconcile these two principles, special relativity and quantum mechanics. Jacques Distler believes that relativistic causality works even in "QFT truncated to the one-particle Hilbert space", which simply isn't right. He's really misunderstanding the key reason why quantum field theory was needed at all. Try to calculate the expectation value of the commutator of two fields $$F(x)$$ and $$G(y)$$ at two spacelike-separated points $$x,y$$. The fields $$F,G$$ may be the Klein-Gordon $$\Phi$$ itself or some bilinear constructed out of it, e.g. the component of a current $$J^0$$ that Distler talks about at some point. Imagine that you're calculating this commutator. You first expand $$F,G$$ in terms of $$\Phi$$ and its derivatives. Then you insert the expansions of $$\Phi$$ in terms of the creation and annihilation operators. And you know the expectation values of the type $$\bra 0 \Phi(x)\Phi(y) \ket 0$$. When you time-order $$x,y$$, it's just the usual propagator in the position space. The precise calculation will depend on the operators you choose but a general point is true: there will be lots of individual terms that are nonzero for spacelike $$x-y$$. Only if you sum all these terms – which pick creation operators from $$F$$ and annihilation operators from $$G$$, and vice versa – can you achieve the cancellation. In particular, if you consider the operators $$F,G \sim J^0$$, those will contain terms of the type $$a^\dagger a$$ as well as $$b^\dagger b$$ for a field whose particles and antiparticles differ.
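This cancellation pattern can be made concrete in a toy model: for a free scalar on a periodic lattice, the c-number equal-time commutator of two full fields vanishes at nonzero separation, while the "positive-frequency half" of the same mode sum – the analogue of keeping only one creation/annihilation structure – does not. The lattice size, mass and separation below are arbitrary choices:

```python
import numpy as np

N, m, sep = 64, 1.0, 7          # lattice sites, mass, spatial separation (choices)
k = 2 * np.pi * np.arange(N) / N
E = np.sqrt(m**2 + (2 * np.sin(k / 2))**2)    # standard lattice dispersion

# c-number value of the equal-time commutator [phi_i, phi_j] at separation sep:
# sum_k (e^{ik sep} - e^{-ik sep}) / (2 N E_k), which vanishes by k -> -k symmetry.
full = np.sum((np.exp(1j * k * sep) - np.exp(-1j * k * sep)) / (2 * N * E))

# Keeping only the "positive-frequency" half, the analogue of an operator with a
# fixed creation/annihilation structure, leaves a nonzero spacelike remainder:
pos_only = np.sum(np.exp(1j * k * sep) / (2 * N * E))

print(abs(full), abs(pos_only))
```

The full mode sum cancels exactly, while the truncated one is of order $$e^{-m\cdot{\rm sep}}$$ – small but strictly nonzero, matching the statement that operators with a fixed creation-annihilation content cannot be exactly local.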
Only if you include the correlators from both particles and antiparticles propagating between the points $$x,y$$ may you get a cancellation of the commutator (of its expectation value). In other words, the fact that a quantum field is capable of both creating a particle and annihilating an antiparticle (which is the same for "real" fields) is absolutely vital for its ability to commute with spacelike-separated colleagues! This insight may be formulated in yet another equivalent way. You just can't construct a localized – relativistically causally well-behaved – field operator at a given point that would only contain terms of a given creation-annihilation schematic type, e.g. only $$a^\dagger a$$ but no $$b^\dagger b$$, only $$a^\dagger$$ but no $$b$$, and so on. Any operator that has a well-defined "number of particles of each type that it creates or annihilates" is unavoidably "non-local" and can't exactly commute with its spacelike-separated counterparts! If you wanted to study the truncation of the quantum field theory to a one-particle Hilbert space where the number of particles is $$N=1$$, and the number of antiparticles (and all other particle species) is zero, then all "first-quantized" operators on your Hilbert space correspond to some combination of operators of the $$a_k^\dagger a_m$$ form. You annihilate one particle and create one particle. But no such combination of operators may be strictly confined to a region so that it would commute with itself at spacelike separation. Students who have carefully done some basic calculations in quantum field theory know this fact from many "happy cancellations" that weren't obvious for some time. For example, consider the quantized electromagnetic field. Write the total energy as$H = \int d^3 x\,\frac{1}{2}\zav{B^2+ E^2},$ i.e. the integral of the electric and magnetic energy density.
Substitute $$\vec A$$ and its derivatives for $$\vec B,\vec E$$, and write $$A$$ and its derivatives in terms of creation and annihilation operators for photons. So you will get terms of the form $$a^\dagger a$$, $$aa$$, and $$a^\dagger a^\dagger$$. At the end, the total Hamiltonian only contains the terms of the $$a^\dagger a$$ "mixed" type but this simplified form is only obtained once you integrate over $$\int d^3 x$$ which makes the terms $$a a$$ and $$a^\dagger a^\dagger$$ vanish because of their oscillating dependence on $$x$$. If you only write the energy density itself, it will unavoidably contain the operators of the type $$aa$$ and $$a^\dagger a^\dagger$$ – annihilating or creating two photons – too. And the terms of all these forms are equally important for the quantum field to be well-behaved, especially for the vanishing of its commutators at spacelike separations. The broader lesson is that important principles of physics are ultimately reconcilable but the reconciliation is often non-trivial and implies insights, principles, and processes that didn't seem to unavoidably follow from the principles separately. So the combination of relativity and quantum mechanics implies the basic phenomena of quantum field theory – antiparticles, pair production, the inseparability of creation and annihilation, spin-statistics relations, and a few other things. In the same way, perhaps a more extreme one, the unification of quantum mechanics and general relativity is possible but any consistent theory obeying both principles has to respect some qualitative features we know from quantum gravity – as exemplified by string theory, probably the only possible precise definition of a consistent theory of quantum gravity. In particular, black holes must carry a finite entropy, be practically indistinguishable from heavy particle species, and such heavy particle species must exist. 
The processes around black holes and those involving elementary particles are unavoidably linked by some UV-IR relationships and string theory's modular invariance is the most explicit known example (or toy model?) of such relationships. In combination, the known important principles of physics are far more constraining than the principles are separately and they imply that the "kind of a theory we need" or even "the precise theory" is basically unique. This strictness is ultimately good news. If it didn't exist, we would be drowning in the infinite field of possibilities. Because of the "bonus" strictness resulting from the combination of important principles of physics, we know that a theory combining quantum mechanics and special relativity must work like quantum field theory and a theory that also respects gravity as in general relativity has to be string/M-theory.

## March 17, 2017

### Tommaso Dorigo - Scientificblogging

Five New Charmed Baryons Discovered By LHCb! While I was busy reporting the talks at the "Neutrino Telescope" conference in Venice, LHCb released a startling new result, which I don't have time to describe in much detail this evening (it's Friday evening here in Italy and I'm going to call the week off), and yet wish to share with you as soon as possible. The spectroscopy of low- and intermediate-mass hadrons (whatever this means) is a complex topic which either enthuses particle physicists or bores them to death. There are two reasons for this dichotomous behaviour.

### Symmetrybreaking - Fermilab/SLAC

Q&A: Dark matter next door? Astrophysicists Eric Charles and Mattia Di Mauro discuss the surprising glow of our neighbor galaxy. Astronomers recently discovered a stronger-than-expected glow of gamma rays at the center of the Andromeda galaxy, the nearest major galaxy to the Milky Way.
The signal has fueled hopes that scientists are zeroing in on a sign of dark matter, which is five times more prevalent than normal matter but has never been detected directly. Researchers believe that gamma rays—a very energetic form of light—could be produced when hypothetical dark matter particles decay or collide and destroy each other. However, dark matter isn’t the only possible source of the gamma rays. A number of other cosmic processes are known to produce them. So what do Andromeda’s gamma rays really tell us about dark matter? To find out, Symmetry’s Manuel Gnida talked with Eric Charles and Mattia Di Mauro, two members of the Fermi-LAT collaboration—an international team of researchers that found the Andromeda gamma-ray signal using the Large Area Telescope, a sensitive “eye” for gamma rays on NASA’s Fermi Gamma-ray Space Telescope. Both researchers are based at the Kavli Institute for Particle Astrophysics and Cosmology, a joint institute of Stanford University and the Department of Energy’s SLAC National Accelerator Laboratory. The LAT was conceived of and assembled at SLAC, which also hosts its operations center.

KIPAC researchers Eric Charles and Mattia Di Mauro. (Photo: Dawn Harmer, SLAC National Accelerator Laboratory)

### Have you discovered dark matter?

MD: No, we haven’t. In the study, the LAT team looked at the gamma-ray emissions of the Andromeda galaxy and found something unexpected, something we don’t fully understand yet. But there are other potential astrophysical explanations than dark matter. It’s also not the first time that the LAT collaboration has studied Andromeda with Fermi, but in the old data the galaxy only looked like a big blob. With more data and improved data processing, we have now obtained a much clearer picture of the galaxy’s gamma-ray glow and how it’s distributed.

### What’s so unusual about the results?

EC: As a spiral galaxy, Andromeda is similar to the Milky Way.
Therefore, we expected the emissions of both galaxies to look similar. What we discovered is that they are, in fact, quite different. In our galaxy, gamma rays come from all kinds of locations—from the center and the spiral arms in the outer regions. For Andromeda, on the other hand, the signal is concentrated at the center. ### Why do galaxies glow in gamma rays? EC: The answer depends on the type of galaxy. There are active galaxies called blazars. They emit gamma rays when matter in close orbit around supermassive black holes generates jets of plasma. And then there are “normal” galaxies like Andromeda and the Milky Way that produce gamma rays in other ways. When we look at the emissions of the Milky Way, the galaxy appears like a bright disk, with the somewhat brighter galactic center at the center of the disk. Most of this glow is diffuse and comes from the gas between the stars that lights up when it’s hit by cosmic rays—energetic particles spit out by star explosions or supernovae. Other gamma-ray sources are the remnants of such supernovae and pulsars—extremely dense, magnetized, rapidly rotating neutron stars. These sources show up as bright dots in the gamma-ray map of the Milky Way, except at the center where the density of gamma-ray sources is high and the diffuse glow of the Milky Way is brightest, which prevents the LAT from detecting individual sources. Andromeda is too far away to see individual gamma-ray sources, so it only has a diffuse glow in our images. But we expected to see most of the emissions to come from the disk as well. Its absence suggests that there is less interaction between gas and cosmic rays in our neighbor galaxy. Since this interaction is tied to the formation of stars, this also suggests that Andromeda had a different history of star formation than the Milky Way. The sky in gamma rays with energies greater than 1 gigaelectronvolts, based on eight years of data from the LAT on NASA’s Fermi Gamma-ray Space Telescope. 
NASA/DOE/Fermi LAT Collaboration ### What does all this have to do with dark matter? MD: When we carefully analyze the gamma-ray emissions of the Milky Way and model all the gas and point-like sources to the best of our knowledge, then we’re left with an excess of gamma rays at the galactic center. Some people have argued this excess could be a telltale sign of dark matter particles. We know that the concentration of dark matter is largest at the galactic center, so if there were a dark matter signal, we would expect it to come from there. The localization of gamma-ray emissions at Andromeda’s center seems to have renewed the interest in the dark matter interpretation in the media. ### Is dark matter the most likely interpretation? EC: No, there are other explanations. There are so many gamma-ray sources at the galactic center that we can’t really see them individually. This means that their light merges into an extended, diffuse glow. In fact, two recent studies from the US and the Netherlands have suggested that this glow in the Milky Way could be due to unresolved point sources such as pulsars. The same interpretation could also be true for Andromeda’s signal. ### What would it take to know for certain? MD: To identify a dark matter signal, we would need to exclude all other possibilities. This is very difficult for a complex region like the galactic center, for which we don’t even know all the astrophysical processes. Of course, this also means that, for the same reason, we can’t completely rule out the dark matter interpretation. But what’s really important is that we would want to see the same signal in a few different places. However, we haven’t detected any gamma-ray excesses in other galaxies that are consistent with the ones in the Milky Way and Andromeda. This is particularly striking for dwarf galaxies, small companion galaxies of the Milky Way that only have few stars. These objects are only held together because they are dominated by dark matter. 
If the gamma-ray excess at the galactic center were due to dark matter, then we should have already seen similar signatures in the dwarf galaxies. But we don’t.

### ZapperZ - Physics and Physicists

DOE's Office Of Science Faces Disastrous Cuts

The first Trump budget proposal presents a major disaster for scientific funding, especially for the DOE Office of Science budget. President Donald Trump's first budget request to Congress, to be released at 7 a.m. Thursday, will call for cutting the 2018 budget of the National Institutes of Health (NIH) by $6 billion, or nearly 20%, according to sources familiar with the proposal. The Department of Energy's (DOE's) Office of Science would lose $900 million, or nearly 20% of its $5 billion budget. The proposal also calls for deep cuts to the research programs at the Environmental Protection Agency (EPA) and the National Oceanic and Atmospheric Administration (NOAA), and a 5% cut to NASA's earth science budget. And it would eliminate DOE's roughly $300 million Advanced Research Projects Agency-Energy. I don't know in what sense this will make America "great again". It is certainly not in science, that's for sure. Zz.

## March 16, 2017

### ZapperZ - Physics and Physicists

Born Rule Confirmed To An Even Tighter Bound

I must say that I might have missed this paper if Chad Orzel didn't mention it in his article. Here, he highlighted a paper by Kauten et al. from New Journal of Physics (open access) that performed a 5-slit interference test with the purpose of detecting any higher-order interference beyond that predicted by the Born rule. They found none, and imposed a tighter bound on any higher-order effects. As Orzel reported: That's what the NJP paper linked above is about. One of the ways you might get the Born rule from some deeper principle would be to have it be merely an approximation to some more fundamental structure.
That, in turn, might very well involve a procedure other than "squaring" the wavefunction to get the probability of various measurement outcomes. In which case, you would expect to see some higher-order contributions to the probability-- the wavefunction cubed, say, or to the fourth power. . . . Sadly, for fans of variant models of quantum probability, what they actually do is the latter. They don't see any deviation from the ordinary Born rule, and can say with confidence that all the higher-order contributions are zero, to something like a hundredth of a percent. Of course, this won't stop the continuation of the search, because that is what we do. But it is amazing that QM has withstood numerous challenges throughout its history. Zz. ### Emily Lakdawalla - The Planetary Society Blog Trump's first budget proposal is out. Here's how NASA fared NASA escaped a large-scale budget slash, and planetary science fared well. ARM is canceled, the Moon-versus-Mars debate is not mentioned, and Earth science stands to lose some missions. ### Lubos Motl - string vacua and pheno LHCb discovers five $$css$$ bound states at once The LHCb detector is way smaller and cheaper than its fat ATLAS and CMS siblings. But it doesn't mean that it can't discover cool things – and many things. The letter $$b$$ refers to the bottom quark. It's often said that the bottom quark is the best path towards the research of CP-violation and similar things. But for some reasons, the LHCb managed to discover five new particles without any bottom quark – at once: The collaboration proudly tweeted about the new discovery and linked to their new paper, Observation of five new narrow $$\Omega^0_c$$ states decaying to $$\Xi^+_c K^−$$ You may count the new peaks on the graph above. If you haven't forgotten some rather rudimentary number theory, you know that the counting goes as follows: One, two, three, four, five. 
TRF contains new stuff to learn for everybody, including those who would consider any mathematics exam unconstitutional and inhuman. ;-) They identify the bound states of the Omega baryon according to the decay products that they can label reliably enough. These new charmed neutral Omega baryons (the quark content is $$css$$, like the cascading style sheets) decay to a positive charmed Xi baryon (whose quark content is $$usc$$ or $$ucs$$, if you agree that the acronym shouldn't be reserved by a corrupt Union of Concerned Scientists and Anthony Watts' dog) and the negative kaon $$K^-$$ (quark content: $$\bar u s$$, thanks, Bill). Well, the positive charmed Xi baryon decays to $$p K^- \pi^+$$ and those are really well-known everyday animals for the LHCb scientists. The new $$css$$ bound states are narrow resonances – which means that the decay rate is slow (the width is small) enough. You may consider them excited states of the same particle or different particles. Which of those is better is a somewhat subjective issue. The excited states of a hydrogen atom are clearly "the same particle" because the transitions between them are the most common processes and involve a truly neutral, peaceful photon (which is "almost nothing", especially when it comes to charges). But these excited states of the $$css$$ quarks are strongly interacting and it's rather easy for these beasts to create quark-antiquark pairs, in this case an up-antiup pair, and divide all the quarks differently. These processes are actually more frequent than a simple emission of a photon. So the excited states don't change into each other so automatically and they may be considered distinct entities although they're really built from the same ingredients, just like different excited states of a hydrogen atom. You can imagine how people had to be thrilled in the 1960s when such new particles were discovered frequently and the innocent physicists actually believed that those were elementary particles.
However, in the late 1960s, quarks were proposed, and in the early 1970s, QCD was written down. Before QCD, physicists were willing to believe that they live in a paradise with hundreds of exotic elementary particle species or that these numerous particles were proving that Nature was lifting Herself by Her own bootstraps. At some moment, physicists devoured the QCD apple and their feeling of mystery and submission faded away. Those are just some additional boring bound states of six quark flavors and their antiquarks, aren't they? Why so much ado? And that's where we are. Lots of the childish excitement is gone, our previous emotions look a bit silly and scientifically naive, and when we want to look for the truly deep signs of Nature's mysteries, we know that we must dig deeper than to discover five new baryons (at once). Off-topic: Dr Sheldon Cooper, the boy (Iain Armitage, a theater critic), interviewed another Sheldon a year ago. The spin-off of TBBT could be fun. And if you asked me, I find this whole elaborate scheme with Greek letters labeling the QCD ground states to be an anachronism. I would replace symbols like $$\Omega_c^0$$ by $$css$$ – note that both require three characters – and perhaps add some extra labels when needed. For example, these five excitations may be labeled $$3000,3050,3066,3090,3119$$, which are their masses in units of $$1\MeV$$. With this modernized notation, we could reserve the precious Greek letters for something more mysterious, for something that still sounds Greek to us. And I am not talking about Greek economic and immigration policies, which should be represented by characters such as f%&*^*g s#&*t. But I may be wrong and those baryons may be fundamentally important. And even if they're not, it's important that physicists don't forget the craft that their predecessors were so good at half a century ago.
It's like not forgetting how to make and listen to classical music or anything of the sort that suddenly faced lots of competition attempting to steal a big part of the people's attention.

## March 15, 2017

### The n-Category Cafe

Functional Equations VI: Using Probability Theory to Solve Functional Equations

A functional equation is an entirely deterministic thing, such as$f(x + y) = f(x) + f(y)$ or$f(f(f(x))) = x$ or$f\left(\cos\left(e^{f(x)}\right)\right) + 2x = \sin\left(f(x+1)\right).$ So it’s a genuine revelation that one can solve some functional equations using probability theory — more specifically, the theory of large deviations. This week and next week, I’m explaining how. Today (pages 22-25 of these notes) was mainly background:

• an introduction to the theory of large deviations;
• an introduction to convex duality, which Simon has written about here before;
• how the two can be combined to get a nontrivial formula for sums of powers of real numbers.

Next time, I’ll explain how this technique produces a startlingly simple characterization of the $p$-norms.

## March 14, 2017

### Symmetrybreaking - Fermilab/SLAC

The life of an accelerator

As it evolves, the SLAC linear accelerator illustrates some important technologies from the history of accelerator science. Tens of thousands of accelerators exist around the world, producing powerful particle beams for the benefit of medical diagnostics, cancer therapy, industrial manufacturing, material analysis, national security, and nuclear as well as fundamental particle physics. Particle beams can also be used to produce powerful beams of X-rays.
Many of these particle accelerators rely on artfully crafted components called cavities. The world’s longest linear accelerator (also known as a linac) sits at the Department of Energy’s SLAC National Accelerator Laboratory. It stretches two miles and accelerates bunches of electrons to very high energies. The SLAC linac has undergone changes in its 50 years of operation that illustrate the evolution of the science of accelerator cavities. That evolution continues and will determine what the linac does next. Illustration by Corinne Mucha ### Robust copper An accelerator cavity is a mostly closed, hollow chamber with an opening on each side for particles to pass through. As a particle moves through the cavity, it picks up energy from an electromagnetic field stored inside. Many cavities can be lined up like beads on a string to generate higher and higher particle energies. When SLAC’s linac first started operations, each of its cavities was made exclusively from copper. Each tube-like cavity consisted of a 1-inch-long, 4-inch-wide cylinder with disks on either side. Technicians brazed together more than 80,000 cavities to form a straight particle racetrack. Scientists generate radiofrequency waves in an apparatus called a klystron that distributes them to the cavities. Each SLAC klystron serves a 10-foot section of the beam line. The arrival of the electron bunch inside the cavity is timed to match the peak in the accelerating electric field. When a particle arrives inside the cavity at the same time as the peak in the electric field, then that bunch is optimally accelerated. “Particles only gain energy if the variable electric field precisely matches the particle motion along the length of the accelerator,” says Sami Tantawi, an accelerator physicist at Stanford University and SLAC. 
“The copper must be very clean and the shape and size of each cavity must be machined very carefully for this to happen.” In its original form, SLAC’s linac boosted electrons and their antimatter siblings, positrons, to an energy of 50 billion electronvolts. Researchers used these beams of accelerated particles to study the inner structure of the proton, which led to the discovery of fundamental particles known as quarks. Today almost all accelerators in the world—including smaller systems for medical and industrial applications—are made of copper. Copper is a good electric conductor, which is important because the radiofrequency waves build up an accelerating field by creating electric currents in the cavity walls. Copper can be machined very smoothly and is cheaper than other options, such as silver. “Copper accelerators are very robust systems that produce high acceleration gradients of tens of millions of electronvolts per meter, which makes them very attractive for many applications,” says SLAC accelerator scientist Chris Adolphsen. Today, one-third of SLAC’s original copper linac is used to accelerate electrons for the Linac Coherent Light Source, a facility that turns energy from the electron beam into what is currently the world’s brightest X-ray laser light. Researchers continue to push the technology to higher and higher gradients—that is, larger and larger amounts of acceleration over a given distance. “Using sophisticated computer programs on powerful supercomputers, we were able to develop new cavity geometries that support almost 10 times larger gradients,” Tantawi says. “Mixing small amounts of silver into the copper further pushes the technology toward its natural limits.” Cooling the copper to very low temperatures helps as well. Tests at 45 Kelvin—negative 384 degrees Fahrenheit—have been shown to increase acceleration gradients 20-fold compared to SLAC’s old linac. Copper accelerators have their limitations, though.
SLAC’s historic linac produces 120 bunches of particles per second, and recent developments have led to copper structures capable of firing 80 times faster. But for applications that need much higher rates, Adolphsen says, “copper cavities don’t work because they would melt.” Illustration by Corinne Mucha ### Chill niobium For this reason, crews at SLAC are in the process of replacing one-third of the original copper linac with cavities made of niobium. Niobium can support very large bunch rates, as long as it is cooled. At very low temperatures, it is what’s known as a superconductor. “Below the critical temperature of 9.2 Kelvin, the cavity walls conduct electricity without losses, and electromagnetic waves can travel up and down the cavity many, many times, like a pendulum that goes on swinging for a very long time,” says Anna Grassellino, an accelerator scientist at Fermi National Accelerator Laboratory. “That’s why niobium cavities can store electromagnetic energy very efficiently and can operate continuously.” You can find superconducting niobium cavities in modern particle accelerators such as the Large Hadron Collider at CERN and the CEBAF accelerator at Thomas Jefferson National Accelerator Facility. The European X-ray Free-Electron Laser in Germany, the European Spallation Source in Sweden, and the Facility for Rare Isotope Beams at Michigan State University are all being built using niobium technology. Niobium cavities also appear in designs for the next-generation International Linear Collider. At SLAC, the niobium cavities will support LCLS-II, an X-ray laser that will produce up to a million ultrabright light flashes per second. The accelerator will have 280 cavities, each about three feet long with a 3-inch opening for the electron beam to fly through. Sets of eight cavities will be strung together into cryomodules that keep the cavities at a chilly 2 Kelvin, which is colder than interstellar space. 
Each niobium cavity is made by fusing together two halves stamped from a sheet of pure metal. The cavities are then cleaned very thoroughly because even the tiniest impurities would degrade their performance. The shape of the cavities is reminiscent of a stack of shiny donuts. This is to maximize the cavity volume for energy storage and to minimize its surface area to cut down on energy dissipation. The exact size and shape also depend on the type of accelerated particle. “We’ve come a long way since the first development of superconducting cavities decades ago,” Grassellino says. “Today’s niobium cavities produce acceleration gradients of up to about 50 million electronvolts per meter, and R&D work at Fermilab and elsewhere is further pushing the limits.” Illustration by Corinne Mucha ### Hot plasma Over the past few years, SLAC accelerator scientists have been working on a way to push the limits of particle acceleration even further: accelerating particles using bubbles of ionized gas called plasma. Plasma wakefield acceleration is capable of creating acceleration gradients that are up to 1000 times larger than those of copper and niobium cavities, promising to drastically shrink the size of particle accelerators and make them much more powerful. “These plasma bubbles have certain properties that are very similar to conventional metal cavities,” says SLAC accelerator physicist Mark Hogan. “But because they don’t have a solid surface, they can support extremely high acceleration gradients without breaking down.” Hogan’s team at SLAC and collaborators from the University of California, Los Angeles, have been developing their plasma acceleration method at the Facility for Advanced Accelerator Experimental Tests, using an oven of hot lithium gas for the plasma and an electron beam from SLAC’s copper linac. Researchers create bubbles by sending either intense laser light or a high-energy beam of charged particles through plasma. 
They then send beams of particles through the bubbles to be accelerated. When, for example, an electron bunch enters a plasma, its negative charge expels plasma electrons from its flight path, creating a football-shaped cavity filled with positively charged lithium ions. The expelled electrons form a negatively charged sheath around the cavity. This plasma bubble, which is only a few hundred microns in size, travels at nearly the speed of light and is very short-lived. On the inside, it has an extremely strong electric field. A second electron bunch enters that field and experiences a tremendous energy gain. Recent data show possible energy boosts of billions of electronvolts in a plasma column of just a little over a meter. “In addition to much higher acceleration gradients, the plasma technique has another advantage,” says UCLA researcher Chris Clayton. “Copper and niobium cavities don’t keep particle beams tightly bundled and require the use of focusing magnets along the accelerator. Plasma cavities, on the other hand, also focus the beam.” Much more R&D work is needed before plasma wakefield accelerator technology can be turned into real applications. But it could represent the future of particle acceleration at SLAC and of accelerator science as a whole. ## March 13, 2017 ### Tommaso Dorigo - Scientificblogging Posts On Neutrino Experiments, Day 1 The first day of the Neutrino Telescopes XVII conference in Venice is over, and I would like to point you to some short summaries that I published for the conference blog, at http://neutel11.wordpress.com. Specifically: - a summary of the talk on Super-Kamiokande - a summary of the talk on SNO - a summary of the talk on KamLAND - a summary of the talk on K2K and T2K - a summary of the talk on Daya Bay You might have noticed that the above experiments were recipients of the 2016 Breakthrough prize in physics. In fact, the session was specifically focusing on these experiments for that reason. 
read more ### Tommaso Dorigo - Scientificblogging The Formidable Neutrino Elementary particles are mysterious and unfathomable, and it takes giant accelerators and incredibly complex devices to study them. In the last 100 years we have made great strides in the investigations of the properties of quarks, leptons, and vector bosons, but I would be lying if I said we know half of what we would like to. In science, the opening of a door reveals others, closed by more complicated locks - and there is no clearer example of this than the investigation of subatomic matter. read more ### CERN Bulletin GAC-EPA The GAC organises sessions with individual interviews, held on the last Tuesday of each month, except in June, July and December. The next session will take place on: Tuesday 28 March, from 1.30 pm to 4.00 pm, in the Staff Association meeting room. The following sessions will be held on Tuesdays 25 April, 30 May, 29 August, 26 September, 31 October and 28 November 2017. The sessions of the Pensioners' Group are open to beneficiaries of the Pension Fund (including surviving spouses) and to all those approaching retirement. We warmly invite the latter to join our group by obtaining the necessary documents from the Staff Association. Information: http://gac-epa.org/. Contact form: http://gac-epa.org/Organization/ContactForm/ContactForm-fr.php ### CERN Bulletin Cine club ### Thursday 16 March 2017 at 20:00 CERN Council Chamber # Fire Directed by Deepa Mehta Canada / India, 1996, 104 minutes Sita and Radha are two Indian women stuck in loveless marriages. While Sita is trapped in an arranged relationship with her cruel and unfaithful husband, Jatin, Radha is married to his brother, Ashok, a religious zealot who believes in suppressing desire. 
As the two women recognize their similar situations, they grow closer, and their relationship becomes far more involved than either of them could have anticipated. Original version English / Hindi; English subtitles # La souriante Madame Beudet Directed by Germaine Dulac France, 1923, 26 minutes One of the first feminist movies, The Smiling Madame Beudet is the story of an intelligent woman trapped in a loveless marriage. Her husband is used to playing a stupid practical joke in which he puts an empty revolver to his head and threatens to shoot himself. One day, while the husband is away, she puts bullets in the revolver. However, she is stricken with remorse and tries to retrieve the bullets the next morning. Her husband gets to the revolver first, only this time he points it at her. Silent In collaboration with the Women In Technology Community ### Wednesday 22 March 2017 at 20:00 CERN Council Chamber # Sans toit ni loi / Vagabond Directed by Agnès Varda France, 1985, 105 minutes A young woman's body is found frozen in a ditch. Through flashbacks and interviews, we see the events that led to her inevitable death. Original version French; English subtitles In collaboration with the Women In Technology Community ### CERN Bulletin Open Day at EVE and School of CERN Staff Association: an opportunity for many parents to discover the facility. On Saturday, 4 March 2017, the Children’s Day-Care Centre EVE and School of CERN Staff Association opened its doors to allow interested parents to visit the facility. Staff Association - Carole Dargagnon presents the EVE and school during the open day. This event was a great success and brought together many families. 
The Open Day was held in two sessions (first session at 10 am and second at 11 am), each consisting of two parts: • a general presentation of the structure by the Headmistress Carole Dargagnon, • a tour of the installations with Marie-Luz Cavagna and Stéphanie Palluel, the administrative assistants. The management team was delighted to offer parents the opportunity to participate in this pleasant event, where everyone could express themselves, ask questions and find answers in a friendly atmosphere. ### CERN Bulletin “VICO”, Visiting Colleagues “Hello, I am your delegate” – have you heard this line? Maybe you have already had the pleasure of receiving a visit from a Staff Association delegate – then you know what this is all about. As for those of you who have not yet heard these words, it’s time to get curious. The Staff Association has decided to embark upon an adventure called “VICO”, Visiting Colleagues. From past experience, we have understood the value of personal, direct contact with the people we represent. We believe that the best way to achieve this is to knock on your office door and pay you a short visit. We do not want to make you fill in yet another online questionnaire and would much rather collect your feedback in a short conversation face to face. Of course, we have prepared ourselves thoroughly for these visit rounds, because we do not want to waste your time. We welcome criticism because it can make us aware of our shortcomings, tell us how you perceive our work, and help us improve when needed. So, after a friendly introduction at your office door, and taking just a brief moment of your time, we will merely ask you a few questions. We are always eager to hear your opinions on different topics. You will hear the “Hello, I am your delegate” from mid-March to mid-June, whenever your delegate has the chance to pass by your office. We know that realistically we will not succeed in visiting all of you. 
This should be even more of a reason for you to accept these short polite intrusions as a privilege to talk to your delegate in person: they are all yours for a few minutes, so please speak your mind. We are looking forward to some enlightening field trips, as we come see you in your offices. We hope that these visits will enhance mutual understanding and help build stronger ties with you through sharing ideas and opinions. But rest assured, these visits are not your only chance to meet us: if you happen to run into a delegate in the corridor, an understanding smile and even a coffee between colleagues would surely be welcome! We are all working together, so let us learn from each other. ### Lubos Motl - string vacua and pheno Jacques Distler vs some QFT lore Young physicists in Austin, be careful about some toxic junk in your city Three weeks ago, in an article titled Responsibility, physicist Jacques Distler of UT Austin mentioned a statement by Sasha Polyakov that he was "responsible" for quantum field theory. That comment was particularly relevant when Distler taught an undergraduate particle physics course and was frustrated by the following: The textbooks (and I mean all of them) start off by “explaining” that relativistic quantum mechanics (e.g. replacing the Schrödinger equation with Klein-Gordon) make no sense (negative probabilities and all that …). And they then proceed to use it anyway (supplemented by some Feynman rules pulled out of thin air). This drives me up the fúçkïñg wall. It is precisely wrong. There is a perfectly consistent quantum mechanical theory of free particles. The problem arises when you want to introduce interactions. Does the following text defend the legitimacy of Distler's frustration? Well, partly... but I would pick the answer No if I had to. What's going on? 
Indeed, textbooks and instructors often – and, according to some measures, always – say that quantum mechanics of one particle ceases to behave well once you switch to relativity – to theories covariant under the Lorentz transformations. Are these statements right? Are they wrong? And are the correct statements one can make important? It depends what exact statements you have in mind. What Distler discusses is the existence of the Hilbert space – and Hamiltonian – for one particle, e.g. the Klein-Gordon particle. Does it exist? You bet. If you believe that a Hilbert space of particles exists in free quantum field theory, do the following: Write the basis vectors of that Hilbert space as Fock-space states, i.e. in terms of the basis vectors of the form $$a^\dagger_{\vec k_1} \cdots a^\dagger_{\vec k_n} \ket 0$$ and simply pick those basis vectors that contain exactly one creation operator. This one-particle subspace of the Hilbert space will evolve to itself under the empty-spacetime evolution operators. In fact, if you write the basis in the momentum basis as I did, the Hamiltonian for one real quantum of the real Klein-Gordon equation will be simply $$H = \sqrt{|\vec k|^2 + m^2}.$$ This is something you may derive from quantum field theory. The operator above is perfectly well-defined in the momentum space. The energy is non-negative, the norms of states are positive, everything works fine. So has Distler shown that all the statements of the type "one particle isn't consistent in relativistic quantum mechanics" are wrong? Nope, he hasn't. In particular, he was talking about the statement ...replacing the [non-relativistic, e.g. one-particle] Schrödinger equation with Klein-Gordon make[s] no sense... But this statement is right at the level of one-particle quantum mechanics because his equation for the evolution of the wave function is not the Klein-Gordon equation. 
You know, the Klein-Gordon equation is $$\left(\frac{\partial^2}{\partial t^2} - \frac{\partial^2}{\partial x^2} - \frac{\partial^2}{\partial y^2} - \frac{\partial^2}{\partial z^2} + m^2 \right) \Phi = 0.$$ That's a nice, local, genuinely differential equation. On the other hand, the replacement for the non-relativistic Schrödinger equation $$i\hbar\frac{\partial}{\partial t} \psi = -\frac{\hbar^2}{2m} \Delta \psi + V(x) \psi$$ that he derived and that describes the evolution of one-particle states was $$i\hbar\frac{\partial}{\partial t} \psi = c \sqrt{m^2c^2-\hbar^2\Delta}\,\psi + V(x) \psi.$$ Because the square root has a neverending Taylor expansion, this function of the Laplace operator is a terribly non-local "integral operator" acting on the wave function $$\psi(x,y,z,t)$$ in the position representation. So this equation for one particle, even though it follows from the Klein-Gordon quantum field theory, doesn't have the nice and local Klein-Gordon form. It isn't pretty and it isn't fundamental. If you wrote this equation in isolation, you should be worried that the resulting theory isn't relativistic because relativity implies locality and this equation allows a localized wave packet to spread superluminally! What the statements mean is that if you want to use some nice and local equation for a wave function for one particle – i.e. if you literally want to replace Schrödinger's equation by the similar Klein-Gordon equation – you won't find a way to construct (in terms of local functions of derivatives etc.) the probability current and density etc. that would have the desired positivity properties etc. And this statement is just true and important! If you want to return to simple, fundamental, justifiable, beautiful equations, you can indeed use the Klein-Gordon, Dirac, Maxwell, and other equations. But you must appreciate that they're equations for (field) operators, not for wave functions. 
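To see concretely that $$H = \sqrt{|\vec k|^2 + m^2}$$ is a perfectly well-defined (if non-local) evolution law, one can diagonalize it in momentum space numerically. A minimal sketch with NumPy, in $$\hbar=c=1$$ units; the grid sizes and unit mass are illustrative choices of mine:

```python
# Evolve a one-particle wave function with H = sqrt(k^2 + m^2),
# which is diagonal in momentum space (hbar = c = 1). The operator
# is non-local in position space, but the evolution is unitary.
import numpy as np

def evolve(psi0, dx, m, t):
    """Apply exp(-i H t) via FFT; H = sqrt(k^2 + m^2) in k-space."""
    k = 2 * np.pi * np.fft.fftfreq(len(psi0), d=dx)
    energy = np.sqrt(k**2 + m**2)      # non-negative, as the text says
    return np.fft.ifft(np.exp(-1j * energy * t) * np.fft.fft(psi0))

N, dx, m = 1024, 0.05, 1.0
x = (np.arange(N) - N // 2) * dx
psi0 = np.exp(-x**2).astype(complex)   # localized Gaussian packet
psi0 /= np.sqrt(np.sum(np.abs(psi0)**2) * dx)

psi_t = evolve(psi0, dx, m, t=2.0)
norm = np.sum(np.abs(psi_t)**2) * dx   # stays 1: the evolution is unitary
```

The energies are non-negative and the norm is conserved, exactly as claimed; the price is that $$\exp(-iHt)$$ is not a local differential operator in $$x$$.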
This statement is important because it's not just a mathematical one. It's highly physical, too. In particular, if you consider any relativistic quantum mechanical theory of particles – quantum field theory or something grander, like string theory – it's unavoidable that when you confine particles to a distance shorter than the Compton wavelength $$\hbar / mc$$ of that particle, you will have enough energy so that particle-antiparticle pairs will start to be produced at nonzero probabilities. And in relativity, it's normal for a particle to move at a speed comparable to the speed of light, and then its wavelength is comparable to the Compton wavelength. You can't really trust the one-particle theory at distances comparable to its normal de Broglie wavelength! So the theory is wrong in some very strong sense. The antiparticles (which are the same as the original particle in the real Klein-Gordon case, just to be sure) inevitably follow from relativity combined with quantum mechanics, and so does the pair production of particles and antiparticles. This physical statement has lots of nearly equivalent mathematical manifestations. For example, local observables in a relativistic quantum theory have to be constructed out of quantum fields. So the 1-particle Hilbert space doesn't have any truly local observables: You can't construct the Klein-Gordon field $$\Phi(x,y,z,t)$$ out of operators acting on the 1-particle Hilbert space because the latter operators never change the number of particles while $$\Phi(x,y,z,t)$$ does (by one or minus one – it's a combination of creation and annihilation operators). 
In fact, you can't construct the bilinears in $$\Phi$$ and/or its derivatives, either, because while those operators in QFT contain some terms that preserve the number of particles, they also contain equally important terms that change the number of particles by two (particle-antiparticle pair production or pair annihilation) and those are equally important for obtaining the right commutators and other things. The mixing of creation operators for particles and the annihilation operators for antiparticles is absolutely unavoidable if you want to define observables at points (or regions smaller than the Compton wavelength). There's one more statement that Distler made and that is really wrong. Distler wrote that the problems only begin when you start to consider interactions – and from the context, it's clear that he meant interactions involving several quanta of quantum fields, several particles in the quantum field theory sense. But that's not true. Problems of "one-particle relativistic quantum mechanics" already appear if you consider the behavior of the single particle in external classical fields. Just squeeze a Klein-Gordon particle – e.g. a Higgs boson – in between two metallic plates whose distance is sent to zero. Will it make sense? No, as I mentioned, the walls start to produce particle-antiparticle quanta in general. Time-dependent Hamiltonians lead to particle production, if you wish. Similarly, if you place these particles in any external classical field, the actual Klein-Gordon field may react in a way to create particle pairs. So the truncation of the Hilbert space of a quantum field theory to the one-particle subspace is inconsistent not only if you consider interactions of particles in the usual Feynman diagrammatic sense – but even if you consider the behavior of the particle in external classical fields. 
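The Compton-wavelength scale $$\hbar/mc$$ that sets the size of all these pair-production effects is easy to evaluate; a quick numerical check for the electron (rounded CODATA-style constants, my own illustration):

```python
# The (reduced) Compton wavelength hbar/(m c): below this distance,
# confining a particle costs enough energy to create pairs.
hbar = 1.054571817e-34    # J s
c = 2.99792458e8          # m / s
m_e = 9.1093837015e-31    # kg (electron mass)

compton_m = hbar / (m_e * c)   # ~3.9e-13 m for the electron
```

So for an electron the one-particle picture already becomes untrustworthy at sub-picometer distances, far above the Planck scale.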
Whatever you try to do with the particle that goes beyond the stupid simple single free-particle Hamiltonian will force you to acknowledge that the truncated one-particle theory is no good. We want to do something more with the theory than just write an unmotivated non-local Hamiltonian of the kind $$H\sim \sqrt{m^2+p^2}$$ if I use $$\hbar=c=1$$ units here. And as soon as we do anything else – justify this ugly and seemingly non-local (and therefore seemingly relativity-violating) Hamiltonian by an elegant theory, study particle interactions, study the behavior of one particle in external classical fields – we just need to switch to the full-blown quantum field theory, otherwise our musings will be inconsistent. One extra comment. I mentioned that the non-local differential operator allows the wave packet to spread superluminally. How is it possible that such a thing results from a relativistic theory? Well, quantum field theory has no problem with that because when you do any doable measurement, the processes in which a particle spreads in the middle get combined with processes involving antiparticles. When you calculate the "strength of influences spreading superluminally", some Green's functions – which are nonzero for spacelike separations – will combine into the "commutator correlation function" which vanishes at spacelike separation. So the inseparable presence of antiparticles will save the locality for you. The truncation to particles-only (without antiparticles) would indeed violate the locality required by relativity as long as you could experimentally verify it (you need at least some interactions of that particle with something else for that). While Jacques is right about the possibility to truncate the Hilbert space of quantum field theories to the one-particle subspaces, he's morally wrong about all these big statements – and some of his statements are literally wrong, too. 
At least morally, the lore that drives him up the wall is right and there are ways to formulate this lore so that it is both literally true and important, too. So students in Austin are encouraged to actively ignore their grumpy instructor's tirades against the quantum field theory lore and even more encouraged to understand in what sense the lore is true. As I explain in the comments, many quantum field theory textbooks have wonderful explanations – usually at the very beginning – of the wisdom that Jacques Distler seems to misunderstand, namely why quantum fields and the mixing of sectors with different numbers of particles are unavoidable for the consistency of quantum mechanics with special relativity. The 2008 textbook by my adviser Tom Banks starts the explanation on Page 3, in the section "Why quantum field theory?" It says that the probability amplitude for a particle emission at spacetime point $$x$$ and its absorption at point $$y$$ is unavoidably nonzero for spacelike separations. Because such an amplitude would only be nonzero for one of the two time orderings of $$x,y$$, and the ordering of spacelike-separated events isn't Lorentz-invariant, Lorentz invariance would be broken; one must actually demand that only amplitudes in which both orderings are summed over are allowed. In other words, as argued on page 5, the only known consistent way to resolve this clash with Lorentz invariance is to postulate that every emission source must also be able to act as an absorption sink and vice versa. When both terms are combined, the sum is still nonzero in the spacelike region but has no brutal discontinuities when the ordering gets reversed. Also, when the particle carries charges, the emission and absorption in the two related processes must involve particles of opposite charges, and one predicts (and Dirac predicted) the existence of antiparticles that are needed for things to work. 
Weinberg QFT Volume 1 explains the negative probabilities and energies of the relativistic equations naively used instead of the non-relativistic Schrödinger equation on pages 7, 12, 15... Read it for a while. It's OK but, in my opinion, much less deep than Tom's presentation. Peskin's and Schroeder's textbook on quantum field theory discusses the non-vanishing of the amplitudes in the spacelike region on page 14, and pages 27-28 discuss that the actual influence of one measurement on another is measured by the commutator of two field operators. And that vanishes for spacelike separations – again, because two processes that are opposite to each other are subtracted. Without the mixing of creation operators (for particles) and annihilation operators (for antiparticles), you just can't define any observables that would belong to a point or a region and that would behave relativistically (respecting the independence of observables that are spacelike separated). Quantum fields are the only known way to avoid this conflict between quantum mechanics and relativity. They are unavoidably superpositions of positive- and negative-energy solutions, and therefore are expanded in sums of creation and annihilation operators. That's why all local discussions make it necessary to allow emission and absorption at the same time – and, consequently, the combination of quantum mechanics and relativity makes it necessary to consider the whole Fock space with a variable number of particles. The one-particle truncation is inconsistent with relativistic dynamics such as time-dependent interactions, emission, or absorption. In the mathematical language, fields and their functions are necessary for any local observables in relativistic quantum mechanical theories. They always contain terms that change the number of particles – except for the trivial constant operator $$1$$. 
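The cancellation described here can be summarized in one standard formula for the free real scalar field (the form found in the textbooks cited above): each of the two terms below is separately nonzero at spacelike separation, but their difference, the field commutator, is a Lorentz-invariant function that is odd under $$(x-y)\to(y-x)$$ and therefore vanishes whenever $$x-y$$ is spacelike, $$[\Phi(x),\Phi(y)] = \int \frac{d^3 k}{(2\pi)^3 \, 2\omega_{\vec k}} \left( e^{-ik\cdot(x-y)} - e^{+ik\cdot(x-y)} \right), \qquad \omega_{\vec k} = \sqrt{|\vec k|^2+m^2}.$$ For spacelike $$x-y$$ there exists a Lorentz transformation mapping $$(x-y)\to -(x-y)$$, so the two terms – the emission and the absorption process – exactly cancel.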
In the physical language, relativity and quantum mechanics simultaneously imply that emission and absorption are linked, that antiparticles exist, and that scattering amplitudes for particles and antiparticles have to obey identities such as the crossing symmetry. The teaching of a quantum field theory course could be a good opportunity for Jacques to learn this basic stuff that is often presented on pages such as 3, 5, 7, 12, 14... of introductory textbooks. ### John Baez - Azimuth Restoring the North Cascades Ecosystem In 49 hours, the National Park Service will stop taking comments on an important issue: whether to reintroduce grizzly bears into the North Cascades near Seattle. If you leave a comment on their website before then, you can help make this happen! Follow the easy directions here: http://theoatmeal.com/blog/grizzlies_north_cascades Please go ahead! Then tell your friends to join in, and give them this link. This can be your good deed for the day. But if you want more details: Grizzly bears are traditionally the apex predator in the North Cascades. Without the apex predator, the whole ecosystem is thrown out of balance. I know this from my childhood in northern Virginia, where deer are stripping the forest of all low-hanging greenery with no wolves to control them. With the top predator, the whole ecosystem springs to life and starts humming like a well-tuned engine! For example, when wolves were reintroduced in Yellowstone National Park, it seems that even riverbeds were affected: There are several plans to restore grizzlies to the North Cascades. On the link I recommended, Matthew Inman supports Alternative C — Incremental Restoration. I’m not an expert on this issue, so I went ahead and supported that. There are actually 4 alternatives on the table: Alternative A — No Action. They’ll keep doing what they’re already doing. 
The few grizzlies already there would be protected from poaching, the local population would be advised on how to deal with grizzlies, and the bears would be monitored. All other alternatives will do these things and more. Alternative B — Ecosystem Evaluation Restoration. Up to 10 grizzly bears will be captured from source populations in northwestern Montana and/or south-central British Columbia and released at a single remote site on Forest Service lands in the North Cascades. This will take 2 years, and then they’ll be monitored for 2 years before deciding what to do next. Alternative C — Incremental Restoration. 5 to 7 grizzly bears will be captured and released into the North Cascades each year over roughly 5 to 10 years, with a goal of establishing an initial population of 25 grizzly bears. Bears would be released at multiple remote sites. They can be relocated or removed if they cause trouble. Alternative C is expected to reach the restoration goal of approximately 200 grizzly bears within 60 to 100 years. Alternative D — Expedited Restoration. 5 to 7 grizzly bears will be captured and released into the North Cascades each year until the population reaches about 200, which is what the area can easily support. So, pick your own alternative if you like! By the way, the remaining grizzly bears in the western United States live within six recovery zones: • the Greater Yellowstone Ecosystem (GYE) in Wyoming and southwest Montana, • the Northern Continental Divide Ecosystem (NCDE) in northwest Montana, • the Cabinet-Yaak Ecosystem (CYE) in extreme northwestern Montana and the northern Idaho panhandle, • the Selkirk Ecosystem (SE) in northern Idaho and northeastern Washington, • the Bitterroot Ecosystem (BE) in central Idaho and western Montana, • and the North Cascades Ecosystem (NCE) in northwestern and north-central Washington. 
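For a sense of scale (my own back-of-the-envelope exponential model, not the Park Service's projection): growing from Alternative C's initial 25 bears to the roughly 200-bear goal over 60 to 100 years only requires an average growth rate of about 2 to 3.5 percent per year.

```python
# Implied average annual growth rate for Alternative C: 25 bears
# growing to ~200 over 60-100 years, assuming simple exponential growth.
initial, goal = 25, 200

def implied_annual_growth(years):
    return (goal / initial) ** (1 / years) - 1

slow = implied_annual_growth(100)   # ~2.1% per year over a century
fast = implied_annual_growth(60)    # ~3.5% per year over six decades
```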
The North Cascades Ecosystem consists of 24,800 square kilometers in Washington, with an additional 10,350 square kilometers in British Columbia. In the US, 90% of this ecosystem is managed by the US Forest Service, the US National Park Service, and the State of Washington, and approximately 41% falls within Forest Service wilderness or the North Cascades National Park Service Complex. For more, read this: • National Park Service, Draft Grizzly Bear Restoration Plan / Environmental Impact Statement: North Cascades Ecosystem. The picture of grizzlies is from this article: • Ron Judd, Why returning grizzlies to the North Cascades is the right thing to do, Pacific NW Magazine, 23 November 2015. If you’re worried about reintroducing grizzly bears, read it! The map is from here: • Krista Langlois, Grizzlies gain ground, High Country News, 27 August 2014. Here you’ll see the huge obstacles this project has overcome so far. ## March 12, 2017 ### Clifford V. Johnson - Asymptotia Some Panellists… My quick trip to South by Southwest was fruitful, and fun. I was in three events. This* was the group for the panel that was hosted by Rick Loverd, who directs the Science and Entertainment Exchange. We had lots of great discussion about Science in Film, TV, and other entertainment media: - Why it is important to make films more engaging with richer storytelling, to help build broader familiarity with science and scientists, and so on. There were insights from both sides of the "aisle": I spoke about the kind of work I do in this area, coming from the science side of things, and Samantha Corbin-Miller and Stephany Folsom discussed things from their points of view as writers of TV and film. (I was pleasantly surprised to learn that I'd recently (last Summer) looked at Stephany's work in detail: She wrote the upcoming movie Thor: Ragnarok, and I had studied and written notes on the screenplay and met with the production team and director to give them some help [...] 
Click to continue reading this post The post Some Panellists… appeared first on Asymptotia. ### ZapperZ - Physics and Physicists The Weak Nuclear Force I'm going to highlight this latest video by Fermilab's Don Lincoln for a number of reasons. First, the video: Second, this is one video packed with a number of very important and illuminating ideas. First, he explains the concept of "spin" in both the classical and quantum picture. This is important because to many people who do not study physics, the word "spin" conjures up a certain idea that is not correct when applied to quantum mechanics. So this video will hopefully clarify the idea a bit. But what is more fascinating here is his brief historical overview of the first proposal of the connection between the weak interaction and spin, and how Chien Shiung Wu should have received the Nobel Prize for this with Yang and Lee. This might be another case of gender bias that prevented a brilliant Chinese female physicist from receiving a prize she deserved. Considering the time that she lived in and the societal and cultural obstacles that she had to overcome, she simply had to be just too outstanding to be able to get to where she was. So this is one terrific video all around, and you get to learn a bit about the weak interaction to boot! Zz. ### Jon Butterworth - Life and Physics ### Lubos Motl - string vacua and pheno A stringy interview with Petr Hořava Giotis has pointed out that the Czech Public Radio recorded a 15-minute English-language interview with Czech string theorist Petr Hořava while he was visiting his old homeland. I hope that this cutely simple HTML5 audio tag with the MP3 file works for everybody. For years, Petr has been working at Berkeley. He's well-known as the co-author of the Hořava-Witten "M-theory on spaces with boundaries" that carry the $$E_8$$ gauge supermultiplet, as they demonstrated. He was also one of the several forefathers of D-branes in the late 1980s. 
More recently, he inspired the Hořava-Lifshitz theories of gravity that try to start with a theory invariant under non-relativistic – Galilean – symmetries. He was also given the Neuron Prize, a Czech science award. In the interview, he talks about string theory – that, and why, it's the only game in town – what it may explain, what it modifies, what it doesn't, to what extent string theory has been established, etc. I think that I would agree with everything he said. Maybe I would prefer a more optimistic tone but that's a different question. ;-) ## March 10, 2017 ### Symmetrybreaking - Fermilab/SLAC A strength test for the strong force New research could tell us about particle interactions in the early universe and even hint at new physics. Much of the matter in the universe is made up of tiny particles called quarks. Normally it’s impossible to see a quark on its own because they are always bound tightly together in groups. Quarks only separate in extreme conditions, such as immediately after the Big Bang, in the centers of stars, or during high-energy particle collisions generated in particle colliders. Scientists at Louisiana Tech University are working on a study of quarks and the force that binds them by analyzing data from the ATLAS experiment at the LHC. Their measurements could tell us more about the conditions of the early universe and could even hint at new, undiscovered principles of physics. The particles that stick quarks together are aptly named “gluons.” Gluons carry the strong force, one of four fundamental forces in the universe that govern how particles interact and behave. The strong force binds quarks into particles such as protons, neutrons and atomic nuclei. 
As its name suggests, the strong force is the strongest—it’s 100 times stronger than the electromagnetic force (which binds electrons into atoms), 10,000 times stronger than the weak force (which governs radioactive decay), and a hundred million million million million million million (10^39) times stronger than gravity (which attracts you to the Earth and the Earth to the sun). But this ratio shifts when the particles are pumped full of energy. Just as real glue loses its stickiness when overheated, the strong force carried by gluons becomes weaker at higher energies. “Particles play by an evolving set of rules,” says Markus Wobisch from Louisiana Tech University. “The strength of the forces and their influence within the subatomic world changes as the particles’ energies increase. This is a fundamental parameter in our understanding of matter, yet has not been fully investigated by scientists at high energies.” Characterizing the cohesiveness of the strong force is one of the key ingredients to understanding the formation of particles after the Big Bang and could even provide hints of new physics, such as hidden extra dimensions. “Extra dimensions could help explain why the fundamental forces vary dramatically in strength,” says Lee Sawyer, a professor at Louisiana Tech University. “For instance, some of the fundamental forces could only appear weak because they live in hidden extra dimensions and we can’t measure their full strength. If the strong force is weaker or stronger than expected at high energies, this tells us that there’s something missing from our basic model of the universe.” By studying the high-energy collisions produced by the LHC, the research team at Louisiana Tech University is characterizing how the strong force pulls energetic quarks into encumbered particles. The challenge they face is that quarks are rambunctious and caper around inside the particle detectors. 
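The energy dependence described above ("the strong force carried by gluons becomes weaker at higher energies") follows the standard one-loop running of the strong coupling. Here is a minimal numerical sketch; the inputs (alpha_s(M_Z) ≈ 0.118 and five quark flavors) are my own illustrative assumptions, not numbers from the article:

```python
import math

def alpha_s(q_gev, alpha_mz=0.118, mz_gev=91.19, n_flavors=5):
    """One-loop running of the strong coupling constant.

    Asymptotic freedom: the coupling shrinks logarithmically as the
    energy scale q grows.  Input values are illustrative.
    """
    b0 = (33 - 2 * n_flavors) / (12 * math.pi)
    return alpha_mz / (1 + alpha_mz * b0 * math.log(q_gev**2 / mz_gev**2))

# The coupling weakens from the Z mass (~91 GeV) up to LHC jet energies:
for q in (91.19, 500.0, 1500.0):
    print(f"alpha_s({q:7.2f} GeV) = {alpha_s(q):.4f}")
```

With these inputs the coupling drops from 0.118 at the Z mass to roughly 0.08 at 1.5 TeV, which is the sense in which "particles play by an evolving set of rules."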
This subatomic soirée involves hundreds of particles, often arising from about 20 proton-proton collisions happening simultaneously. It leaves a messy signal, which scientists must then reconstruct and categorize. Wobisch and his colleagues devised a new method to study these rowdy groups of quarks, called jets. By measuring the angles and orientations of the jets, he and his colleagues are learning important new information about what transpired during the collisions—more than what they can deduce by simply counting the jets. The average number of jets produced by proton-proton collisions directly corresponds to the strength of the strong force in the LHC’s energetic environment. “If the strong force is stronger than predicted, then we should see an increase in the number of proton-proton collisions that generate three jets. But if the strong force is actually weaker than predicted, then we’d expect to see relatively more collisions that produce only two jets. The ratio between these two possible outcomes is the key to understanding the strong force.” After turning on the LHC, scientists doubled their energy reach and have now determined the strength of the strong force up to 1.5 trillion electronvolts, which is roughly the average energy of every particle in the universe just after the Big Bang. Wobisch and his team are hoping to double this number again with more data. “So far, all our measurements confirm our predictions,” Wobisch says. “More data will help us look at the strong force at even higher energies, giving us a glimpse as to how the first particles formed and the microscopic structure of space-time.” ### The n-Category Cafe The Logic of Space Mathieu Anel and Gabriel Catren are editing a book called New Spaces for Mathematics and Physics, about all different kinds of notions of “space” and their applications. 
Among other things, there are chapters about smooth spaces, $\infty$-groupoids, topos theory, stacks, and various other things of interest to $n$-Café patrons, all of which I am looking forward to reading. There are chapters by our own John Baez about the continuum and Urs Schreiber about higher prequantum geometry. Here is my own contribution: I intend this to be my last effort at popularization of HoTT for some time, and accordingly it ended up being rather comprehensive. It begins with a 20-page introduction to type theory, from the perspective of a mathematician wanting to use it as an internal language for categories. There are many introductions to type theory, but probably not enough from this point of view, and moreover most popularizations of type theory are rather vague about its categorical semantics; thus I chose (with some additional prompting from the editors) to spend quite some time on this, and be fairly (though not completely) precise about exactly how the categorical semantics of type theory works. I also decided to emphasize the point of view that type theory (and “syntax” more generally) is a presentation of the initial object in some category of structured categories. Some category theorists respond to this by saying essentially “what good is it to describe that initial object in some complicated way, rather than just studying it categorically?” It’s taken me a while to be able to express the answer in a really satisfying way (at least, one that satisfies me), and I tried to do so here. The short version is that by explicitly constructing an object that has some universal property, we may learn more about it than we can conclude from the mere statement of its universal property. This is one of the reasons that topologists study classifying spaces, category theorists study classifying toposes, and algebraists study free groups. For a longer answer, read the chapter. 
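To give a concrete taste of "type theory as an internal language", here is a toy illustration in Lean 4; it is my own example, not taken from the chapter:

```lean
-- Toy illustration (mine, not from the chapter): in dependent type theory,
-- propositions are types and proofs are terms of those types.

-- An inhabitant of an identity type: `rfl` proves a term equals itself.
example : 2 + 2 = 4 := rfl

-- A dependent function: the output type is built from the input type.
def diag {α : Type} (a : α) : α × α := (a, a)

-- Path induction (`cases h`) collapses an equality to reflexivity,
-- giving symmetry of the identity type.
theorem mySymm {α : Type} {a b : α} (h : a = b) : b = a := by
  cases h
  rfl
```

Interpreted in a topos, such terms become morphisms, which is the "internal language" reading discussed above.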
After this introduction to ordinary type theory, but before moving on to homotopy type theory, I spent a while on synthetic topology: type theory treated as an internal language for a category of spaces (actual space-spaces, not $\infty$-groupoids). This seemed appropriate since the book is about all different kinds of space. It also provides a good justification of type theory’s constructive logic for a classical mathematician, since classical principles like the law of excluded middle and the axiom of choice are simply false in categories of spaces (e.g. a continuous surjection generally fails to have a continuous section). I also introduced some specific toposes of spaces, such as Johnstone’s topological topos and the toposes of continuous sets and smooth sets. I also mentioned their “local” or “cohesive” nature, and how it can be regarded as explaining why so many objects in mathematics come “naturally” with topological structure. Namely, because mathematics can be done in type theory, and thereby interpreted in any topos, any mathematical construction can be interpreted in a topos of spaces; and since the forgetful functor from a local/cohesive topos preserves most categorical operations, in most cases the “underlying set” of such an interpretation is what we would get by performing the same construction directly with sets. This also tells us in what circumstances we should expect a construction that takes account of topology to disagree with its analogue for discrete sets, and in what circumstances we should expect a set-theoretic construction to inherit a nontrivial topology even when there is no topological input; read the chapter for details. The subsequent introduction to homotopy type theory and synthetic homotopy theory has nothing particularly special about it, although I decided to downplay the role of “fibration categories” in favor of $(\infty,1)$-categories when talking about higher-categorical semantics. 
Current technology for constructing higher-categorical interpretations of type theory uses fibration categories, but I don’t regard that as necessarily essential, and future technology may move away from it. In particular, in addition to the intuition of identity types as path objects in a model category, I think it’s valuable to have a similar intuition for identity types as diagonal morphisms in an $(\infty,1)$-category. The last section brings everything together by discussing cohesive homotopy type theory, which is of course one of my current personal interests, modeling the local/cohesive structure of an $(\infty,1)$-topos with modalities inside homotopy type theory. As I’ve said before, I feel that this perspective greatly clarifies the distinction and relationship between space-spaces and $\infty$-groupoid “spaces”, with the connecting “fundamental $\infty$-groupoid” functor characterized by a simple universal property. Finally, in the conclusion I at last allowed myself some philosophical free rein to speculate about synthetic theories as foundations for mathematics, as opposed to simply internal languages for categories constructed in an ambient classical mathematics. Once we see that mathematics can be formulated in type theory to apply equally well in a category of spaces as in the category of sets, there is no particular reason to regard the category of sets as the “true” foundation and the category of spaces as “less foundational”. Just as we can construct a category of spaces from a category of sets by equipping sets with topological structure, we can construct a “category of sets” (i.e. a Boolean topos) from a “category of spaces” by restricting to the subcategory of objects with uninteresting topology (the discrete or codiscrete ones). Either category, therefore, can serve as an equally valid “reference frame” from which to describe mathematics. 
### ZapperZ - Physics and Physicists APS Endorses March for Science The American Physical Society has unanimously endorsed the upcoming March for Science. I'll be flying out of town on that exact day of the March, so I had decided a while back to simply contribute to it. I get the sentiment and the mission. However, I'm skeptical about the degree of impact that it will make. It will get publicity, and maybe focus public attention on some of the issues, especially funding in the physical sciences. But for it to take hold, it can't simply be a one-day event, and as much as I've involved myself in many outreach programs, I still see a lot of misinformation and ignorance among the public about science, and physics in particular. Here's something I've always wanted to do, but never followed through on and lack the resources to do. How about we do something similar to a family-tree genealogy? But instead of tracing human ancestors, we focus on a technology "family tree". I've always wanted to start with the iPhone capacitive touch screen. Trace back the technological and scientific roots of this component. I bet you there was a lot of materials science, engineering, and physics in the various patents, published papers, etc. that eventually gave birth to this touch screen. What it will do is show the public that what they have so gotten used to came out of very basic research in physics and engineering. We can even list all the funding agencies that were part of the direct line of "descendants" of the device and show them how money spent on basic science actually became a major component of our economy. By doing this, you don't beat around the bush. You TELL the public what they can actually get out of an investment in science with a concrete example. And it may come out of areas that they never made a connection to before. Zz. ### Clifford V. Johnson - Asymptotia Upcoming Panels at SXSW (Image credit: I borrowed this image from the SXSW website.) 
It seems that even after finishing the manuscript of the graphic book and turning it in to the publisher*, I can't get away from panels. It's a poor pun, to help make an opening line - I actually mean a different sort of panel. I'll be participating in two (maybe three) of them this Saturday at the South By SouthWest event in Austin, Texas. I'll give you details below, and if you happen to be around, come and see us! This means that I'll not get to see any of the actual conference itself since two (maybe three) events is enough to wipe out most of the day, and then I jump on a plane back to LA. They're about Science and the media. I'll be talking about the things I've [...] Click to continue reading this post The post Upcoming Panels at SXSW appeared first on Asymptotia. ### The n-Category Cafe Postdocs in Sydney Richard Garner writes: The category theory group at Macquarie is currently advertising a two-year Postdoctoral Research Fellowship to work on a project entitled “Enriched categories: new applications in geometry and logic”. Applications close 31st March. The position is expected to start in the second half of this year. More information can be found at the following link: http://jobs.mq.edu.au/cw/en/job/500525/postdoctoral-research-fellow Feel free to contact me with further queries. Richard Garner ## March 09, 2017 ### Marco Frasca - The Gauge Connection Quote of the day “Bad men need nothing more to compass their ends, than that good men should look on and do nothing.” John Stuart Mill Filed under: Quote ## March 07, 2017 ### Symmetrybreaking - Fermilab/SLAC Researchers face engineering puzzle How do you transport 70,000 tons of liquid argon nearly a mile underground? Nearly a mile below the surface of Lead, South Dakota, scientists are preparing for a physics experiment that will probe one of the deepest questions of the universe: Why is there more matter than antimatter? 
To search for that answer, the Deep Underground Neutrino Experiment, or DUNE, will look at minuscule particles called neutrinos. A beam of neutrinos will travel 800 miles through the Earth from Fermi National Accelerator Laboratory to the Sanford Underground Research Facility, headed for massive underground detectors that can record traces of the elusive particles. Because neutrinos interact with matter so rarely and so weakly, DUNE scientists need a lot of material to create a big enough target for the particles to run into. The most widely available (and cost effective) inert substance that can do the job is argon, a colorless, odorless element that makes up about 1 percent of the atmosphere. The researchers also need to place the detector full of argon far below Earth’s surface, where it will be protected from cosmic rays and other interference. “We have to transfer almost 70,000 tons of liquid argon underground,” says David Montanari, a Fermilab engineer in charge of the experiment’s cryogenics. “And at this point we have two options: We can either transfer it as a liquid or we can transfer it as a gas.” Either way, this move will be easier said than done. ### Liquid or gas? The argon will arrive at the lab in liquid form, carried inside of 20-ton tanker trucks. Montanari says the collaboration initially assumed that it would be easier to transport the argon down in its liquid form—until they ran into several speed bumps. Transporting liquid vertically is very different from transporting it horizontally for one important reason: pressure. The bottom of a mile-tall pipe full of liquid argon would have a pressure of about 3000 pounds per square inch—equivalent to 200 times the pressure at sea level. According to Montanari, to keep these dangerous pressures from occurring, multiple de-pressurizing stations would have to be installed throughout the pipe. Even with these depressurizing stations, safety would still be a concern. 
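The quoted pressure is easy to sanity-check with the hydrostatic formula P = ρgh; the density and depth below are my own rough figures, not numbers from the article:

```python
# Back-of-envelope check of the quoted ~3000 psi at the bottom of a
# mile-tall column of liquid argon (rho and depth are rough assumptions).
RHO_LIQUID_ARGON = 1400.0   # kg/m^3, near its boiling point
G = 9.81                    # m/s^2
DEPTH = 1480.0              # metres, "nearly a mile" underground

pressure_pa = RHO_LIQUID_ARGON * G * DEPTH   # hydrostatic pressure P = rho*g*h
pressure_psi = pressure_pa / 6894.76         # pascals -> pounds per square inch

print(f"{pressure_psi:.0f} psi")
```

That lands within a few percent of the article's "about 3000 pounds per square inch," or roughly 200 atmospheres.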
While argon is non-toxic, if released into the air it can reduce access to oxygen, much like carbon monoxide does in a fire. In the event of a leak, pressurized liquid argon would spill out and could potentially break its vacuum-sealed pipe, expanding rapidly to fill the mine as a gas. One liter of liquid argon would become about 800 liters of argon gas, or four bathtubs’ worth. Even without a leak, perhaps the most important challenge in transporting liquid argon is preventing it from evaporating into a gas along the way, according to Montanari. To remain a liquid, argon is kept below a brisk temperature of minus 180 degrees Celsius (minus 300 degrees Fahrenheit). “You need a vacuum-insulated pipe that is a mile long inside a mine shaft,” Montanari says. “Not exactly the most comfortable place to install a vacuum-insulated pipe.” To avoid these problems, the cryogenics team made the decision to send the argon down as gas instead. Routing the pipes containing liquid argon through a large bath of water will warm it up enough to turn it into gas, which will be able to travel down through a standard pipe. Re-condensers located underground, acting as massive air conditioners, will then cool the gas until it becomes a liquid again. “The big advantage is we no longer have vacuum-insulated pipe,” Montanari says. “It is just a straight piece of pipe.” Argon gas poses much less of a safety hazard because it is about 1000 times less dense than liquid argon. High pressures would be unlikely to build up and necessitate depressurizing stations, and if a leak occurred, the gas would not expand as much or cause the same kind of oxygen deficiency. The process of filling the detectors with argon will take place in four stages that will take almost two years, Montanari says. This is due to the amount of available cooling power for re-condensing the argon underground. 
There is also a limit to the amount of argon produced in the US every year, of which only so much can be acquired by the collaboration and transported to the site at a time. Illustration by Ana Kova ### Argon for answers Once filled, the liquid argon detectors will pick up light and electrons produced by neutrino interactions. Part of what makes neutrinos so fascinating to physicists is their habit of oscillating from one flavor—electron, muon or tau—to another. The parameters that govern this “flavor change” are tied directly to some of the most fundamental questions in physics, including why there is more matter than antimatter. With careful observation of neutrino oscillations, scientists in the DUNE collaboration hope to unravel these mysteries in the coming years. “At the time of the Big Bang, in theory, there should have been equal amounts of matter and antimatter in the universe,” says Eric James, DUNE’s technical coordinator. That matter and antimatter should have annihilated, leaving behind an empty universe. “But we became a matter-dominated universe.” James and other DUNE scientists will be looking to neutrinos for the mechanism behind this matter favoritism. Although the fruits of this labor won’t appear for several years, scientists are looking forward to being able to make use of the massive detectors, which are hundreds of times larger than current detectors that hold only a few hundred tons of liquid argon. Currently, DUNE scientists and engineers are working at CERN to construct Proto-DUNE, a miniature replica of the DUNE detector filled with only 300 tons of liquid argon that can be used to test the design and components. “Size is really important here,” James says. “A lot of what we’re doing now is figuring out how to take those original technologies which have already been developed... 
and taking it to this next level with bigger and bigger detectors.” ### John Baez - Azimuth Pi and the Golden Ratio Two of my favorite numbers are pi: $\pi = 3.14159...$ and the golden ratio: $\displaystyle{ \Phi = \frac{\sqrt{5} + 1}{2} } = 1.6180339...$ They’re related: $\pi = \frac{5}{\Phi} \cdot \frac{2}{\sqrt{2 + \sqrt{2 + \Phi}}} \cdot \frac{2}{\sqrt{2 + \sqrt{2 + \sqrt{2 + \Phi}}}} \cdot \frac{2}{\sqrt{2 + \sqrt{2 + \sqrt{2 + \sqrt{2 + \Phi}}}}} \cdots$ Greg Egan and I came up with this formula last weekend. It’s probably not new, and it certainly wouldn’t surprise experts, but it’s still fun coming up with a formula like this. Let me explain how we did it. History has a fractal texture. It’s not exactly self-similar, but the closer you look at any incident, the more fine-grained detail you see. The simplified stories we learn about the history of math and physics in school are like blurry pictures of the Mandelbrot set. You can see the overall shape, but the really exciting stuff is hidden. François Viète was a French mathematician who doesn’t show up in those simplified stories. He studied law at Poitiers, graduating in 1559. He began his career as an attorney at a quite high level, with cases involving the widow of King Francis I of France and also Mary, Queen of Scots. But his true interest was always mathematics. A friend said he could think about a single question for up to three days, his elbow on the desk, feeding himself without changing position. Nonetheless, he was highly successful in law. By 1590 he was working for King Henry IV. The king admired his mathematical talents, and Viète soon confirmed his worth by cracking a Spanish cipher, thus allowing the French to read all the Spanish communications they were able to obtain. In 1591, François Viète came out with an important book, introducing what is called the new algebra: a symbolic method for dealing with polynomial equations. 
This deserves to be much better known; it was very familiar to Descartes and others, and it was an important precursor to our modern notation and methods. For example, he emphasized care with the use of variables, and advocated denoting known quantities by consonants and unknown quantities by vowels. (Later people switched to using letters near the beginning of the alphabet for known quantities and letters near the end like $x,y,z$ for unknowns.) In 1593 he came out with another book, Variorum De Rebus Mathematicis Responsorum, Liber VIII. Among other things, it includes a formula for pi. In modernized notation, it looks like this: $\displaystyle{ \frac2\pi = \frac{\sqrt 2}2 \cdot \frac{\sqrt{2+\sqrt 2}}2 \cdot \frac{\sqrt{2+\sqrt{2+\sqrt 2}}}{2} \cdots}$ This is remarkable! First of all, it looks cool. Second, it’s the earliest known example of an infinite product in mathematics. Third, it’s the earliest known formula for the exact value of pi. In fact, it seems to be the earliest formula representing a number as the result of an infinite process rather than of a finite calculation! So, Viète’s formula has been called the beginning of analysis. In his article “The life of pi”, Jonathan Borwein went even further and called Viète’s formula “the dawn of modern mathematics”. How did Viète come up with his formula? I haven’t read his book, but the idea seems fairly clear. The area of the unit circle is pi. 
So, you can approximate pi better and better by computing the area of a square inscribed in this circle, and then an octagon, and then a 16-gon, and so on: If you compute these areas in a clever way, you get this series of numbers: $\begin{array}{ccl} A_4 &=& 2 \\ \\ A_8 &=& 2 \cdot \frac{2}{\sqrt{2}} \\ \\ A_{16} &=& 2 \cdot \frac{2}{\sqrt{2}} \cdot \frac{2}{\sqrt{2 + \sqrt{2}}} \\ \\ A_{32} &=& 2 \cdot \frac{2}{\sqrt{2}} \cdot \frac{2}{\sqrt{2 + \sqrt{2}}} \cdot \frac{2}{\sqrt{2 + \sqrt{2 + \sqrt{2}}}} \end{array}$ and so on, where $A_n$ is the area of a regular n-gon inscribed in the unit circle. So, it was only a small step for Viète (though an infinite leap for mankind) to conclude that $\displaystyle{ \pi = 2 \cdot \frac{2}{\sqrt{2}} \cdot \frac{2}{\sqrt{2 + \sqrt{2}}} \cdot \frac{2}{\sqrt{2 + \sqrt{2 + \sqrt{2}}}} \cdots }$ or, if square roots in a denominator make you uncomfortable: $\displaystyle{ \frac2\pi = \frac{\sqrt 2}2 \cdot \frac{\sqrt{2+\sqrt 2}}2 \cdot \frac{\sqrt{2+\sqrt{2+\sqrt 2}}}{2} \cdots}$ The basic idea here would not have surprised Archimedes, who rigorously proved that $223/71 < \pi < 22/7$ by approximating the circumference of a circle using a regular 96-gon. Since $96 = 2^5 \times 3$, you can draw a regular 96-gon with ruler and compass by taking an equilateral triangle and bisecting its edges to get a hexagon, bisecting the edges of that to get a 12-gon, and so on up to 96. In a more modern way of thinking, you can figure out everything you need to know by starting with the angle $\pi/3$ and using half-angle formulas 4 times to work out the sine or cosine of $\pi/96$. And indeed, before Viète came along, Ludolph van Ceulen had computed pi to 35 digits using a regular polygon with $2^{62}$ sides! So Viète’s daring new idea was to give an exact formula for pi that involved an infinite process. Now let’s see in detail how Viète’s formula works. 
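Before walking through the derivation, it's easy to watch the inscribed-polygon areas converge; this quick sketch (mine, not from the original post) uses the closed form $A_n = \frac{n}{2} \sin(2\pi/n)$ for the area of a regular n-gon inscribed in the unit circle:

```python
import math

def area_ngon(n):
    """Area of a regular n-gon inscribed in the unit circle."""
    return 0.5 * n * math.sin(2 * math.pi / n)

# Doubling the number of sides, the areas A_4, A_8, A_16, ... creep up to pi:
n = 4
while n <= 1024:
    print(f"A_{n:<4} = {area_ngon(n):.10f}")
    n *= 2
print(f"pi     = {math.pi:.10f}")
```

Each doubling roughly quarters the gap to pi, which is why Viète's infinite product converges quickly.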
Since there’s no need to start with a square, we might as well start with a regular n-gon inscribed in the circle and repeatedly bisect its sides, getting better and better approximations to pi. If we start with a pentagon, we’ll get a formula for pi that involves the golden ratio! We have $\displaystyle{ \pi = \lim_{k \to \infty} A_k }$ so we can also compute pi by starting with a regular n-gon and repeatedly doubling the number of vertices: $\displaystyle{ \pi = \lim_{k \to \infty} A_{2^k n} }$ The key trick is to write $A_{2^k n}$ as a ‘telescoping product’: $A_{2^k n} = A_n \cdot \frac{A_{2n}}{A_n} \cdot \frac{A_{4n}}{A_{2n}} \cdot \frac{A_{8n}}{A_{4n}} \cdots \frac{A_{2^k n}}{A_{2^{k-1} n}}$ Thus, taking the limit as $k \to \infty$ we get $\displaystyle{ \pi = A_n \cdot \frac{A_{2n}}{A_n} \cdot \frac{A_{4n}}{A_{2n}} \cdot \frac{A_{8n}}{A_{4n}} \cdots }$ where we start with the area of the n-gon and keep ‘correcting’ it to get the area of the 2n-gon, the 4n-gon, the 8n-gon and so on. There’s a simple formula for the area of a regular n-gon inscribed in a circle. You can chop it into $2 n$ right triangles, each of which has base $\sin(\pi/n)$ and height $\cos(\pi/n)$, and thus area $\frac{1}{2}\sin(\pi/n)\cos(\pi/n)$; adding up all $2n$ of them gives $n \sin(\pi/n) \cos(\pi/n)$: Thus, $A_n = n \sin(\pi/n) \cos(\pi/n) = \displaystyle{\frac{n}{2} \sin(2 \pi / n)}$ This lets us understand how the area changes when we double the number of vertices: $\displaystyle{ \frac{A_{n}}{A_{2n}} = \frac{\frac{n}{2} \sin(2 \pi / n)}{n \sin(\pi / n)} = \frac{n \sin( \pi / n) \cos(\pi/n)}{n \sin(\pi / n)} = \cos(\pi/n) }$ This is nice and simple, but we really need a recursive formula for this quantity. Let’s define $\displaystyle{ R_n = 2\frac{A_{n}}{A_{2n}} = 2 \cos(\pi/n) }$ Why the factor of 2? It simplifies our calculations slightly. We can express $R_{2n}$ in terms of $R_n$ using the half-angle formula for the cosine: $\displaystyle{ R_{2n} = 2 \cos(\pi/2n) = 2\sqrt{\frac{1 + \cos(\pi/n)}{2}} = \sqrt{2 + R_n} }$ Now we’re ready for some fun! 
We have $\begin{array}{ccl} \pi &=& \displaystyle{ A_n \cdot \frac{A_{2n}}{A_n} \cdot \frac{A_{4n}}{A_{2n}} \cdot \frac{A_{8n}}{A_{4n}} \cdots } \\ \\ & = &\displaystyle{ A_n \cdot \frac{2}{R_n} \cdot \frac{2}{R_{2n}} \cdot \frac{2}{R_{4n}} \cdots } \end{array}$ so using our recursive formula $R_{2n} = \sqrt{2 + R_n}$, which holds for any $n$, we get $\pi = \displaystyle{ A_n \cdot \frac{2}{R_n} \cdot \frac{2}{\sqrt{2 + R_n}} \cdot \frac{2}{\sqrt{2 + \sqrt{2 + R_n}}} \cdots }$ I think this deserves to be called the generalized Viète formula. And indeed, if we start with a square, we get $A_4 = \displaystyle{\frac{4}{2} \sin(2 \pi / 4)} = 2$ and $R_4 = 2 \cos(\pi/4) = \sqrt{2}$ giving Viète’s formula: $\pi = \displaystyle{ 2 \cdot \frac{2}{\sqrt{2}} \cdot \frac{2}{\sqrt{2 + \sqrt{2}}} \cdot \frac{2}{\sqrt{2 + \sqrt{2 + \sqrt{2}}}} \cdots }$ as desired! But what if we start with a pentagon? For this it helps to remember a beautiful but slightly obscure trig fact: $\cos(\pi / 5) = \Phi/2$ and a slightly less beautiful one: $\displaystyle{ \sin(2\pi / 5) = \frac{1}{2} \sqrt{2 + \Phi} }$ It’s easy to prove these, and I’ll show you how later. For now, note that they imply $A_5 = \displaystyle{\frac{5}{2} \sin(2 \pi / 5)} = \frac{5}{4} \sqrt{2 + \Phi}$ and $R_5 = 2 \cos(\pi/5) = \Phi$ Thus, the formula $\pi = \displaystyle{ A_5 \cdot \frac{2}{R_5} \cdot \frac{2}{\sqrt{2 + R_5}} \cdot \frac{2}{\sqrt{2 + \sqrt{2 + R_5}}} \cdots }$ gives us $\pi = \displaystyle{ \frac{5}{4} \sqrt{2 + \Phi} \cdot \frac{2}{\Phi} \cdot \frac{2}{\sqrt{2 + \Phi}} \cdot \frac{2}{\sqrt{2 + \sqrt{2 + \Phi}}} \cdots }$ or, cleaning it up a bit, the formula we want: $\pi = \frac{5}{\Phi} \cdot \frac{2}{\sqrt{2 + \sqrt{2 + \Phi}}} \cdot \frac{2}{\sqrt{2 + \sqrt{2 + \sqrt{2 + \Phi}}}} \cdot \frac{2}{\sqrt{2 + \sqrt{2 + \sqrt{2 + \sqrt{2 + \Phi}}}}} \cdots$ Voilà! There’s a lot more to say, but let me just explain the slightly obscure trigonometry facts we needed. 
To derive these, I find it nice to remember that a regular pentagon, and the pentagram inside it, contain lots of similar triangles: Using the fact that all these triangles are similar, it’s easy to show that for any one, the ratio of the long side to the short side is $\Phi$ to 1, since $\displaystyle{\Phi = 1 + \frac{1}{\Phi} }$ Another important fact is that the pentagram trisects the interior angle of the regular pentagon, breaking the interior angle of $108^\circ = 3\pi/5$ into 3 angles of $36^\circ = \pi/5$: Again this is easy and fun to show. Combining these facts, we can prove that $\displaystyle{ \cos(2\pi/5) = \frac{1}{2\Phi} }$ and $\displaystyle{ \cos(\pi/5) = \frac{\Phi}{2} }$ To prove the first equation, chop one of those golden triangles into two right triangles and do things you learned in high school. To prove the second, do the same things to one of the short squat isosceles triangles: Starting from these equations and using $\cos^2 \theta + \sin^2 \theta = 1$, we can show $\displaystyle{ \sin(2\pi/5) = \frac{1}{2}\sqrt{2 + \Phi}}$ and, just for completeness (we don’t need it here): $\displaystyle{ \sin(\pi/5) = \frac{1}{2}\sqrt{3 - \Phi}}$ These require some mildly annoying calculations, where it helps to use the identity $\displaystyle{\frac{1}{\Phi^2} = 2 - \Phi }$ Okay, that’s all for now! But if you want more fun, try a couple of puzzles: Puzzle 1. We’ve gotten formulas for pi starting from a square or a regular pentagon. What formula do you get starting from an equilateral triangle? Puzzle 2. Using the generalized Viète formula, prove Euler’s formula $\displaystyle{ \frac{\sin x}{x} = \cos\frac{x}{2} \cdot \cos\frac{x}{4} \cdot \cos\frac{x}{8} \cdots }$ Conversely, use Euler’s formula to prove the generalized Viète formula. So, one might say that the real point of Viète’s formula, and its generalized version, is not any special property of pi, but Euler’s formula. 
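As a numerical sanity check (my own sketch, not part of the original post), the generalized Viète product converges to pi from both the square and pentagon starting points:

```python
import math

PHI = (math.sqrt(5) + 1) / 2  # the golden ratio

def viete_pi(a_n, r_n, terms=40):
    """Generalized Viete product: pi = A_n * (2/R_n) * (2/R_2n) * ...
    where R_2n = sqrt(2 + R_n), starting from a regular n-gon with
    area a_n and R_n = 2*cos(pi/n)."""
    product, r = a_n, r_n
    for _ in range(terms):
        product *= 2.0 / r
        r = math.sqrt(2.0 + r)
    return product

# Square start (A_4 = 2, R_4 = sqrt(2)): Viete's original formula.
print(viete_pi(2.0, math.sqrt(2.0)))
# Pentagon start (A_5 = (5/4)*sqrt(2 + PHI), R_5 = PHI): the golden-ratio formula.
print(viete_pi(1.25 * math.sqrt(2.0 + PHI), PHI))
```

Forty factors are far more than needed; the factors approach 1 so fast that the product reaches machine precision after a few dozen doublings.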
## March 03, 2017

### Tommaso Dorigo - Scientificblogging

Decision Trees, Explained To Kids

Decision trees are one of the many players in the booming field of supervised machine learning. They can be used to classify elements into two or more classes, depending on their characteristics. They are of great interest in particle physics applications, as we always need to decide on a statistical basis what kind of physics process originated the particle collision we see in our detector.

## March 02, 2017

### Symmetrybreaking - Fermilab/SLAC

Hey Fermilab, it’s a Monkee

Micky Dolenz, best known as a vocalist and drummer in 1960s pop band The Monkees, turns out to be one of Fermi National Accelerator Laboratory’s original fans.

“Dear Ms. Higgins,” began the email to an employee of Fermi National Accelerator Laboratory. “My name is Micky Dolenz. I am in the entertainment business and probably best known for starring in a ’60s TV show called The Monkees. I have also been a big fan of particle physics for many decades.”

The message, which laboratory archivist Valerie Higgins received in November 2016, was legit. And it turns out Dolenz wasn’t kidding about his love of physics. Dolenz visited Fermilab on February 10 and impressed and amazed the scientists he met with his knowledge of (and genuine affection for) the science of quarks, leptons and bosons. Dolenz was, by all accounts, just as excited to meet with Fermilab scientists as they were to meet with him. “He was so enthusiastic about the lab,” Higgins says. “It was such a treat to see someone of his stature and popularity be so interested and knowledgeable about our kind of physics.”

Previously unbeknownst to most of the lab’s employees, Dolenz’s association with Fermilab actually stretches back more than 40 years. The last time Dolenz visited Fermilab, the year was 1970.
The Monkees TV show had wound down, and Dolenz, then 25, was starring in a play called Remains to Be Seen at the Pheasant Run Playhouse in nearby St. Charles, Illinois. Fermilab wasn’t even called Fermilab yet—it still went by the name National Accelerator Laboratory. Dolenz says he remembers his first visit well. At the time, the lab consisted of a few trailers and bungalows—Fermilab’s now-iconic high-rise building, Wilson Hall, would not be completed until 1973. Dolenz had lunch with several of the scientists, then toured the construction site for the Main Ring, the future home of Fermilab’s first superconducting accelerator, the Tevatron. Dolenz captured some of his visit on 16mm film, footage he says he still has in storage. Dolenz called his previous tour of Fermilab “wonderful” and “a dream come true.”

Dolenz credits a junior high science teacher with sparking his interest in physics. He spent much of his childhood in Los Angeles building oscilloscopes, transceivers for ham radios and other gadgets. “I was always curious, always building stuff,” he says. “While the other kids were reading Superman comics, I was reading Science News. I loved it all, particularly particle physics and quantum physics.” Dolenz was in training to be an architect, but at age 20, the Monkees audition offered him the opportunity to catapult to worldwide fame as a TV star and musician instead. (“I’m not an idiot,” he says of accepting the role.) Still, he maintained his interest in science—his first email address, created in the 1990s, was “Higgs137,” referencing both the then-undiscovered Higgs boson and the inverse of the fine structure constant.

Fermilab Director Nigel Lockyer, left, and Deputy Director Joe Lykken, right, talk with Monkee Micky Dolenz during his tour. Photo by Reidar Hahn, Fermilab

That interest in science has remained strong, Fermilab physicists noted during the February tour.
Dolenz toured the underground cavern that houses detectors for the MINOS, NOvA and MINERvA neutrino experiments, the Muon g-2 experiment hall (where scientists played the theme from The Monkees when he walked in), and the DZero detector in the long-since completed Main Ring. He also spent time in three control rooms. In every location, he impressed the scientists he met with his understanding of physics and his full-on joy at seeing science in action. “Who knew he is a life-long physics aficionado?” says scientist Adam Lyon, who gave Dolenz his Tevatron tour. “I had a great time talking with him.”

Dolenz says he sees plenty of connection between his twin interests of physics and music, noting that Einstein played the violin; Richard Feynman played bongos; and Queen guitarist Brian May is an astrophysicist on several experimental collaborations. “According to theory the universe is constantly vibrating, down to even the smallest particles,” Dolenz says. “We talked a lot about vibrations in the ’60s, and Eastern philosophy has been talking about the vibration of the universe for thousands of years. Music is vibration and meter and frequency. There’s a lot of overlap.”

Dolenz enjoyed his time at Fermilab so much that he hung out at the lab’s on-site pub until late in the evening, chatting with scientists. And according to Higgins, who spent the most time with him, he’s hoping to return very soon. “He’s still looking for the footage he shot in 1970, and plans to donate that to the archive,” she says. “But I told him he’s welcome here anytime.”

Monkee Micky Dolenz stands by a model particle accelerator with Fermilab physicist Herman White and Fermilab Director of Communication Katie Yurkewicz. Photo by Reidar Hahn, Fermilab

## March 01, 2017

### Jon Butterworth - Life and Physics

## February 28, 2017

### Symmetrybreaking - Fermilab/SLAC

How to build a universe

Our universe should be a formless fog of energy. Why isn’t it?
According to the known laws of physics, the universe we see today should be dark, empty and quiet. There should be no stars, no planets, no galaxies and no life—just energy and simple particles diffusing further and further into an expanding universe. And yet, here we are. Cosmologists calculate that roughly 13.8 billion years ago, our universe was a hunk of thick, hot energy with no boundaries and its own rules. But then, in less than a microsecond, it matured, and the fundamental laws and properties of matter arose from the pandemonium. How did our elegant and intricate universe emerge?

Illustration by Corinne Mucha

### The three conditions

The question “How is it here?” alludes to a conundrum that arose during the development of quantum mechanics. In 1928 Paul Dirac combined quantum theory and special relativity to predict the energy of an electron moving near the speed of light. But his equations produced two equally favorable answers: one positive and one negative. Because energy itself cannot be negative, Dirac mused that perhaps the two answers represented the particle’s two possible electric charges. The idea of oppositely charged matter-antimatter pairs was born.

Meanwhile, about six minutes away from Dirac’s office in Cambridge, physicist Patrick Blackett was studying the patterns etched in cloud chambers by cosmic rays. In 1933 he detected 14 tracks that showed a single particle of light colliding with an air molecule and bursting into two new particles. The spiral tracks of these new particles were mirror images of each other, indicating that they were oppositely charged. This was one of the first observations of what Dirac had predicted five years earlier—the birth of an electron-positron pair.

Today it’s well known that matter and antimatter are the ultimate wonder twins. They’re spontaneously born from raw energy as a team of two and vanish in a silent poof of energy when they merge and annihilate.
This appearing-disappearing act spawned one of the most fundamental mysteries in the universe: What is engraved in the laws of nature that saved us from the broth of appearing and annihilating particles of matter and antimatter?

“We know this cosmic asymmetry must exist because here we are,” says Jessie Shelton, a theorist at the University of Illinois. “It’s a puzzling imbalance because theory requires three conditions—which all have to be true at once—to create this cosmic preference for matter.” In the 1960s physicist Andrei Sakharov proposed this set of three conditions that could explain the appearance of our matter-dominated universe. Scientists continue to look for evidence of these conditions today.

Illustration by Corinne Mucha

### 1. Breaking the tether

The first problem is that matter and antimatter always seem to be born together. Just as Blackett observed in the cloud chambers, uncharged energy transforms into evenly balanced matter-antimatter pairs. Charge is always conserved through any transition. For there to be an imbalance in the amounts of matter and antimatter, there needs to be a process that creates more of one than the other. “Sakharov’s first criterion essentially says that there must be some new process that converts antimatter into matter, or vice versa,” says Andrew Long, a postdoctoral researcher in cosmology at the University of Chicago. “This is one of the things experimentalists are looking for in the lab.”

In the 1980s, scientists searched for evidence of Sakharov’s first condition by looking for signs of a proton decaying into a positron and two photons. They have yet to find evidence of this modern alchemy, but they continue to search. “We think that the early universe could have contained a heavy neutral particle that sometimes decayed into matter and sometimes decayed into antimatter, but not necessarily into both at the same time,” Long says.

Illustration by Corinne Mucha

### 2. Picking a favorite

Matter and antimatter cannot cohabitate; they always annihilate when they come into contact. But the creation of just a little more matter than antimatter after the Big Bang—about one part in 10 billion—would leave behind the ingredients needed to build the entire visible universe. How could this come about? Sakharov’s second criterion dictates that the matter-only process outlined in his first criterion must be more efficient than the opposing antimatter process. And specifically, “we need to see a favoritism for the right kinds of matter to agree with astronomical observations,” Shelton says.

Observations of light left over from the early universe and measurements of the first lightweight elements produced after the Big Bang show that the discrepancy must exist in a class of particles called baryons: protons, antiprotons and other particles constructed from quarks. “These are snapshots of the early universe,” Shelton says. “From these snapshots, we can derive the density and temperature of the early universe and calculate the slight difference between the number of baryons and antibaryons.” But this slight difference presents a problem. While there are some tiny discrepancies between the behavior of particles and their antiparticle counterparts, these idiosyncrasies are still consistent with the Standard Model and are not enough to explain the origin of the cosmic imbalance or the universe’s tenderness towards matter.

Illustration by Corinne Mucha

### 3. Taking a one-way street

In particle physics, any process that runs forward can just as easily run in reverse. A pair of photons can merge and morph into a particle and antiparticle pair. And just as easily, the particle and antiparticle pair can recombine into a pair of photons. This process happens all around us, continually. But because it is cyclical, there is no net gain or loss for a type of matter.
If this were always true, our young universe could have been locked in an infinite loop of creation and destruction. Without something slamming the brakes on these cycles at least for a moment, matter could not have evolved into the complex structures we see today. “For every stitch that’s knit, there’s a simultaneous tug on the thread,” Long says. “We need a way to force the reaction to move forward and not simultaneously run in reverse at the same rate.”

Many cosmologists suspect that the gradual expansion and cooling of the universe was enough to lock matter into being, like a supersaturated sweet tea whose sugar crystals drop to the bottom of the glass as it cools (or in the “freezing” interpretation, like a sweet tea that instantly freezes into ice, locking sugar crystals in place without giving them a chance to dissolve). Other cosmologists think that the plasma of the early universe may have contained bubbles that helped separate matter and antimatter (and then served as incubators for particles to acquire mass).

Several experiments at CERN are looking for evidence that the universe meets Sakharov’s three conditions. For instance, several precision experiments at CERN’s Antimatter Factory are looking for minuscule differences between the intrinsic characteristics of protons and antiprotons. The LHCb experiment at the Large Hadron Collider is examining the decay patterns of unstable matter and antimatter particles. Shelton and Long both hope that more research from experiments at the LHC will be the key to building a more complete picture of our early universe. LHC experiments could discover that the Higgs field served as the lock that halted the early universe’s perpetually evolving and devolving particle soup—especially if the field contained bubbles that froze faster than others, providing cosmic petri dishes in which matter and antimatter could evolve differently, Long says.
“More measurements of the Higgs boson and the fundamental properties of matter and antimatter will help us develop better theories and a better understanding of what and where we come from.” What exactly transpired during the birth of our universe may always remain a bit of an enigma, but we continue to seek new pieces of this formidable puzzle.

## February 26, 2017

### Clifford V. Johnson - Asymptotia

Sandwich Bag Graffiti

A little while back, toward the end of December last year, I did a long stretch of days where I needed to change my routine a bit to take advantage of a window of time that came up that I could use for pushing forward on the book. I was falling behind and desperately needed to improve my daily production rate of finished art in order to catch up. So, I ended up ditching making a sandwich in the morning, instead leaving very soon after getting up to head to my office. I then stopped taking my sandwich altogether when I ran out of bread and did not make the time in the evening to bake a fresh batch, as I do once a week or so, because I was just coming back home and falling into bed. The USC catering outlets were all closed that week. This meant that I ended up seeking out a place to buy a sandwich near my office. I found a place [...]

The post Sandwich Bag Graffiti appeared first on Asymptotia.

## February 23, 2017

### Symmetrybreaking - Fermilab/SLAC

Instrument finds new earthly purpose

Detectors long used to look at the cosmos are now part of X-ray experiments here on Earth.

Modern cosmology experiments—such as the BICEP instruments and the Keck Array in Antarctica—rely on superconducting photon detectors to capture signals from the early universe. These detectors, called transition edge sensors, are kept at temperatures near absolute zero, at only tenths of a Kelvin. At this temperature, at the “transition” between superconducting and normal states, the sensors function like an extremely sensitive thermometer.
They are able to detect heat from cosmic microwave background radiation, the glow emitted after the Big Bang, which is only slightly warmer at around 3 Kelvin. Scientists also have been experimenting with these same detectors to catch a different form of light, says Dan Swetz, a scientist at the National Institute of Standards and Technology. These sensors also happen to work quite well as extremely sensitive X-ray detectors. NIST scientists, including Swetz, design and build the thin, superconducting sensors and turn them into pixelated arrays smaller than a penny. They construct an entire X-ray spectrometer system around those arrays, including a cryocooler, a refrigerator that can keep the detectors near absolute zero temperatures.

TES array and cover shown with penny coin for scale. Dan Schmidt, NIST

Over the past several years, these X-ray spectrometers built at the NIST Boulder MicroFabrication Facility have been installed at three synchrotrons at US Department of Energy national laboratories: the National Synchrotron Light Source at Brookhaven National Laboratory, the Advanced Photon Source at Argonne National Laboratory and most recently at the Stanford Synchrotron Radiation Lightsource at SLAC National Accelerator Laboratory.

Organizing the transition edge sensors into arrays made a more powerful detector. The prototype sensor—built in 1995—consisted of only one pixel. These early detectors had poor resolution, says physicist Kent Irwin of Stanford University and SLAC. He built the original single-pixel transition edge sensor as a postdoc. Like a camera, the detector can capture greater detail the more pixels it has. “It’s only now that we’re hitting hundreds of pixels that it’s really getting useful,” Irwin says. “As you keep increasing the pixel count, the science you can do just keeps multiplying. And you start to do things you didn’t even conceive of being possible before.” Each of the 240 pixels is designed to catch a single photon at a time.
These detectors are efficient, says Irwin, collecting photons that may be missed with more conventional detectors. Spectroscopy experiments at synchrotrons examine subtle features of matter using X-rays. In these types of experiments, an X-ray beam is directed at a sample. Energy from the X-rays temporarily excites the electrons in the sample, and when the electrons return to their lower energy state, they release photons. The photons’ energy is distinctive for a given chemical element and contains detailed information about the electronic structure. As the transition edge sensor captures these photons, every individual pixel on the detector functions as a high-energy-resolution spectrometer, able to determine the energy of each photon collected. The researchers combine data from all the pixels and make note of the pattern of detected photons across the entire array and each of their energies. This energy spectrum reveals information about the molecule of interest. These spectrometers are 100 times more sensitive than standard spectrometers, says Dennis Nordlund, SLAC scientist and leader of the transition edge sensor project at SSRL. This allows a look at biological and chemical details at extremely low concentrations using soft (low-energy) X-rays. “These technology advances mean there are many things we can do now with spectroscopy that were previously out of reach,” Nordlund says. “With this type of sensitivity, this is when it gets really interesting for chemistry.” Nordlund and his colleagues—Sangjun Lee, a SLAC postdoctoral research fellow, and Jamie Titus, a Stanford University doctoral student (pictured above at SSRL, from left: Lee, Titus and Nordlund)—have already used the transition-edge-sensor spectrometer at SSRL to probe for nitrogen impurities in nanodiamonds and graphene, as well as closely examine the metal centers of proteins and bioenzymes, such as hemoglobin and photosystem II. 
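Conceptually, the readout described above amounts to pooling single-photon energy measurements from every pixel into one binned spectrum. A toy sketch in Python of that pooling step — the pixel names, energy values and bin width here are all invented for illustration, not real TES data:

```python
from collections import Counter

# Hypothetical single-photon energy readings (in eV) from a few pixels;
# each pixel acts as its own spectrometer, recording one energy per photon.
pixel_readings = {
    "pixel_0": [392.4, 392.1, 530.1],
    "pixel_1": [391.9, 529.8, 530.2],
    "pixel_2": [392.2, 529.9],
}

BIN_WIDTH = 1.0  # eV; an arbitrary choice for this sketch

def build_spectrum(readings, bin_width):
    """Pool photon energies from every pixel into one binned spectrum."""
    spectrum = Counter()
    for energies in readings.values():
        for e in energies:
            spectrum[round(e / bin_width) * bin_width] += 1
    return dict(sorted(spectrum.items()))

print(build_spectrum(pixel_readings, BIN_WIDTH))
# → {392.0: 4, 530.0: 4}: one peak near 392 eV, one near 530 eV
```

The resulting histogram is the energy spectrum the article describes: peaks at the characteristic photon energies of the elements in the sample.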
The project at SLAC was developed with support by the Department of Energy’s Laboratory Directed Research and Development. The early experiments at Brookhaven looked at bonding and the chemical structure of nitrogen-bearing explosives. With the spectrometer at Argonne, a research team recently took scattering measurements on high-temperature superconducting materials. “The instruments are very similar from a technical standpoint—same number of sensors, similar resolution and performance,” Swetz says. “But it’s interesting, the labs are all doing different science with the same basic equipment.”

At NIST, Swetz says they’re working to pair these detectors with less intense light sources, which could enable researchers to do X-ray experiments in their personal labs. There are plans to build transition-edge-sensor spectrometers that will work in the higher energy hard X-ray region, which scientists at Argonne are working on for the next upgrade of Advanced Photon Source. To complement this, the SLAC and NIST collaboration is engineering spectrometers that will handle the high repetition rate of X-ray laser pulses such as LCLS-II, the next generation of the free-electron X-ray laser at SLAC. This will require faster readout systems. The goal is to create a transition-edge-sensor array with as many as 10,000 pixels that can capture more than 10,000 pulses per second. Irwin points out that the technology developed for synchrotrons, LCLS-II and future cosmic-microwave-background experiments provides shared benefit. “The information really keeps bouncing back and forth between X-ray science and cosmology,” Irwin says.

### John Baez - Azimuth

Saving Climate Data (Part 6)

Scott Pruitt, who filed legal challenges against Environmental Protection Agency rules fourteen times, working hand in hand with oil and gas companies, is now head of that agency. What does that mean about the safety of climate data on the EPA’s websites?
Here is an inside report:

• Dawn Reeves, EPA preserves Obama-Era website but climate change data doubts remain, InsideEPA.com, 21 February 2017.

For those of us who are backing up climate data, the really important stuff is in red near the bottom.

The EPA has posted a link to an archived version of its website from Jan. 19, the day before President Donald Trump was inaugurated and the agency began removing climate change-related information from its official site, saying the move comes in response to concerns that it would permanently scrub such data. However, the archived version notes that links to climate and other environmental databases will go to current versions of them—continuing the fears that the Trump EPA will remove or destroy crucial greenhouse gas and other data.

The archived version was put in place and linked to the main page in response to “numerous [Freedom of Information Act (FOIA)] requests regarding historic versions of the EPA website,” says an email to agency staff shared by the press office. “The Agency is making its best reasonable effort to 1) preserve agency records that are the subject of a request; 2) produce requested agency records in the format requested; and 3) post frequently requested agency records in electronic format for public inspection. To meet these goals, EPA has re-posted a snapshot of the EPA website as it existed on January 19, 2017.” The email adds that the action is similar to the snapshot taken of the Obama White House website.

The archived version of EPA’s website includes a “more information” link that offers more explanation. For example, it says the page is “not the current EPA website” and that the archive includes “static content, such as webpages and reports in Portable Document Format (PDF), as that content appeared on EPA’s website as of January 19, 2017.” It cites technical limits for the database exclusions.
“For example, many of the links contained on EPA’s website are to databases that are updated with the new information on a regular basis. These databases are not part of the static content that comprises the Web Snapshot.” Searches of the databases from the archive “will take you to the current version of the database,” the agency says. “In addition, links may have been broken in the website as it appeared” on Jan. 19 and those will remain broken on the snapshot. Links that are no longer active will also appear as broken in the snapshot. “Finally, certain extremely large collections of content… were not included in the Snapshot due to their size” such as AirNow images, radiation network graphs, historic air technology transfer network information, and EPA’s searchable news releases.

#### ‘Smart’ Move

One source urging the preservation of the data says the snapshot appears to be a “smart” move on EPA’s behalf, given the FOIA requests it has received, and notes that even though other groups like NextGen Climate and scientists have been working to capture EPA’s online information, having it on EPA’s site makes it official. But it could also be a signal that big changes are coming to the official Trump EPA site, and it is unclear how long the agency will maintain the archived version. The source says while it is disappointing that the archive may signal the imminent removal of EPA’s climate site, “at least they are trying to accommodate public concerns” to preserve the information.

A second source adds that while it is good that EPA is seeking “to address the widespread concern” that the information will be removed by an administration that does not believe in human-caused climate change, “on the other hand, it doesn’t address the primary concern of the data. It is snapshots of the web text.” Also, information “not included,” such as climate databases, is what is difficult to capture by outside groups and is what really must be preserved.
“If they take [information] down” that groups have been trying to preserve, then the underlying concern about access to data remains. “Web crawlers and programs can do things that are easy,” such as taking snapshots of text, “but getting the data inside the database is much more challenging,” the source says. The first source notes that EPA’s searchable databases, such as those maintained by its Clean Air Markets Division, are used by the public “all the time.” The agency’s Office of General Counsel (OGC) Jan. 25 began a review of the implications of taking down the climate page—a planned wholesale removal that was temporarily suspended to allow for the OGC review. But EPA did remove some specific climate information, including links to the Clean Power Plan and references to President Barack Obama’s Climate Action Plan. Inside EPA captured this screenshot of the “What EPA Is Doing” page regarding climate change. Those links are missing on the Trump EPA site. The archive includes the same version of the page as captured by our screenshot. Inside EPA first reported the plans to take down the climate information on Jan. 17. After the OGC investigation began, a source close to the Trump administration said Jan. 31 that climate “propaganda” would be taken down from the EPA site, but that the agency is not expected to remove databases on GHG emissions or climate science. “Eventually… the propaganda will get removed…. Most of what is there is not data. Most of what is there is interpretation.” The Sierra Club and Environmental Defense Fund both filed FOIA requests asking the agency to preserve its climate data, while attorneys representing youth plaintiffs in a federal climate change lawsuit against the government have also asked the Department of Justice to ensure the data related to its claims is preserved. The Azimuth Climate Data Backup Project and other groups are making copies of actual databases, not just the visible portions of websites. 
## February 21, 2017

### Symmetrybreaking - Fermilab/SLAC

Mobile Neutrino Lab makes its debut

The Mystery Machine for particles hits the road.

It’s not as flashy as Scooby Doo’s Mystery Machine, but scientists at Virginia Tech hope that their new vehicle will help solve mysteries about a ghost-like phenomenon: neutrinos. The Mobile Neutrino Lab is a trailer built to contain and transport a 176-pound neutrino detector named MiniCHANDLER (Carbon Hydrogen AntiNeutrino Detector with a Lithium Enhanced Raghavan-optical-lattice). When it begins operations in mid-April, MiniCHANDLER will make history as the first mobile neutrino detector in the US. “Our main purpose is just to see neutrinos and measure the signal to noise ratio,” says Jon Link, a member of the experiment and a professor of physics at Virginia Tech’s Center for Neutrino Physics. “We just want to prove the detector works.”

Neutrinos are fundamental particles with no electric charge, a property that makes them difficult to detect. These elusive particles have confounded scientists on several fronts for more than 60 years. MiniCHANDLER is specifically designed to detect neutrinos' antimatter counterparts, antineutrinos, produced in nuclear reactors, which are prolific sources of the tiny particles. Fission at the core of a nuclear reactor splits uranium atoms, whose products themselves undergo a process that emits an electron and electron antineutrino. Other, larger detectors such as Daya Bay have capitalized on this abundance to measure neutrino properties.

MiniCHANDLER will serve as a prototype for future mobile neutrino experiments up to 1 ton in size. Link and his colleagues hope MiniCHANDLER and its future counterparts will find answers to questions about sterile neutrinos, an undiscovered, theoretical kind of neutrino and a candidate for dark matter. The detector could also have applications for national security by serving as a way to keep tabs on material inside of nuclear reactors.
MiniCHANDLER echoes a similar mobile detector concept from a few years ago. In 2014, a Japanese team published results from another mobile neutrino detector, but their data did not meet the threshold for statistical significance. Detector operations were halted after all reactors in Japan were shut down for safety inspections. “We can monitor the status from outside of the reactor buildings thanks to [a] neutrino’s strong penetration power,” Shugo Oguri, a scientist who worked on the Japanese team, wrote in an email. Link and his colleagues believe their design is an improvement, and the hope is that MiniCHANDLER will be able to better reject background events and successfully detect neutrinos.

### Neutrinos, where are you?

To detect neutrinos, which are abundant but interact very rarely with matter, physicists typically use huge structures such as Super-Kamiokande, a neutrino detector in Japan that contains 50,000 tons of ultra-pure water. Experiments are also often placed far underground to block out signals from other particles that are prevalent on Earth’s surface. With its small size and aboveground location, MiniCHANDLER subverts both of these norms. The detector uses solid scintillator technology, which will allow it to record about 100 antineutrino interactions per day. This interaction rate is less than the rate at large detectors, but MiniCHANDLER makes up for this with its precise tracking of antineutrinos. Small plastic cubes pinpoint where in MiniCHANDLER an antineutrino interacts by detecting light from the interaction. However, the same kind of light signal can also come from other passing particles like cosmic rays. To distinguish between the antineutrino and the riffraff, Link and his colleagues look for multiple signals to confirm the presence of an antineutrino. Those signs come from a process called inverse beta decay.
Inverse beta decay occurs when an antineutrino collides with a proton, producing light (the first event) and also kicking a neutron out of the nucleus of the atom. These emitted neutrons are slower than the light and are picked up as a secondary signal to confirm the antineutrino interaction. “[MiniCHANDLER] is going to sit on the surface; it's not shielded well at all. So it's going to have a lot of background,” Link says. “Inverse beta decay gives you a way of rejecting the background by identifying the two-part event.”

### Monitoring the reactors

Scientists could find use for a mobile neutrino detector beyond studying reactor neutrinos. They could also use the detector to measure properties of the nuclear reactor itself. A mobile neutrino detector could be used to determine whether a reactor is in use, Oguri says. “Detection unambiguously means the reactors are in operation—nobody can cheat the status.” The detector could also be used to determine whether material from a reactor has been repurposed to produce nuclear weapons. Plutonium, an element used in the process of making weapons-grade nuclear material, produces 60 percent fewer detectable neutrinos than uranium, the primary component in a reactor core. “We could potentially tell whether or not the reactor core has the right amount of plutonium in it,” Link says. Using a neutrino detector would be a non-invasive way to track the material; other methods of testing nuclear reactors can be time-consuming and disruptive to the reactor’s processes. But for now, Link just wants MiniCHANDLER to achieve a simple—yet groundbreaking—goal: Get the mobile neutrino lab running.

## February 18, 2017

### John Baez - Azimuth

Azimuth Backup Project (Part 4)

The Azimuth Climate Data Backup Project is going well! Our Kickstarter campaign ended on January 31st and the money has recently reached us. Our original goal was $5000. We got $20,427 of donations, and after Kickstarter took its cut we received $18,590.96.

Next time I’ll tell you what our project has actually been doing. This time I just want to give a huge “thank you!” to all 627 people who contributed money on Kickstarter!

I sent out thank you notes to everyone, updating them on our progress and asking if they wanted their names listed. The blanks in the following list represent people who either didn’t reply, didn’t want their names listed, or backed out and decided not to give money. I’ll list people in chronological order: first contributors first.

Only 12 people backed out; the vast majority of blanks on this list are people who haven’t replied to my email. I noticed some interesting, if unsurprising, patterns. For example, people who contributed later are less likely to have answered my email yet—I’ll update this list as replies come in. People who contributed more money were also more likely to answer my email.

The magnitude of contributions ranged from $2000 to $1. A few people offered to help in other ways. The response was international—this was really heartwarming! People from the US were more likely than others to ask not to be listed.

But instead of continuing to list statistical patterns, let me just thank everyone who contributed.

Daniel Estrada
Ahmed Amer
Saeed Masroor
Jodi Kaplan
John Wehrle
Bob Calder
Andrea Borgia
L Gardner

Uche Eke
Keith Warner
Dean Kalahan
James Benson
Dianne Hackborn

Walter Hahn
Thomas Savarino
Noah Friedman
Eric Willisson
Jeffrey Gilmore
John Bennett
Glenn McDavid

Brian Turner

Peter Bagaric

Martin Dahl Nielsen
Broc Stenman

Gabriel Scherer
Roice Nelson
Felipe Pait
Kenneth Hertz

Luis Bruno

Andrew Lottmann
Alex Morse

Noam Zeilberger

Buffy Lyon

Josh Wilcox

Danny Borg

Krishna Bhogaonker
Harald Tveit Alvestrand

Tarek A. Hijaz, MD
Jouni Pohjola
Chavdar Petkov
Markus Jöbstl
Bjørn Borud

Sarah G

William Straub

Frank Harper
Carsten Führmann
Rick Angel
Drew Armstrong

Jesimpson

Valeria de Paiva
Ron Prater
David Tanzer

Rafael Laguna
Miguel Esteves dos Santos
Sophie Dennison-Gibby

Randy Drexler
Peter Haggstrom

Jerzy Michał Pawlak
Santini Basra
Jenny Meyer

John Iskra

Bruce Jones
Māris Ozols
Everett Rubel

Mike D
Manik Uppal
Todd Trimble

Federer Fanatic

Forrest Samuel, Harmos Consulting

Annie Wynn
Norman and Marcia Dresner

Daniel Mattingly
James W. Crosby

Jennifer Booth
Greg Randolph

Dave and Karen Deeter

Sarah Truebe

Tieg Zaharia
Jeffrey Salfen
Birian Abelson

Logan McDonald

Brian Truebe
Jon Leland

Nicole

Sarah Lim

James Turnbull

John Huerta
Katie Mandel Bruce
Bethany Summer

Heather Tilert

Naom Hart
Aaron Riley

Giampiero Campa

Julie A. Sylvia

Pace Willisson

Bangskij

Peter Herschberg

Alaistair Farrugia

Conor Hennessy

Stephanie Mohr

Torinthiel

Lincoln Muri
Anet Ferwerda

Hanna

Michelle Lee Guiney

Ben Doherty
Trace Hagemann

Ryan Mannion

Penni and Terry O'Hearn

Brian Bassham
Caitlin Murphy
John Verran

Susan

Alexander Hawson
Fabrizio Mafessoni
Anita Phagan
Nicolas Acuña
Niklas Brunberg

V. Lazaro Zamora

Branford Werner
Niklas Starck Westerberg
Luca Zenti and Marta Veneziano

Ilja Preuß
Christopher Flint

Courtney Leigh

Katharina Spoerri

Daniel Risse

Hanna
Charles-Etienne Jamme
rhackman41

Jeff Leggett

RKBookman

Aaron Paul
Mike Metzler

Patrick Leiser

Melinda

Ryan Vaughn
Kent Crispin

Michael Teague

Ben

Fabian Bach
Steven Canning

Betsy McCall

John Rees

Mary Peters

Shane Claridge
Thomas Negovan
Tom Grace
Justin Jones

Jason Mitchell

Josh Weber
Rebecca Lynne Hanginger
Kirby

Dawn Conniff

Michael T. Astolfi

Kristeva

Erik
Keith Uber

Elaine Mazerolle
Matthieu Walraet

Linda Penfold

Lujia Liu

Keith

Samar Tareem

Henrik Almén
Michael Deakin
Rutger Ockhorst

Erin Bassett
James Crook

Junior Eluhu
Dan Laufer
Carl
Robert Solovay

Silica Magazine

Leonard Saers
Alfredo Arroyo García

Larry Yu

John Behemonth

Eric Humphrey

Svein Halvor Halvorsen

Karim Issa

Øystein Risan Borgersen
David Anderson Bell III

Ole-Morten Duesend

Robert Biegler

Qu Wenhao

Steffen Dittmar

Shanna Germain

John WS Marvin (Dread Unicorn Games)

Bill Carter
Darth Chronis

Lawrence Stewart

Gareth Hodges

Colin Backhurst
Christopher Metzger

Rachel Gumper

Mariah Thompson

Johnathan Salter

Maggie Unkefer
Shawna Maryanovich

Wilhelm Fitzpatrick
Dylan “ExoByte” Mayo
Lynda Lee

Scott Carpenter

Charles D. Payet
Vince Rostkowski

Tim Brown
Raven Daegmorgan
Zak Brueckner

Christian Page

Steven Greenberg
Chuck Lunney

Natasha Anicich

Bram De Bie
Edward L

Gray Detrick
Robert

Sarah Russell

Sam Leavin

Abilash Pulicken

Isabel Olondriz
James Pierce
James Morrison

April Daniels

José Tremblay Champagne

Chris Edmonds

Hans & Maria Cummings
Bart Gasiewiski

Andy Chamard

Andrew Jackson

Christopher Wright

Crystal Collins

ichimonji10

Alan Stern
Alison W

Dag Henrik Bråtane

Martin Nilsson



## February 16, 2017

### Symmetrybreaking - Fermilab/SLAC

Wizardly neutrinos

Why can a neutrino pass through solid objects?

Physicist Anne Schukraft of Fermi National Accelerator Laboratory explains.

Video of 5SniR5U6YTU

Have a burning question about particle physics? Let us know via email or Twitter (using the hashtag #AskSymmetry). We might answer you in a future video!

## February 14, 2017

### Symmetrybreaking - Fermilab/SLAC

LHCb observes rare decay

Standard Model predictions align with the LHCb experiment’s observation of an uncommon decay.

The Standard Model is holding strong after a new precision measurement of a rare subatomic process.

For the first time, the LHCb experiment at CERN has independently observed the decay of the Bs0 particle—a heavy composite particle consisting of a bottom antiquark and a strange quark—into two muons. The LHCb experiment co-discovered this rare process in 2015 after combining results with the CMS experiment.

Theorists predicted that this particular decay would occur only a few times out of a billion.
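To see what “a few times out of a billion” means in practice, one can fold such a branching fraction into a production count. The numbers below are purely illustrative assumptions, not LHCb’s actual yields:

```python
# Back-of-the-envelope yield for a rare decay. The 3-per-billion rate
# stands in for the Standard Model's "a few times out of a billion"
# prediction; the production count is a hypothetical round number.

n_bs_produced = 10**12      # assumed number of Bs0 mesons produced
decays_per_billion = 3      # assumed branching fraction, in parts per 1e9

expected_decays = n_bs_produced * decays_per_billion // 10**9
print(expected_decays)  # 3000
```

Even a trillion produced mesons would yield only a few thousand such decays before accounting for detector acceptance and efficiency, which is why the search took decades.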

“Our measurement is slightly lower than predictions, but well within the range of experimental uncertainty and fully compatible with our models,” says Flavio Archilli, one of the co-leaders of this analysis and a postdoc at Nikhef National Institute for Subatomic Physics. “The theoretical predictions are very accurate, so now we want to improve our precision to see if our measurement is sitting right on top of the expected value or slightly outside, which could be an indication of new physics.”

The LHCb experiment examines the properties and decay patterns of particles to search for cracks in the Standard Model, our best description of the fundamental particles and forces. Any deviations from the Standard Model’s predictions could be evidence of new physics at play.

Supersymmetry, for example, is a popular theory that adds a host of new particles to the Standard Model and ameliorates many of its shortcomings—such as mathematical imbalances between how the different types of particles contribute to subatomic interactions.

“We love this decay because it is one of the most promising places to search for any new effects of supersymmetry,” Archilli says. “Scientists searched for this decay for more than 30 years and now we finally have the first single-experiment observation.”

This new measurement by the LHCb experiment combines data taken from Run 1 and Run 2 of the Large Hadron Collider and employs more refined analysis techniques, making it the most precise measurement of this process to date. In addition to measuring the rate of this rare decay, LHCb researchers also measured how long the Bs0 particle lives before it transforms into the two muons—another measurement that agrees with the Standard Model’s predictions.

“It’s gratifying to have achieved these results,” says Università di Pisa scientist Matteo Rama, one of the co-leaders of this analysis. “They reward the efforts made to improve the analysis techniques, to exploit our data even further. We look forward to updating the measurement with more data with the hope to observe, one day, significant deviations from the Standard Model predictions.”

Event display of a typical Bs0 decay into two muons. The two muon tracks from the Bs0 decay are seen as a pair of green tracks traversing the whole detector.

LHCb collaboration

### Symmetrybreaking - Fermilab/SLAC

Physics love poems

This Valentine’s Day, we challenged our readers to send us physics-inspired love poems. You answered the call: We received dozens of submissions—in four different languages! You can find some of our favorite entries below.

But first, as a warm-up, enjoy a video of real scientists at Fermi National Accelerator Laboratory reciting physics-related Valentine’s Day haiku:

Video of lqoFbSyNDF8

Or read the haiku for yourself:

Thanks to all of our readers who submitted poems! In no particular order, here are some of our favorites:

For now, I’m seeing other quarks, some charming and some strange
But when we meet, I know we will all physics rearrange
For you, stop squark, will soon reveal the standard model as deficient
To me, you are my superpartner; the only one sufficient.
Without you, I just spin one-half of what our world could be
But you and I will couple soon in perfect symmetry.
All fundamental forces, we are meant to unify
In brilliant theory only love itself could clarify
Now though I may seem hypercharged and strongly interactive,
I must show my true colors if I hope to be attractive.
Without you, I just don’t feel really quite just like a top
But I’m confident I will yet find love in the name of stop.

- Jared Sagoff

The gravity that
Pulls my soul to you dilates:

- Philip Michaels

A Valentine for Two Quarks

Some people wish for one true love,
like dear old Ma and Pa.
That lifestyle’s not for us; we like
our quark ménage à trois.

You see, some like a threesome,
and I love both of you.
No green quark would be seen without
a red quark and a blue.

The sea is full of other quarks,
but darlings, I don’t heed ‘em.
You must believe I don’t exploit
my asymptotic freedom.

And when you pull away from me,
I just can’t take the stress.
My attraction just grows stronger
(coefficient alpha-s).

With you, my life is colourless;
you bring stability.
Without you, I’m unstable,
so I need you, Q.C.D.

I love our quirky, quarky love.
My Valentines, let’s carry on
exchanging gluons wantonly,
and make a little baryon.

- Cheryl Patrick

Will it work this time?
The wavefunction collapses.
Single once again.

- Anonymous

Our hearts were once close; two nucleons held tight
By a force that was strong, and a love that burned bright.
But, that force became weaker as the days faded ‘way,
And with it, our bond began to decay.

I’ve realized that opposites don’t always attract
(Otherwise, the atom would be more compact),
And opposites we were, our differences great,
Continuing this way, we’d annihilate.

We must be entangled - what else can explain
How, though we are distant, you still cause me pain?

We’ve exchanged mediators, but our half-lives were short,
All data suggests we should promptly abort.
Our collision is over, and signatures thereof
Have vanished, leaving us not a quantum of love.

- Peter Voznyuk

Love ignited light,
Eternal and everywhere:
A Cosmic Background

- Akshay Jogoo

Like energy dear
our love will last forever,
theoretically

- Lauren Brennan

## February 13, 2017

### Symmetrybreaking - Fermilab/SLAC

LZ dark matter detector on fast track

Construction has officially launched for the LZ next-generation dark matter experiment.

The race is on to build the most sensitive US-based experiment designed to directly detect dark matter particles. Department of Energy officials have formally approved a key construction milestone that will propel the project toward its April 2020 goal for completion.

The LUX-ZEPLIN experiment, which will be built nearly a mile underground at the Sanford Underground Research Facility in Lead, South Dakota, is considered one of the best bets yet to determine whether theorized dark matter particles known as WIMPs (weakly interacting massive particles) actually exist.

The fast-moving schedule for LZ will help the US stay competitive with similar next-gen dark matter direct-detection experiments planned in Italy and China.

On February 9, the project passed a DOE review and approval stage known as Critical Decision 3, which accepts the final design and formally launches construction.

“We will try to go as fast as we can to have everything completed by April 2020,” says Murdock “Gil” Gilchriese, LZ project director and a physicist at Lawrence Berkeley National Laboratory, the lead lab for the project. “We got a very strong endorsement to go fast and to be first.” The LZ collaboration now has about 220 participating scientists and engineers who represent 38 institutions around the globe.

The nature of dark matter—which physicists describe as the invisible component or so-called “missing mass” in the universe—has eluded scientists since its existence was deduced through calculations by Swiss astronomer Fritz Zwicky in 1933.

The quest to find out what dark matter is made of, or to learn whether it can be explained by tweaking the known laws of physics in new ways, is considered one of the most pressing questions in particle physics.

Successive generations of experiments have evolved to provide extreme sensitivity in the search that will at least rule out some of the likely candidates and hiding spots for dark matter, or may lead to a discovery.

LZ will be at least 50 times more sensitive to finding signals from dark matter particles than its predecessor, the Large Underground Xenon experiment, which was removed from Sanford Lab last year to make way for LZ. The new experiment will use 10 metric tons of ultra-purified liquid xenon to tease out possible dark matter signals.

“The science is highly compelling, so it’s being pursued by physicists all over the world,” says Carter Hall, the spokesperson for the LZ collaboration and an associate professor of physics at the University of Maryland. “It's a friendly and healthy competition, with a major discovery possibly at stake.”

A planned upgrade to the current XENON1T experiment at the National Institute for Nuclear Physics’ Gran Sasso Laboratory in Italy, and China’s planned successor to PandaX-II, are also slated to be leading-edge underground experiments that will use liquid xenon as the medium to seek out a dark matter signal. Both of these projects are expected to have a similar schedule and scale to LZ, though LZ participants are aiming to achieve a higher sensitivity to dark matter than these other contenders.

Hall notes that while WIMPs are a primary target for LZ and its competitors, LZ’s explorations into uncharted territory could lead to a variety of surprising discoveries. “People are developing all sorts of models to explain dark matter,” he says. “LZ is optimized to observe a heavy WIMP, but it’s sensitive to some less-conventional scenarios as well. It can also search for other exotic particles and rare processes.”

LZ is designed so that if a dark matter particle collides with a xenon atom, it will produce a prompt flash of light followed by a second flash of light when the electrons produced in the liquid xenon chamber drift to its top. The light pulses, picked up by a series of about 500 light-amplifying tubes lining the massive tank—over four times more than were installed in LUX—will carry the telltale fingerprint of the particles that created them.
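The ratio of the two flashes is one of the handles analysts use to tell signal-like events from background: in xenon detectors, nuclear recoils (the WIMP-like case) tend to produce a lower ratio of delayed charge signal (S2) to prompt light (S1) than electron recoils do. A toy sketch of that idea follows; the function, variable names, and threshold value are hypothetical, not LZ’s actual calibration:

```python
# Toy two-pulse discrimination for a xenon detector. Real analyses use
# detailed, energy-dependent calibrations; the threshold here is invented.

def classify_event(s1, s2, max_nuclear_ratio=50.0):
    """Classify an event by its S2/S1 ratio: nuclear recoils (WIMP-like)
    tend to sit at lower ratios than electron recoils (background-like)."""
    ratio = s2 / s1
    return "nuclear-recoil-like" if ratio < max_nuclear_ratio else "electron-recoil-like"

print(classify_event(s1=10.0, s2=200.0))   # nuclear-recoil-like (ratio 20)
print(classify_event(s1=10.0, s2=1500.0))  # electron-recoil-like (ratio 150)
```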

When a theorized dark matter particle known as a WIMP collides with a xenon atom, the xenon atom emits a flash of light (gold) and electrons. The flash of light is detected at the top and bottom of the liquid xenon chamber. An electric field pushes the electrons to the top of the chamber, where they generate a second flash of light (red).

SLAC National Accelerator Laboratory

Daniel Akerib, Thomas Shutt and Maria Elena Monzani are leading the LZ team at SLAC National Accelerator Laboratory. The SLAC effort includes a program to purify xenon for LZ by removing krypton, an element that is typically found in trace amounts with xenon after standard refinement processes. “We have already demonstrated the purification required for LZ and are now working on ways to further purify the xenon to extend the science reach of LZ,” Akerib says.

SLAC and Berkeley Lab collaborators are also developing and testing hand-woven wire grids that draw out electrical signals produced by particle interactions in the liquid xenon tank. Full-size prototypes will be operated later this year at a SLAC test platform. “These tests are important to ensure that the grids don't produce low-level electrical discharge when operated at high voltage, since the discharge could swamp a faint signal from dark matter,” Shutt says.

Hugh Lippincott, a Wilson Fellow at Fermi National Accelerator Laboratory and the physics coordinator for the LZ collaboration, says, “Alongside the effort to get the detector built and taking data as fast as we can, we’re also building up our simulation and data analysis tools so that we can understand what we’ll see when the detector turns on. We want to be ready for physics as soon as the first flash of light appears in the xenon.” Fermilab is responsible for implementing key parts of the critical system that handles, purifies, and cools the xenon.

All of the components for LZ are painstakingly measured for naturally occurring radiation levels to account for possible false signals coming from the components themselves. A dust-filtering cleanroom is being prepared for LZ's assembly and a radon-reduction building is under construction at the South Dakota site—radon is a naturally occurring radioactive gas that could interfere with dark matter detection. These steps are necessary to remove background signals as much as possible.

The vessels that will surround the liquid xenon, which are the responsibility of the UK participants of the collaboration, are now being assembled in Italy. They will be built with the world's most ultra-pure titanium to further reduce background noise.

To ensure unwanted particles are not misread as dark matter signals, LZ's liquid xenon chamber will be surrounded by another liquid-filled tank and a separate array of photomultiplier tubes that can measure other particles and largely veto false signals. Brookhaven National Laboratory is handling the production of another very pure liquid, known as a scintillator fluid, that will go into this tank.

The cleanrooms will be in place by June, Gilchriese says, and preparation of the cavern where LZ will be housed is underway at Sanford Lab. Onsite assembly and installation will begin in 2018, he adds, and all of the xenon needed for the project has either already been delivered or is under contract. Xenon gas, which is costly to produce, is used in lighting, medical imaging and anesthesia, space-vehicle propulsion systems, and the electronics industry.

“South Dakota is proud to host the LZ experiment at SURF and to contribute 80 percent of the xenon for LZ,” says Mike Headley, executive director of the South Dakota Science and Technology Authority (SDSTA) that oversees the facility. “Our facility work is underway and we’re on track to support LZ’s timeline.”

UK scientists, who make up about one-quarter of the LZ collaboration, are contributing hardware for most subsystems. Henrique Araújo, from Imperial College London, says, “We are looking forward to seeing everything come together after a long period of design and planning.”

Kelly Hanzel, LZ project manager and a Berkeley Lab mechanical engineer, adds, “We have an excellent collaboration and team of engineers who are dedicated to the science and success of the project.” The latest approval milestone, she says, “is probably the most significant step so far,” as it provides for the purchase of most of the major components in LZ’s supporting systems.

Major support for LZ comes from the DOE Office of Science’s Office of High Energy Physics, the South Dakota Science and Technology Authority, the UK’s Science & Technology Facilities Council, and from collaboration members in South Korea and Portugal.

## February 10, 2017

### Symmetrybreaking - Fermilab/SLAC

Physics love poem challenge

Think you can do better than the Symmetry staff? Send us your poems!

Has the love of your life fallen for particle physics? Let the Symmetry team help you reach their heart—with haiku.

On Valentine’s Day, we will publish a collection of physics-related love poems written by Symmetry staff and—if you are so inclined—by readers like you!

Send your poems (haiku format optional) to letters@symmetrymagazine.org by Monday, February 13, at 10 a.m. Central. If we really like yours, we may send you a prize.

For inspiration, consider the following:

Artwork by Sandbox Studio, Chicago

## February 07, 2017

### Symmetrybreaking - Fermilab/SLAC

What ended the dark ages of the universe?

New experiments will help astronomers uncover the sources that helped make the universe transparent.

When we peer through our telescopes into the cosmos, we can see stars and galaxies reaching back billions of years. This is possible only because the intergalactic medium we’re looking through is transparent. This was not always the case.

Around 380,000 years after the Big Bang came recombination, when the hot mass of particles that made up the universe cooled enough for electrons to pair with protons, forming neutral hydrogen. This brought on the dark ages, during which the neutral gas in the intergalactic medium absorbed most of the high-energy photons around it, making the universe opaque to these wavelengths of light.

Then, a few hundred million years later, new sources of energetic photons appeared, stripping hydrogen atoms of their electrons and returning them to their ionized state, ultimately allowing light to easily travel through the intergalactic medium. After this era of reionization was complete, the universe was fully transparent once again.

Physicists are using a variety of methods to search for the sources of reionization, and finding them will provide insight into the first galaxies, the structure of the early universe and possibly even the properties of dark matter.

### Energetic sources

Current research suggests that most—if not all—of the ionizing photons came from the formation of the first stars and galaxies. “The reionization process is basically a competition between the rate at which stars produce ionizing radiation and the recombination rate in the intergalactic medium,” says Brant Robertson, a theoretical astrophysicist at the University of California, Santa Cruz.

However, astronomers have yet to find these early galaxies, leaving room for other potential sources. The first stars alone may not have been enough. “There are undoubtedly other contributions, but we argue about how important those contributions are,” Robertson says.

Active galactic nuclei, or AGN, could have been a source of reionization. AGN are luminous bodies, such as quasars, that are powered by black holes and release ultraviolet radiation and X-rays. However, scientists don’t yet know how abundant these objects were in the early universe.

Another, more exotic possibility, is dark matter annihilation. In some models of dark matter, particles collide with each other, annihilating and producing matter and radiation. “If through this channel or something else we could find evidence for dark matter annihilation, that would be fantastically interesting, because it would immediately give you an estimate of the mass of the dark matter and how strongly it interacts with Standard Model particles,” says Tracy Slatyer, a particle physicist at MIT.

Dark matter annihilation and AGN may have also indirectly aided reionization by providing extra heat to the universe.

### Probing the cosmic dawn

To test their theories of the course of cosmic reionization, astronomers are probing this epoch in the history of the universe using various methods including telescope observations, something called “21-centimeter cosmology” and probing the cosmic microwave background.

Astronomers have yet to find evidence of the most likely source of reionization—the earliest stars—but they’re looking.

By assessing the luminosity of the first galaxies, physicists could estimate how many ionizing photons they could have released. “[To date] there haven't been observations of the actual galaxies that are reionizing the universe—even Hubble can't deliver any of those—but the hope is that the James Webb Space Telescope can,” says John Wise, an astrophysicist at Georgia Tech.

Some of the most telling information will come from 21-centimeter cosmology, so called because it studies 21-centimeter radio waves. Neutral hydrogen emits radio waves at this wavelength; ionized hydrogen does not. Experiments such as the forthcoming Hydrogen Epoch of Reionization Array will detect neutral hydrogen using radio telescopes tuned to the corresponding frequency. This could provide clinching evidence about the sources of reionization.
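For reference, the numbers behind the name: a wavelength of about 21.1 centimeters corresponds to a rest frequency near 1420 MHz (ν = c/λ), and cosmological redshift lowers the observed frequency by a factor of 1 + z. A quick check, with the redshift value chosen purely for illustration:

```python
# The 21-centimeter line in numbers. The redshift z = 9 below is an
# illustrative choice, roughly in the reionization era.

C = 2.998e8            # speed of light, m/s
WAVELENGTH = 0.211     # ~21.1 cm, in meters

nu_rest_mhz = C / WAVELENGTH / 1e6   # rest frequency, MHz
print(round(nu_rest_mhz))            # 1421

z = 9                                # assumed redshift
nu_observed_mhz = nu_rest_mhz / (1 + z)
print(round(nu_observed_mhz))        # 142
```

This is why reionization-era radio arrays observe at roughly a tenth of the laboratory line frequency, far below 1420 MHz.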

“The basic idea with 21-centimeter cosmology is to not look at the galaxies themselves, but to try to make direct measurements of the intergalactic medium—the hydrogen between the galaxies,” says Adrian Liu, a Hubble fellow at UC Berkeley. “This actually lets you, in principle, directly see reionization, [by seeing how] it affects the intergalactic medium.”

By locating where the universe is ionized and where it is not, astronomers can create a map of how neutral hydrogen is distributed in the early universe. “If galaxies are doing it, then you would have ionized bubbles [around them]. If it is dark matter—dark matter is everywhere—so you're ionizing everywhere, rather than having bubbles of ionizing gas,” says Steven Furlanetto, a theoretical astrophysicist at the University of California, Los Angeles.

Physicists can also learn about sources of reionization by studying the cosmic microwave background, or CMB.

When an atom is ionized, the freed electron can scatter photons of the CMB, leaving a measurable imprint. Physicists can use this information to determine when reionization happened and put constraints on how many photons were needed to complete the process.

For example, physicists reported last year that data released from the Planck satellite was able to lower its estimate of how much ionization was caused by sources other than galaxies. “Just because you could potentially explain it with star-forming galaxies, it doesn't mean that something else isn't lurking in the data,” Slatyer says. “We are hopefully going to get much better measurements of the reionization epoch using experiments like the 21-centimeter observations.”

It is still too early to rule out alternative explanations for the sources of reionization, since astronomers are still at the beginning of uncovering this era in the history of our universe, Liu says. “I would say that one of the most fun things about working in this field is that we don't know exactly what happened.”

## February 06, 2017

### John Baez - Azimuth

Saving Climate Data (Part 5)

There’s a lot going on! Here’s a news roundup. I will separately talk about what the Azimuth Climate Data Backup Project is doing.

### Tweaking the EPA website

Scientists are keeping track of how the Trump administration is changing the Environmental Protection Agency website, with before-and-after photos, and analysis:

• Brian Kahn, Behold the “tweaks” Trump has made to the EPA website (so far), Natural Resources Defense Council blog, 3 February 2017.

All of this would be nothing compared to the new bill to eliminate the EPA, or Myron Ebell’s plan to fire most of the people working there:

• Joe Davidson, Trump transition leader’s goal is two-thirds cut in EPA employees, Washington Post, 30 January 2017.

If you want to keep track of this battle, I recommend getting a 30-day free subscription to this online magazine:

### Taking animal welfare data offline

The Trump team is taking animal-welfare data offline. The US Department of Agriculture will no longer make lab inspection results and violations publicly available, citing privacy concerns:

• Sara Reardon, US government takes animal-welfare data offline, Nature Breaking News, 3 February 2017.

A new bill would prevent the US government from providing access to geospatial data if it helps people understand housing discrimination. It goes like this:

Notwithstanding any other provision of law, no Federal funds may be used to design, build, maintain, utilize, or provide access to a Federal database of geospatial information on community racial disparities or disparities in access to affordable housing.

For more on this bill, and the important ways in which such data has been used, see:

• Abraham Gutman, Scott Burris, and the Temple University Center for Public Health Law Research, Where will data take the Trump administration on housing?, Philly.com, 1 February 2017.

### The EDGI fights back

The Environmental Data and Governance Initiative or EDGI is working to archive public environmental data. They’re helping coordinate data rescue events. You can attend one and have fun eating pizza with cool people while saving data:

• 3 February 2017, Portland
• 4 February 2017, New York City
• 10-11 February 2017, Austin Texas
• 11 February 2017, U. C. Berkeley, California
• 18 February 2017, MIT, Cambridge Massachusetts
• 18 February 2017, Haverford, Pennsylvania
• 18-19 February 2017, Washington DC
• 26 February 2017, Twin Cities, Minnesota

Or, work with EDGI to organize your own data rescue event! They provide some online tools to help download data.

I know there will also be another event at UCLA, so the above list is not complete, and it will probably change and grow over time. Keep up-to-date at their site:

### Scientists fight back

The pushback is so big it’s hard to list it all! For now I’ll just quote some of this article:

• Tabitha Powledge, The gag reflex: Trump info shutdowns at US science agencies, especially EPA, 27 January 2017.

THE PUSHBACK FROM SCIENCE HAS BEGUN

Predictably, counter-tweets claiming to come from rebellious employees at the EPA, the Forest Service, the USDA, and NASA sprang up immediately. At The Verge, Rich McCormick says there’s reason to believe these claims may be genuine, although none has yet been verified. A lovely head on this post: “On the internet, nobody knows if you’re a National Park.”

At Hit&Run, Ronald Bailey provides handles for several of these alt tweet streams, which he calls “the revolt of the permanent government.” (That’s a compliment.)

Bailey argues, “with exception perhaps of some minor amount of national security intelligence, there is no good reason that any information, data, studies, and reports that federal agencies produce should be kept from the public and press. In any case, I will be following the Alt_Bureaucracy feeds for a while.”

At NeuroDojo, Zen Faulkes posted on how to demand that scientific societies show some backbone. “Ask yourself: ‘Have my professional societies done anything more political than say, please don’t cut funding?’ Will they fight?” he asked.

Scientists associated with the group 500 Women Scientists donned lab coats and marched in DC as part of the Women’s March on Washington the day after Trump’s Inauguration, Robinson Meyer reported at the Atlantic. A wildlife ecologist from North Carolina told Meyer, “I just can’t believe we’re having to yell, ‘Science is real.’”

Taking a cue from how the Women’s March did its social media organizing, other scientists who want to set up a Washington march of their own have put together a closed Facebook group that claims more than 600,000 members, Kate Sheridan writes at STAT.

The #ScienceMarch Twitter feed says a date for the march will be posted in a few days. [The march will be on 22 April 2017.] The group also plans to release tools to help people interested in local marches coordinate their efforts and avoid duplication.

At The Atlantic, Ed Yong describes the political action committee 314Action. (314 = the first three digits of pi.)

Among other political activities, it is holding a webinar on Pi Day—March 14—to explain to scientists how to run for office. Yong calls 314Action the science version of Emily’s List, which helps pro-choice candidates run for office. 314Action says it is ready to connect potential candidate scientists with mentors—and donors.

Other groups may be willing to step in when government agencies wimp out. A few days before the Inauguration, the Centers for Disease Control and Prevention abruptly and with no explanation cancelled a 3-day meeting on the health effects of climate change scheduled for February. Scientists told Ars Technica’s Beth Mole that CDC has a history of running away from politicized issues.

One of the conference organizers from the American Public Health Association was quoted as saying nobody told the organizers to cancel.

I believe it. Just one more example of the chilling effect on global warming. In politics, once the Dear Leader’s wishes are known, some hirelings will rush to gratify them without being asked.

The APHA guy said they simply wanted to head off a potential last-minute cancellation. Yeah, I guess an anticipatory pre-cancellation would do that.

But then—Al Gore to the rescue! He is joining with a number of health groups—including the American Public Health Association—to hold a one-day meeting on the topic Feb 16 at the Carter Center in Atlanta, CDC’s home base. Vox’s Julia Belluz reports that it is not clear whether CDC officials will be part of the Gore rescue event.

### The Sierra Club fights back

The Sierra Club, of which I’m a proud member, is using the Freedom of Information Act (FOIA) to battle, or at least slow, the deletion of government databases. They wisely started even before Trump took power:

• Jennifer A Dlouhy, Fearing Trump data purge, environmentalists push to get records, Bloomberg Markets, 13 January 2017.

Here’s how the strategy works:

U.S. government scientists frantically copying climate data they fear will disappear under the Trump administration may get extra time to safeguard the information, courtesy of a novel legal bid by the Sierra Club.

The environmental group is turning to open records requests to protect the resources and keep them from being deleted or made inaccessible, beginning with information housed at the Environmental Protection Agency and the Department of Energy. On Thursday [January 9th], the organization filed Freedom of Information Act requests asking those agencies to turn over a slew of records, including data on greenhouse gas emissions, traditional air pollution and power plants.

The rationale is simple: Federal laws and regulations generally block government agencies from destroying files that are being considered for release. Even if the Sierra Club’s FOIA requests are later rejected, the record-seeking alone could prevent files from being zapped quickly. And if the records are released, they could be stored independently on non-government computer servers, accessible even if other versions go offline.

## February 02, 2017

### Symmetrybreaking - Fermilab/SLAC

The Escaramujo Project delivered detector technology by van to eight universities in Latin America.

Professors and students of physics in Latin America have much to offer the world of physics. But for those interested in designing and building the complex experiments needed to gather physics data, hands-on experimentation in much of Central and South America has been lacking. The Escaramujo Project aimed to fill that gap by bringing basic components to students, who could then assemble them into fully functional detectors.

“It was something completely new,” says Luis Rodolfo Pérez Sánchez, a student at the Universidad Autónoma de Chiapas, Mexico, who is writing his thesis based on measurements taken with the detector. “Until now, there was no device at the university where one could work directly with their hands.”

Each group of students built a detector, which they used to measure cosmic-ray muons (particles coming from space). But they did more than that. They used a Linux open-source computer operating system for the first time, calibrated the equipment, plotted data using the software ROOT and became part of an international community. The students used their detectors to participate in International Cosmic Day, an annual event where scientists around the world measure cosmic rays and share their data.

The Escaramujo Project is led by Federico Izraelevitch, who worked at Fermi National Accelerator Laboratory near Chicago during its planning stages and is now a professor at Instituto Dan Beninson in Argentina. During the project, Izraelevitch and his wife, Eleonora, traveled with three canine companions on a road trip from Chicago to Buenos Aires, stopping to teach workshops in Mexico, Guatemala, Costa Rica, Colombia, Ecuador, Peru and Bolivia. Many nights found them in spots with no tourist lodging or even places to camp with their van.

“People received us with a smile and gave us a cup of coffee, or food, or whatever we needed at the time,” Izraelevitch says. “People are amazing.”

Federico and Eleonora Izraelevitch traveled by van from Chicago to Buenos Aires.

Escaramujo Project

In many locations, students took their detector on a field trip shortly after assembling it. The group in Pasto, Colombia, turned theirs into a muon telescope and carted it to the nearby Galeras volcano, where a kind local lent them a power supply to get things running. They studied an effect of the volcano: muon attenuation, or weakening of the muon signal. Students in La Paz, Bolivia, placed the detector in the back of a van and drove it to a lofty observatory, measuring how the muon flux changed with altitude.

The Escaramujo Project forged direct connections between students at eight universities, who can now use their detectors to collect and share data with other Escaramujo participants.

“This state is one of the poorest states in Mexico,” says Karen Caballero, a professor at UNACH who brought the Escaramujo Project to the university. “The students in Chiapas don’t have the opportunity to participate in international initiatives, so this has been very, very important for them.”

Caballero says there are plans for the full Escaramujo cohort to use their detectors to calibrate expansions of the Latin American Giant Observatory, used for an experiment that began in 2005. LAGO uses multiple sites throughout Central and South America to study gamma-ray bursts, some of the most powerful explosions in the universe, as well as space weather.

While the workshops for the program wrapped up in early 2016, Izraelevitch says he hopes to visit more universities and lead more workshops in the future.

“Hopefully all these sites can continue growing and working as a collaboration in the future,” he says. “These people are capable and have all the knowledge and enthusiasm for being part of a major, first-class experiment.”

Students at the Universidad Autónoma de Chiapas in Mexico built a detector with the Escaramujo Project.

Federico Izraelevitch

## January 30, 2017

### Symmetrybreaking - Fermilab/SLAC

Sign of a long-sought asymmetry

A result from the LHCb experiment shows what could be the first evidence of matter and antimatter baryons behaving differently.

A new result from the LHCb experiment at CERN could help explain why our universe is made of matter and not antimatter.

Matter particles, such as protons and electrons, all have an antimatter twin. These antimatter twins appear identical in nearly every respect except that their electric and magnetic properties are opposite.

Cosmologists predict that the Big Bang produced an equal amount of matter and antimatter, which is a conundrum because matter and antimatter annihilate into pure energy when they come into contact. Particle physicists are looking for any minuscule differences between matter and antimatter, which might explain why our universe contains planets and stars and not a sizzling broth of light and energy instead.

The Large Hadron Collider doesn’t just generate Higgs bosons during its high-energy proton collisions—it also produces antimatter. By comparing the decay patterns of matter particles with their antimatter twins, the LHCb experiment is looking for minuscule differences in how these rival particles behave.

“Many antimatter experiments study particles in a very confined and controlled environment,” says Nicola Neri, a researcher at Italian research institute INFN and one of the leaders of the study. “In our experiment, the antiparticles flow and decay, so we can examine other properties, such as the momenta and trajectories of their decay products.”

The result, published today in Nature Physics, examined the decay products of matter and antimatter baryons (particles containing three quarks) and looked at the spatial distribution of the resulting daughter particles within the detector. Specifically, Neri and his colleagues looked for a very rare decay of the lambda-b particle (which contains an up quark, a down quark and a bottom quark) into a proton and three charged pions (each made of a quark and an antiquark).

Based on data from 6000 decays, Neri and his team found a difference in the spatial orientation of the daughter particles of the matter and antimatter lambda-bs.

“This is the first time we’ve seen evidence of matter and antimatter baryons behaving differently,” Neri says. “But we need more data before we can make a definitive claim.”

Statistically, the result has a significance of 3.3 sigma, which means the chance that it is just a statistical fluctuation (and not a new property of nature) is about one in a thousand. The traditional threshold for discovery is 5 sigma, which equates to odds of one in more than a million.
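As a rough check of those odds (my own arithmetic, not from the article): for a Gaussian distribution, the two-sided probability of a fluctuation beyond n sigma is erfc(n/√2), while discovery claims in particle physics conventionally quote the one-sided tail probability.

```python
import math

def p_two_sided(n_sigma):
    # Probability of a Gaussian fluctuation beyond n_sigma in either direction
    return math.erfc(n_sigma / math.sqrt(2))

def p_one_sided(n_sigma):
    # One-sided tail probability, the convention used for discovery claims
    return 0.5 * math.erfc(n_sigma / math.sqrt(2))

# 3.3 sigma: roughly a one-in-a-thousand fluctuation (two-sided)
print(f"3.3 sigma, two-sided: 1 in {1 / p_two_sided(3.3):,.0f}")
# 5 sigma: the discovery threshold, roughly one in a few million (one-sided)
print(f"5.0 sigma, one-sided: 1 in {1 / p_one_sided(5.0):,.0f}")
```

With these conventions, 3.3 sigma indeed comes out near one in a thousand, and 5 sigma near one in 3.5 million, matching the odds quoted above.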

For Neri, this result is more than early evidence of a never before seen process—it is a key that opens new research opportunities for LHCb physicists.

“We proved that we are there,” Neri says, “Our experiment is so sensitive that we can start systematically looking for this matter-antimatter asymmetry in heavy baryons at LHCb. We have this capability, and we will be able to do even more after the detector is upgraded next year.”

### Matt Strassler - Of Particular Significance

Penny Wise, Pound Foolish

The cost to American science and healthcare of the administration’s attack on legal immigration is hard to quantify.  Maybe it will prevent a terrorist attack, though that’s hard to say.  What is certain is that American faculty are suddenly no longer able to hire the best researchers from the seven countries currently affected by the ban.  Numerous top scientists suddenly cannot travel here to share their work with American colleagues; or if already working here, cannot now travel abroad to learn from experts elsewhere… not to mention visiting their families.  Those caught outside the country cannot return, hurting the American laboratories where they are employed.

You might ask what the big deal is; it’s only seven countries, and the ban is temporary. Well (even ignoring the outsized role of Iran, whose many immigrant engineers and scientists are here because they dislike the ayatollahs and their alternative facts), the impact extends far beyond these seven.

The administration’s tactics are chilling.  Scientists from certain countries now fear that one morning they will discover their country has joined the seven, so that they too cannot hope to enter or exit the United States.  They will decide now to turn down invitations to work in or collaborate with American laboratories; it’s too risky.  At the University of Pennsylvania, I had a Pakistani postdoc, who made important contributions to our research effort. At the University of Washington we hired a terrific Pakistani mathematical physicist. Today, how could I advise someone like that to accept a US position?

Even those not worried about being targeted may decide the US is not the open and welcoming country it used to be.  Many US institutions are currently hiring people for the fall semester.  A lot of bright young scientists — not just Muslims from Muslim-majority nations — will choose to go instead to Canada, to the UK, and elsewhere, leaving our scientific enterprise understaffed.

Well, but this is just about science, yes?  Mostly elite academics presumably — it won’t affect the average person.  Right?

Wrong.  It will affect many of us, because it affects healthcare, and in particular, hospitals around the country.  I draw your attention to an article written by an expert in that subject:

http://www.cnn.com/2017/01/29/opinions/trump-ban-impact-on-health-care-vox/index.html

and I’d like to quote from the article (highlights mine):

“Our training hospitals posted job listings for 27,860 new medical graduates last year alone, but American medical schools only put out 18,668 graduates. International physicians percolate throughout the entire medical system. To highlight just one particularly intense specialty, fully 30% of American transplant surgeons started their careers in foreign medical schools. Even with our current influx of international physicians as well as steadily growing domestic medical school spots, the Association of American Medical Colleges estimates that we’ll be short by up to 94,700 doctors by 2025.

The President’s decision is as ill-timed as it was sudden. The initial 90-day order encompasses Match Day, the already anxiety-inducing third Friday in March when medical school graduates officially commit to their clinical training programs. Unless the administration or the courts quickly fix the mess President Trump just created, many American hospitals could face staffing crises come July when new residents are slated to start working.”

If you or a family member has to go into the hospital this summer and gets sub-standard care due to a lack of trained residents and doctors, you know who to blame.  Terrorism is no laughing matter, but you and your loved ones are vastly more likely to die due to a medical error than due to a terrorist.  It’s hard to quantify exactly, but it is clear that over the years since 2000, the number of Americans dying of medical errors is in the millions, while the number who died from terrorism is just over three thousand during that period, almost all of whom died on 9/11 in 2001. So addressing the terrorism problem by worsening a hospital problem probably endangers Americans more than it protects them.

Such is the problem of relying on alternative facts in place of solid scientific reasoning.

Filed under: Science and Modern Society Tagged: immigration

## January 26, 2017

### Symmetrybreaking - Fermilab/SLAC

The robots of CERN

TIM and other mechanical friends tackle jobs humans shouldn’t.

The Large Hadron Collider is the world’s most powerful particle accelerator. Buried in the bedrock beneath the Franco-Swiss border, it whips protons through its nearly 2000 magnets 11,000 times every second.
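A quick sanity check of that figure (my own arithmetic, not from the article): at roughly 11,000 laps per second around the ~26.7 km ring, the protons are moving at essentially the speed of light.

```python
circumference_m = 26_659   # approximate LHC ring length in meters
laps_per_second = 11_000   # figure quoted above (the precise value is about 11,245)
c = 299_792_458            # speed of light, m/s

speed = circumference_m * laps_per_second
print(f"{speed:.3e} m/s = {speed / c:.3f} c")  # close to the speed of light
```

With the rounded numbers this lands just below c, as it must for massive particles; the exact revolution frequency of 11,245 Hz puts the protons at 99.9999991% of light speed.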

As you might expect, the subterranean tunnel which houses the LHC is not always the friendliest place for human visitors.

“The LHC contains 120 tons of liquid helium kept at 1.9 Kelvin,” says Ron Suykerbuyk, an LHC operator. “This cooling system is used to keep the electromagnets in a superconducting state, capable of carrying up to 13,000 amps of current through their wires. Even with all the safety systems we have in place, we prefer to limit our underground access when the cryogenic systems are on.”

But as with any machine, sometimes the LHC needs attention: inspections, repairs, tuning. Access to the LHC is so tightly controlled that even under perfect conditions, it takes 30 minutes after the beam is shut off for the first humans to even arrive at the entrance to the tunnel.

But the robotics team at CERN asks: Why do we need humans for this job anyway?

Enter TIM—the Train Inspection Monorail. TIM is a chain of wagons, sensors and cameras that snakes along a track bolted to the LHC tunnel’s ceiling. In the 1990s, the track held a cable car that transported machinery and people around the Large Electron–Positron Collider, the first inhabitant of the tunnel. With the installation of the LHC, there was no longer room for both the accelerator and the cable car, so the monorail was reconfigured for the sleeker TIM robots.

There are currently two TIM robots and plans to install two more in the next couple of years. These four TIM robots will patrol the different quadrants of the LHC, enabling operators to reach any part of the 17-mile tunnel within 20 minutes. As TIM slithers along the ceiling, an automated eye keeps watch for any changes in the tunnel and a robotic arm drops down to measure radiation. Other sensors measure the temperature, oxygen level and cell phone reception.

“In addition to performing environmental measurements, TIM is a safety system which can be the eyes and ears for members of the CERN Fire Brigade and operations team,” says Mario Di Castro, the leader of CERN’s robotics team. “Eventually we’d like to equip TIM with a fire extinguisher and other physical operations so that it can be the first responder in case of a crisis.”

TIM isn’t alone in its mission to provide a safer environment for its human coworkers. CERN also has three teleoperated robots that can assess troublesome areas, provide assessments of hazards and carry tools.

The main role of these three robots is to access radioactive areas.

Radiation is a type of energy carried by free-moving subatomic particles. As protons race around CERN’s accelerator complex, special equipment called collimators constrict their passage and absorb particles that have wandered away from the center of the beam pipe. This trimming process ensures that the proton stream is compact and tidy.

After a couple of weeks of operation, the collimators have absorbed so many particles that they reemit their energy—even after the beam is shut off. There is no radiation hazard to humans unless they are within a few meters of the collimators, and because the machine is fully automated, humans rarely need to perform check-ups. But occasionally, material in these restricted areas requires attention.

By replacing humans with robots, engineers can quickly fix small problems without needing to wait long periods of time for the radiation to dissipate or sending personnel into potentially unsafe environments.

“CERN robots help perform repetitive and dangerous tasks that humans either prefer to avoid or are unable to do because of hazards, size constraints or the extreme environments in which they take place, such as CERN’s experimental areas,” Di Castro says.

About half the time, these tasks are very simple, such as performing a visual assessment of the area or taking measurements. “Robots can replace humans for these simple tasks and improve the quality and timeliness of work,” he says.

Last year the SPS accelerator (which starts the acceleration process for particles that eventually move to the LHC) needed an oil refill to keep its parts running smoothly. But the accelerator itself was too radioactive for humans to visit, so one of the CERN robotics team’s robots rolled in gripping an oil can in its flexible arm.

In June 2016, scientists needed to dispose of radioactive cobalt, cesium and americium they had used to calibrate radiation sensors. Two CERN robots cycled in with several tools, extracted the radioactive sources and packed them in thick protective containers for removal.

Over the last two years, these two robots have performed more than 30 interventions, saving humans both time and radiation doses.

As the LHC increases its power and collision rate over the next decade, Di Castro and his team are preparing these robot companions to take on greater capabilities. “We are putting a strong commitment into adapting and developing existing robotic solutions to fit CERN’s evolving needs,” Di Castro says.
