Particle Physics Planet


February 22, 2017

Christian P. Robert - xi'an's og

ABC’ory in Banff [17w5025]

Cascade mountain, Banff, March 18, 2012

Another great day of talks and discussions at BIRS! We continued on the themes of the workshop, moving between the further validation of those approximation techniques and the devising of ever more approximate solutions for ever more complex problems. Among the points that became clearer to me through discussion was a realisation that the synthetic likelihood perspective is not that far away from our assumptions in the consistency paper, and that a logistic version of the approach can be constructed as well. A notion I had not met before (or have forgotten I had met) is that of early rejection ABC, which should be investigated more thoroughly as it should bring considerable improvement in computing time (with the caveats of calibrating the acceptance step before producing the learning sample and of characterising the output). Both Jukka Corander and Ewan Cameron reminded us of the case of models that take minutes or hours to produce one single dataset. (In his talk on some challenging applications, Jukka Corander chose to move from socks to boots!) And Jean-Michel Marin produced an illuminating if sobering experiment on the lack of proper Bayesian coverage by ABC solutions. (It appears that Ewan’s video includes a long empty moment when we went out for the traditional group photo, missing the end of his talk.)
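As an aside, here is a minimal sketch of the early rejection idea (mine, in Python, with a made-up chunked simulator and chunk-mean summaries, not anyone's actual implementation): abort a simulation as soon as the partial distance already exceeds the tolerance, so hopeless parameter draws never pay the full simulation cost.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_chunks(theta, n_chunks=20, chunk=50):
    """Hypothetical simulator that produces the dataset in chunks,
    so a run can be aborted before completion."""
    for _ in range(n_chunks):
        yield rng.normal(theta, 1.0, size=chunk)

def early_rejection_abc(obs_chunks, prior_draw, eps, n_draws=10_000):
    """Plain rejection ABC, except that the simulation stops as soon
    as the accumulated distance alone already exceeds eps."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_draw()
        dist = 0.0
        for sim, obs in zip(simulate_chunks(theta), obs_chunks):
            dist += (sim.mean() - obs.mean()) ** 2
            if dist > eps:          # early rejection: abort the simulation
                break
        else:
            accepted.append(theta)  # full dataset simulated and within eps
    return np.array(accepted)

obs = list(simulate_chunks(0.5))    # pretend these are the observed data
post = early_rejection_abc(obs, lambda: rng.uniform(-5, 5), eps=2.0)
print(post.mean(), post.std())      # crude ABC posterior summary
```

Since the accumulated distance can only grow, the early stop accepts exactly the same draws as the full computation; the calibration caveat above is that eps must be fixed before such truncated runs are used as a learning sample.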


Filed under: Mountains, pictures, Statistics, Travel, University life Tagged: ABC, ABC convergence, Banff, Banff International Research Station for Mathematical Innovation, BIRS, Canada, Canadian Rockies, consistency of ABC methods, coverage, synthetic likelihood

by xi'an at February 22, 2017 11:17 PM

Emily Lakdawalla - The Planetary Society Blog

Wonderful potentially habitable worlds around TRAPPIST-1
Scientists have found seven Earth-size planets orbiting a star just 40 light years away. Three lie in the habitable zone and could have water on their surfaces.

February 22, 2017 06:00 PM

Peter Coles - In the Dark

Robert Grosseteste and the Ordered Universe

Tomorrow I’m off to the historic city of Lincoln to give a public lecture, the inaugural Robert Grosseteste Lecture on Astrophysics/Cosmology.

This new series of lectures is named in honour of Robert Grosseteste (c. 1175 – 9 October 1253), a former Bishop of Lincoln, who (among many other things) played a key role in the development of the Western scientific tradition. His De Luce seu de Inchoatione Formarum (“On Light or the Beginning of the Forms”), written around 1220, includes pioneering discussions of cosmogony, which contain many ideas that resonate with what I shall be talking about in my lecture. In particular, De Luce explores the nature of matter and the cosmos. Seven centuries before the Big Bang theory, Grosseteste described the birth of the Universe in an explosion and the crystallisation of matter to form stars and planets in a set of nested spheres around Earth. It therefore probably represents the first attempt to describe the ordered system of the Heavens and Earth using a single set of physical laws.

Anyway, this led me to an interesting website about an interdisciplinary project that involves discussing Robert Grosseteste in the context of mediaeval science, called “Ordered Universe”. Here’s an interesting video from that site, which features both historians and scientists.


by telescoper at February 22, 2017 05:16 PM

ZapperZ - Physics and Physicists

Dark Energy - What Is It?
I've posted many articles on Dark Energy. But here's another one, aimed at the general public, that is actually quite instructive. It describes not only why we think there is dark energy, but also the puzzling phenomenon of the apparent "switching" from one regime to another.

Please take note that, while this idea seems to have been floating around for a while, the study of Dark Energy is very much still in its infancy. The general public may find it hard to understand, but we really do need a lot more experimental observations on this, and that is easier said than done. Detection is not easy and requires years of design and work, not to mention funding!

Zz.

by ZapperZ (noreply@blogger.com) at February 22, 2017 04:43 PM

ZapperZ - Physics and Physicists

Mildred Dresselhaus
An absolute giant in physics, especially in condensed matter physics, Mildred Dresselhaus passed away recently at the age of 86.

Besides all of her accomplishments in physics, she was truly a trail-blazer for women in science, and in physics in particular, with all of her "firsts". She, along with Vera Rubin and Deborah Jin, was among the strongest candidates to break the drought of women winning the Nobel Prize in physics. Now we have lost all three.

Zz.

by ZapperZ (noreply@blogger.com) at February 22, 2017 02:32 PM

Peter Coles - In the Dark

Cardiff, City of Cycling?

Two recent news items about Cardiff caught my attention so I thought I’d do a quick post. The first piece was about the terrible state of traffic congestion in the city. This doesn’t affect me directly as I normally walk to work and back, but it has definitely got much worse in the last few years. The roads are regularly gridlocked, a situation made worse by the interminable and apparently pointless roadworks going on everywhere as well as absurdly slow and dysfunctional traffic lights. There’s a common view around these parts that this is being allowed to happen – or even engineered – so that Cardiff City Council can justify the introduction of congestion charging. This would be an unpopular move among motorists, but I think a congestion charge would not be a bad idea at all, as what the city really needs is to reduce the number of motor vehicles on its streets, to deal with the growing problem of pollution and long journey times.

One day, about six years ago, I was almost run over three different times by three different vehicles. The first was near the car park in Sophia Gardens, where there are signs and road markings clearly indicating a speed limit of 5 mph but where the normal speed of cars is probably more like 35; the guy who nearly killed me was doing about 60.

Next, in Bute Park, a heavy lorry belonging to the Council, engaged in some sort of “tree-management” business, thundered along the footpath past me. These paths used to be marked 5mph too, but the Council removed all the signs when it decided to build a huge road into the Park and encourage more vehicles to drive around inside. The lorry wasn’t going as fast as the Boy Racer of Sophia Gardens, but the size of the truck made it just as scary.

Finally, crossing on a green light at the pedestrian crossing at Park Place, I was narrowly missed by another car, whose driver had clearly jumped a red light to get onto the dual carriageway (Dumfries Place) leading to Newport Road.

I have to say things like this aren’t at all unusual, but that is the only time I’ve had three close encounters in one day! Although most car drivers behave responsibly, there seems to be a strong concentration of idiots in Cardiff whose antics are exacerbated by the hare-brained Highways Department of the local council. There are many things to enjoy about living in Cardiff, and the quality of life here is very good for a wide range of reasons, but of all the cities I’ve lived in it is by a long way the least friendly to pedestrians and cyclists.

Which brings me to the second news item, about Cardiff City Council’s ambitious new Cycling Strategy, which aims to double the number of trips made by bicycle over the next ten years. Even that still wouldn’t reach the level of Cambridge, where 30% of all journeys in the city are made by bicycle.

Cardiff has a long way to go to match Cambridge and further still to be like Copenhagen, one of the loveliest and most livable cities I’ve ever experienced, partly because of its traffic policies.

In the interest of balance I should also point out that I was once actually hit on a pedestrian crossing in Cardiff by a bicycle steered by a maniac who went through a red light. In this case, however, I did manage to push him off his bike as he tried to get away, so he ended up more seriously hurt than I was. I was hoping that a friendly car would run over his bike, which was lying in the road, but sadly that didn’t happen.

I hope in their desire to increase the number of cyclists, the town planners don’t forget those of us who travel on foot!


by telescoper at February 22, 2017 01:42 PM

The n-Category Cafe

Functional Equations III: Explaining Relative Entropy

Much of this functional equations course is about entropy and its cousins, such as means, norms, and measures of diversity. So I thought it worth spending one session talking purely about ways of understanding entropy, without actually proving anything about it. I wanted especially to explain how to think about relative entropy — also known as relative information, information gain, and Kullback-Leibler divergence.

My strategy was to do this via coding theory. Information is a slippery concept, and reasoning about it takes some practice. But putting everything in the framework of coding makes it much more concrete. The central point is:

The entropy of a distribution is the mean number of bits per symbol in an optimal encoding.
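As a quick numerical illustration of this reading (a toy check of mine, not from the notes, with dyadic probabilities so that optimal code lengths are whole numbers of bits): entropy is the mean length of the optimal code, and relative entropy is the mean number of extra bits you pay for using a code optimised for the wrong distribution.

```python
import math

def entropy(p):
    """H(p): mean number of bits per symbol in an optimal encoding."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def mean_code_length(p, q):
    """Mean bits per symbol when symbols occur with frequencies p
    but the code was optimised for q (code lengths -log2 q_i)."""
    return -sum(pi * math.log2(qi) for pi, qi in zip(p, q))

p = [0.5, 0.25, 0.125, 0.125]  # true symbol frequencies
q = [0.25, 0.25, 0.25, 0.25]   # distribution the code was built for

print(entropy(p))                           # 1.75 bits per symbol
print(mean_code_length(p, q))               # 2.0 bits per symbol
print(mean_code_length(p, q) - entropy(p))  # 0.25 bits: relative entropy D(p||q)
```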

All this and more is in the course notes. The part we did today starts on page 11.

Next week: relative entropy is the only quantity that satisfies a couple of reasonable properties.

by leinster (Tom.Leinster@ed.ac.uk) at February 22, 2017 12:22 AM

February 21, 2017

Christian P. Robert - xi'an's og

ABC with kernelised regression

sunset from the Banff Centre, Banff, Canada, March 21, 2012

The exact title of the paper by Jovana Mitrovic, Dino Sejdinovic, and Yee Whye Teh is DR-ABC: Approximate Bayesian Computation with Kernel-Based Distribution Regression. It appeared last year in the proceedings of ICML. The idea is to build ABC summaries by way of reproducing kernel Hilbert spaces (RKHS), regressing from such embeddings to the “optimal” choice of summary statistics by kernel ridge regression, with the possibility of deriving summary statistics for quantities of interest rather than for the entire parameter vector. The use of RKHS reminds me of Arthur Gretton’s approach to ABC, although I see no mention made of that work in the current paper.

In the RKHS pseudo-linear formulation, the prediction of a parameter value given a sample attached to this value looks like a ridge estimator in classical linear estimation. (I thus wonder why one would stop at the ridge stage instead of getting the full Bayes treatment!) Things get a bit more involved in the case of parameters (and observations) of interest, as the modelling requires two RKHS, because of the conditioning on the nuisance observations. Or rather three RKHS. Since those involve a maximum mean discrepancy between probability distributions, which defines in turn a sort of intrinsic norm, I also wonder about a Wasserstein version of this approach.
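To fix ideas, here is a toy sketch of just the kernel ridge regression step (mine, in Python; it regresses parameters on raw simulated data with a Gaussian kernel, standing in for the paper's RKHS distribution embeddings): the fitted regressor, evaluated at the pseudo-observed data, serves as the learned summary statistic.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf_gram(A, B, bandwidth=1.0):
    """Gaussian RBF kernel matrix between the rows of A and of B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * bandwidth ** 2))

# Training pairs (theta_i, x_i), with x_i simulated from the model at theta_i.
n, d = 200, 5
theta = rng.uniform(-2, 2, size=(n, 1))
X = theta + 0.1 * rng.normal(size=(n, d))     # stand-in simulator

# Kernel ridge regression: alpha = (K + lambda I)^{-1} theta, so the
# prediction at x* is k(x*, X) alpha -- a ridge estimator, as in classical
# linear regression, but computed in the RKHS feature space.
lam = 1e-3
K = rbf_gram(X, X)
alpha = np.linalg.solve(K + lam * np.eye(n), theta)

x_star = np.full((1, d), 0.7)                 # pseudo-observed dataset
summary = rbf_gram(x_star, X) @ alpha         # learned summary statistic
print(summary)                                # close to the theta behind x*
```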

What I find hard to understand in the paper is how a large-dimension, large-size sample can be managed by such methods with no visible loss of information and no explosion of the computing budget. The authors mention Fourier features, which do not ring a bell for me, and I wonder how this operates in a general setting, i.e., outside the iid case. The examples do not go into enough detail for me to understand how this massive dimension reduction operates (and they remain at a moderate level in terms of numbers of parameters). I was hoping Jovana Mitrovic could present her work here at the 17w5025 workshop but she sadly could not make it to Banff for lack of funding!


Filed under: Mountains, pictures, Statistics, Travel, University life Tagged: 17w5025, ABC, Approximate Bayesian computation, Banff, dimension reduction, Fourier transform, ICML, reproducing kernel Hilbert space, ridge regression, RKHS, summary statistics, Wasserstein distance

by xi'an at February 21, 2017 11:17 PM

Emily Lakdawalla - The Planetary Society Blog

NASA's audacious Europa missions are getting closer to reality
Today, NASA announced progress on a spacecraft that would assess whether Jupiter's Moon Europa is habitable, and earlier this month, an agency-sponsored science team released a report on a separate lander mission that would directly search for signs of life.

February 21, 2017 09:43 PM

Emily Lakdawalla - The Planetary Society Blog

Did Voyager 1 capture an image of Enceladus' plumes erupting?
Amateur image processor Ted Stryk revisited Voyager 1 data of Enceladus and came across a surprise.

February 21, 2017 07:38 PM

Peter Coles - In the Dark

A Night Out with Nigel Owens

I’ve had a busy morning teaching and a busy afternoon meeting some interesting people from IBM and elsewhere in connection with Data Innovation Institute business, so just time to mention that I’m looking forward tonight to an event at Cardiff Metropolitan University (whose campus is not far from my house) featuring renowned rugby referee Nigel Owens who, in case you hadn’t realized, is gay. The event is part of the celebrations in Cardiff of LGBT History Month.

I’ll update later with reflections on the evening, but in the meantime here’s some examples of him in action on the rugby field!

Update: it was a thoroughly absorbing evening. Nigel Owens spoke extremely engagingly (and without notes) about his upbringing in a small village in rural Wales, his mental health struggles as he tried to come to terms with his sexuality, a (nearly successful) suicide attempt when he was in his twenties, and how his decision to come out publicly revitalised his career as an international referee.

When he takes to the field on Saturday to officiate at the Six Nations match between Ireland and France, it will be his 75th international match as a referee, more than any other referee ever.



by telescoper at February 21, 2017 04:35 PM

Symmetrybreaking - Fermilab/SLAC

Mobile Neutrino Lab makes its debut

The Mystery Machine for particles hits the road.


It’s not as flashy as Scooby Doo’s Mystery Machine, but scientists at Virginia Tech hope that their new vehicle will help solve mysteries about a ghost-like phenomenon: neutrinos.

The Mobile Neutrino Lab is a trailer built to contain and transport a 176-pound neutrino detector named MiniCHANDLER (Carbon Hydrogen AntiNeutrino Detector with a Lithium Enhanced Raghavan-optical-lattice). When it begins operations in mid-April, MiniCHANDLER will make history as the first mobile neutrino detector in the US.

“Our main purpose is just to see neutrinos and measure the signal to noise ratio,” says Jon Link, a member of the experiment and a professor of physics at Virginia Tech’s Center for Neutrino Physics. “We just want to prove the detector works.”

Neutrinos are fundamental particles with no electric charge, a property that makes them difficult to detect. These elusive particles have confounded scientists on several fronts for more than 60 years. MiniCHANDLER is specifically designed to detect neutrinos' antimatter counterparts, antineutrinos, produced in nuclear reactors, which are prolific sources of the tiny particles.

Fission at the core of a nuclear reactor splits uranium atoms, whose products themselves undergo a process that emits an electron and an electron antineutrino. Other, larger detectors such as Daya Bay have capitalized on this abundance to measure neutrino properties.

MiniCHANDLER will serve as a prototype for future mobile neutrino experiments up to 1 ton in size.

Link and his colleagues hope MiniCHANDLER and its future counterparts will find answers to questions about sterile neutrinos, an undiscovered, theoretical kind of neutrino and a candidate for dark matter. The detector could also have applications for national security by serving as a way to keep tabs on material inside of nuclear reactors.

MiniCHANDLER echoes a similar mobile detector concept from a few years ago. In 2014, a Japanese team published results from another mobile neutrino detector, but their data did not meet the threshold for statistical significance. Detector operations were halted after all reactors in Japan were shut down for safety inspections.

“We can monitor the status from outside of the reactor buildings thanks to [a] neutrino’s strong penetration power,” Shugo Oguri, a scientist who worked on the Japanese team, wrote in an email.

Link and his colleagues believe their design is an improvement, and the hope is that MiniCHANDLER will be able to better reject background events and successfully detect neutrinos.

Neutrinos, where are you?

To detect neutrinos, which are abundant but interact very rarely with matter, physicists typically use huge structures such as Super-Kamiokande, a neutrino detector in Japan that contains 50,000 tons of ultra-pure water. Experiments are also often placed far underground to block out signals from other particles that are prevalent on Earth’s surface.

With its small size and aboveground location, MiniCHANDLER subverts both of these norms.

The detector uses solid scintillator technology, which will allow it to record about 100 antineutrino interactions per day. This interaction rate is less than the rate at large detectors, but MiniCHANDLER makes up for this with its precise tracking of antineutrinos.

Small plastic cubes pinpoint where in MiniCHANDLER an antineutrino interacts by detecting light from the interaction. However, the same kind of light signal can also come from other passing particles like cosmic rays. To distinguish between the antineutrino and the riffraff, Link and his colleagues look for multiple signals to confirm the presence of an antineutrino.

Those signs come from a process called inverse beta decay. Inverse beta decay occurs when an antineutrino collides with a proton, producing light (the first event) and also kicking a neutron out of the nucleus of the atom. These emitted neutrons are slower than the light and are picked up as a secondary signal to confirm the antineutrino interaction.

“[MiniCHANDLER] is going to sit on the surface; it's not shielded well at all. So it's going to have a lot of background,” Link says. “Inverse beta decay gives you a way of rejecting the background by identifying the two-part event.”
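A cartoon of that two-part selection (mine; the pulse classification, time window and data layout are all invented for illustration, not taken from the actual MiniCHANDLER software):

```python
from dataclasses import dataclass

@dataclass
class Pulse:
    t_us: float   # time of the light pulse, in microseconds
    kind: str     # 'prompt' (positron-like) or 'neutron' (capture-like)

def ibd_candidates(pulses, window_us=(2.0, 200.0)):
    """Keep prompt pulses followed by a neutron-like pulse inside a
    delayed coincidence window; lone pulses (cosmic-ray riffraff)
    fail the two-part requirement and are rejected."""
    pairs = []
    for i, prompt in enumerate(pulses):
        if prompt.kind != 'prompt':
            continue
        for later in pulses[i + 1:]:
            dt = later.t_us - prompt.t_us
            if dt > window_us[1]:
                break
            if later.kind == 'neutron' and dt >= window_us[0]:
                pairs.append((prompt, later))
                break
    return pairs

events = [Pulse(10.0, 'prompt'), Pulse(60.0, 'neutron'),  # IBD-like pair
          Pulse(500.0, 'prompt')]                         # lone background
print(ibd_candidates(events))   # only the first pair survives
```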

Monitoring the reactors

Scientists could find use for a mobile neutrino detector beyond studying reactor neutrinos. They could also use the detector to measure properties of the nuclear reactor itself.

A mobile neutrino detector could be used to determine whether a reactor is in use, Oguri says. “Detection unambiguously means the reactors are in operation—nobody can cheat the status.”

The detector could also be used to determine whether material from a reactor has been repurposed to produce nuclear weapons. Plutonium, an element used in the process of making weapons-grade nuclear material, produces 60 percent fewer detectable neutrinos than uranium, the primary component in a reactor core.

“We could potentially tell whether or not the reactor core has the right amount of plutonium in it,” Link says.

Using a neutrino detector would be a non-invasive way to track the material; other methods of testing nuclear reactors can be time-consuming and disruptive to the reactor’s processes.
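A back-of-envelope version of the monitoring arithmetic, using only the 60-percent figure quoted above (the fission fractions below are made up for illustration):

```python
def relative_rate(pu_fraction, pu_yield_ratio=0.4):
    """Antineutrino rate relative to an all-uranium core, taking the
    article's figure that plutonium yields 60% fewer detectable
    neutrinos per fission (0.4 times the uranium yield)."""
    return (1 - pu_fraction) + pu_fraction * pu_yield_ratio

for f in (0.0, 0.1, 0.3):
    print(f"{f:.0%} Pu fissions -> {relative_rate(f):.2f} of the U-only rate")
```

A core whose plutonium share of fissions drifted from 10% to 30% would thus dim from 0.94 to 0.82 of the all-uranium rate, the kind of shift such a detector would look for.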

But for now, Link just wants MiniCHANDLER to achieve a simple—yet groundbreaking—goal: Get the mobile neutrino lab running.

by Daniel Garisto at February 21, 2017 02:00 PM

Christian P. Robert - xi'an's og

ABC’ory in Banff [17w5025]

The TransCanada Pipeline pavilion, with Cascade Mountain (?), Banff, March 20, 2012

The ABC workshop I co-organised has now started and, despite a few last-minute cancellations, we have gathered a great crowd of researchers on the validation and expansion of ABC methods. Or ABC’ory, to keep up with my naming of workshops. The videos of the talks should come up progressively on the BIRS webpage, when I did not forget to launch the recording. The program is quite open and, with this size of workshop, allows talks and discussions to last longer than planned: the first days contain several expository talks on ABC convergence, auxiliary or synthetic models, summary constructions, challenging applications, dynamic models, and model assessment, plus prepared discussions on those topics that hopefully involve several workshop participants. We had also set aside some time for snap-talks, to induce everyone to give a quick presentation of their on-going research and open problems. The first day was rather full but saw a lot of interactions and discussions during and around the talks, a mood I hope will last till Friday! Today, in place of Richard Everitt, who alas got sick just before the workshop, we are conducting a discussion on dimensional issues, drawing on parts of the following slides (mostly recycled from earlier talks, including the mini-course in Les Diablerets):


Filed under: Mountains, pictures, Statistics, Travel, University life Tagged: 17w5025, ABC, Approximate Bayesian computation, Banff, BIRS, Canada, convergence, Les Diablerets, Rocky Mountains, synthetic likelihood

by xi'an at February 21, 2017 01:18 PM

February 20, 2017

ZapperZ - Physics and Physicists

Will SMASH Be A Smash?
Here comes a new extension to the Standard Model!

A new theoretical paper in PRL extends the Standard Model of elementary particles to include new particles, mashing several different ideas and theories into a new standard model called SMASH - Standard Model Axion See-saw Higgs portal inflation (yeah, it's a mouthful).

SMASH adds six new particles to the seventeen fundamental particles of the standard model. The particles are three heavy right-handed neutrinos, a color triplet fermion, a particle called rho that both gives mass to the right-handed neutrinos and drives cosmic inflation together with the Higgs boson, and an axion, which is a promising dark matter candidate. With these six particles, SMASH does five things: produces the matter–antimatter imbalance in the Universe; creates the mysterious tiny masses of the known left-handed neutrinos; explains an unusual symmetry of the strong interaction that binds quarks in nuclei; accounts for the origin of dark matter; and explains inflation.

Of course, as with ANY theoretical idea, which often has a long gestation period, a lot of patient waiting and testing will have to be done to verify many of its predictions. But this seems to have created quite an excitement about revamping the Standard Model.

Zz.


by ZapperZ (noreply@blogger.com) at February 20, 2017 11:26 PM

Peter Coles - In the Dark

People are not “Bargaining Chips”

Today there has been a day of action under the banner of “One Day Without Us” to celebrate the contribution that migrants make to the United Kingdom and to counter the growing tide of racism and xenophobia associated with some elements of the recent campaign for this country to leave the European Union. Here’s a video produced by the campaign.

I wish to make it clear, as someone who was born in the United Kingdom, that I am appalled by the present government’s refusal to guarantee the rights of the millions of EU citizens who have made this country their home and enriched us all with their presence here. Migrants have had a positive effect across all sectors of the UK economy for a very long time, but I wish to highlight from my own experience the enormous contribution “migrants” – or as I prefer to call them “colleagues” – make to our Universities. Non-UK scientists form the backbone of the School of Physics & Astronomy here at Cardiff, just as they did in the School of Mathematical and Physical Sciences at the University of Sussex. Without them I don’t know how we’d carry on.

Now that the Article 50 Bill has begun its progress through the House of Lords, I hope that it can be amended to force the government to stop treating such valuable people in such a despicable way. In the meantime all I can do – and I know it’s only a gesture – is say that the government does not speak for me, or for any of my colleagues, and that I hope and believe that it will be made to abandon its repellent notion that people can be treated like bargaining chips.


by telescoper at February 20, 2017 06:29 PM

Peter Coles - In the Dark

The Mower, by Philip Larkin

The mower stalled, twice; kneeling, I found
A hedgehog jammed up against the blades,
Killed. It had been in the long grass.

I had seen it before, and even fed it, once.
Now I had mauled its unobtrusive world
Unmendably. Burial was no help:

Next morning I got up and it did not.
The first day after a death, the new absence
Is always the same; we should be careful

Of each other, we should be kind
While there is still time.

by Philip Larkin (1922-1985)



by telescoper at February 20, 2017 02:00 PM

Emily Lakdawalla - The Planetary Society Blog

Finding spacecraft impacts on the Moon
Over nearly 60 years of spacecraft exploration of the Moon, lots of spacecraft have crashed on the lunar surface—some accidental, some intentional. Phil Stooke hunts for their impact sites.

February 20, 2017 12:00 PM

February 19, 2017

Tommaso Dorigo - Scientificblogging

Anomaly! Now Available As E-Book
Today I would like to mention that my book "Anomaly! Collider Physics and the Quest for New Phenomena at Fermilab" is now available for purchase as an E-Book at its World Scientific site.

read more

by Tommaso Dorigo at February 19, 2017 10:46 AM

February 18, 2017

John Baez - Azimuth

Azimuth Backup Project (Part 4)

The Azimuth Climate Data Backup Project is going well! Our Kickstarter campaign ended on January 31st and the money has recently reached us. Our original goal was $5000. We got $20,427 of donations, and after Kickstarter took its cut we received $18,590.96.

Next time I’ll tell you what our project has actually been doing. This time I just want to give a huge “thank you!” to all 627 people who contributed money on Kickstarter!

I sent out thank you notes to everyone, updating them on our progress and asking if they wanted their names listed. The blanks in the following list represent people who either didn’t reply, didn’t want their names listed, or backed out and decided not to give money. I’ll list people in chronological order: first contributors first.

Only 12 people backed out; the vast majority of blanks on this list are people who haven’t replied to my email. I noticed some interesting but obvious patterns. For example, people who contributed later are less likely to have answered my email yet—I’ll update this list later. People who contributed more money were more likely to answer my email.

The magnitude of contributions ranged from $2000 to $1. A few people offered to help in other ways. The response was international—this was really heartwarming! People from the US were more likely than others to ask not to be listed.

But instead of continuing to list statistical patterns, let me just thank everyone who contributed.

thank-you-message2_edited-1

Daniel Estrada
Ahmed Amer
Saeed Masroor
Jodi Kaplan
John Wehrle
Bob Calder
Andrea Borgia
L Gardner

Uche Eke
Keith Warner
Dean Kalahan
James Benson
Dianne Hackborn

Walter Hahn
Thomas Savarino
Noah Friedman
Eric Willisson
Jeffrey Gilmore
John Bennett
Glenn McDavid

Brian Turner

Peter Bagaric

Martin Dahl Nielsen
Broc Stenman

Gabriel Scherer
Roice Nelson
Felipe Pait
Kenneth Hertz

Luis Bruno


Andrew Lottmann
Alex Morse

Mads Bach Villadsen
Noam Zeilberger

Buffy Lyon

Josh Wilcox

Danny Borg

Krishna Bhogaonker
Harald Tveit Alvestrand


Tarek A. Hijaz, MD
Jouni Pohjola
Chavdar Petkov
Markus Jöbstl
Bjørn Borud


Sarah G

William Straub

Frank Harper
Carsten Führmann
Rick Angel
Drew Armstrong

Jesimpson

Valeria de Paiva
Ron Prater
David Tanzer

Rafael Laguna
Miguel Esteves dos Santos 
Sophie Dennison-Gibby




Randy Drexler
Peter Haggstrom


Jerzy Michał Pawlak
Santini Basra
Jenny Meyer


John Iskra

Bruce Jones
Māris Ozols
Everett Rubel



Mike D
Manik Uppal
Todd Trimble

Federer Fanatic

Forrest Samuel, Harmos Consulting








Annie Wynn
Norman and Marcia Dresner



Daniel Mattingly
James W. Crosby








Jennifer Booth
Greg Randolph





Dave and Karen Deeter

Sarah Truebe










Jeffrey Salfen
Birian Abelson

Logan McDonald

Brian Truebe
Jon Leland






Sarah Lim







James Turnbull




John Huerta
Katie Mandel Bruce
Bethany Summer






Anna Gladstone



Naom Hart
Aaron Riley

Giampiero Campa

Julie A. Sylvia


Pace Willisson









Bangskij










Peter Herschberg

Alaistair Farrugia


Conor Hennessy




Stephanie Mohr




Torinthiel


Lincoln Muri 
Anet Ferwerda 


Hanna





Michelle Lee Guiney

Ben Doherty
Trace Hagemann







Ryan Mannion


Penni and Terry O'Hearn



Brian Bassham
Caitlin Murphy
John Verran






Susan


Alexander Hawson
Fabrizio Mafessoni
Anita Phagan
Nicolas Acuña
Niklas Brunberg

Adam Luptak
V. Lazaro Zamora






Branford Werner
Niklas Starck Westerberg
Luca Zenti and Marta Veneziano 


Ilja Preuß
Christopher Flint

George Read 
Courtney Leigh

Katharina Spoerri


Daniel Risse



Hanna
Charles-Etienne Jamme
rhackman41



Jeff Leggett

RKBookman


Aaron Paul
Mike Metzler


Patrick Leiser

Melinda

Ryan Vaughn
Kent Crispin

Michael Teague

Ben



Fabian Bach
Steven Canning


Betsy McCall

John Rees

Mary Peters

Shane Claridge
Thomas Negovan
Tom Grace
Justin Jones


Jason Mitchell




Josh Weber
Rebecca Lynne Hanginger
Kirby


Dawn Conniff


Michael T. Astolfi



Kristeva

Erik
Keith Uber

Elaine Mazerolle
Matthieu Walraet

Linda Penfold




Lujia Liu



Keith



Samar Tareem


Henrik Almén
Michael Deakin 


Erin Bassett
James Crook



Junior Eluhu
Dan Laufer
Carl
Robert Solovay






Silica Magazine







Leonard Saers
Alfredo Arroyo García



Larry Yu













John Behemonth


Eric Humphrey








Øystein Risan Borgersen
David Anderson Bell III











Ole-Morten Duesend







Adam North and Gabrielle Falquero

Robert Biegler 


Qu Wenhao






Steffen Dittmar




Shanna Germain






Adam Blinkinsop







John WS Marvin (Dread Unicorn Games)


Bill Carter
Darth Chronis 



Lawrence Stewart

Gareth Hodges

Colin Backhurst
Christopher Metzger

Rachel Gumper


Mariah Thompson

Falk Alexander Glade
Johnathan Salter




Maggie Unkefer
Shawna Maryanovich






Wilhelm Fitzpatrick
Dylan “ExoByte” Mayo
Lynda Lee




Scott Carpenter



Charles D, Payet
Vince Rostkowski


Tim Brown
Raven Daegmorgan
Zak Brueckner


Christian Page

Adi Shavit


Steven Greenberg
Chuck Lunney



Adriel Bustamente

Natasha Anicich



Bram De Bie
Edward L






Gray Detrick
Robert


Sarah Russell

Sam Leavin

Abilash Pulicken

Isabel Olondriz
James Pierce
James Morrison


April Daniels



José Tremblay Champagne


Chris Edmonds

Hans & Maria Cummings
Bart Gasiewiski


Andy Chamard



Andrew Jackson

Christopher Wright



ichimonji10


Alan Stern
Alison W


Dag Henrik Bråtane





Martin Nilsson


William Schrade


by John Baez at February 18, 2017 07:27 PM

Tommaso Dorigo - Scientificblogging

Two Physics Blogs You Should Not Miss
I would like to use this space to advertise a couple of blogs you might be interested to know about. Many of you who erratically read this blog may well have already bumped into those sites, but I figured that, as the readership of a site varies continuously, there is always the need to do some periodic evangelization.

read more

by Tommaso Dorigo at February 18, 2017 12:04 PM

The n-Category Cafe

Distributive Laws

Guest post by Liang Ze Wong

The Kan Extension Seminar II continues, and this week we discuss Jon Beck’s “Distributive Laws”, which was published in 1969 in the proceedings of the Seminar on Triples and Categorical Homology Theory, LNM vol. 80. In the previous Kan seminar post, Evangelia described the relationship between Lawvere theories and finitary monads, along with two ways of combining them (the sum and tensor) that are very natural for Lawvere theories but less so for monads. Distributive laws give us a way of composing monads to get another monad, and are more natural from the monad point of view.

Beck’s paper starts by defining and characterizing distributive laws. He then describes the category of algebras of the composite monad. Just as monads can be factored into adjunctions, he next shows how distributive laws between monads can be “factored” into a “distributive square” of adjunctions. Finally, he ends off with a series of examples.

Before we dive into the paper, I would like to thank Emily Riehl, Alexander Campbell and Brendan Fong for allowing me to be a part of this seminar, and the other participants for their wonderful virtual company. I would also like to thank my advisor James Zhang and his group for their insightful and encouraging comments as I was preparing for this seminar.

First, some differences between this post and Beck’s paper:

  • I’ll use the standard, modern convention for composition: the composite $\mathbf{X} \overset{F}{\to} \mathbf{Y} \overset{G}{\to} \mathbf{Z}$ will be denoted $GF$. This would be written $FG$ in Beck’s paper.

  • I’ll use the terms “monad” and “monadic” instead of “triple” and “tripleable”.

  • I’ll rely quite a bit on string diagrams instead of commutative diagrams. These are to be read from right to left and top to bottom. You can learn about string diagrams through these videos or this paper (warning: they read string diagrams in different directions than this post!).

  • All constructions involving the category of $S$-algebras, $\mathbf{X}^S$, will be done in an “object-free” manner involving only the universal property of $\mathbf{X}^S$.

The last two points have the advantage of making the resulting theory applicable to $2$-categories or bicategories other than $\mathbf{Cat}$, by replacing categories/ functors/ natural transformations with 0/1/2-cells.

Since string diagrams play a key role in this post, here’s a short example illustrating their use. Suppose we have functors $F: \mathbf{X} \to \mathbf{Y}$ and $U: \mathbf{Y} \to \mathbf{X}$ such that $F \dashv U$. Let $\eta: 1_{\mathbf{X}} \Rightarrow UF$ be the unit and $\varepsilon: FU \Rightarrow 1_{\mathbf{Y}}$ the counit of the adjunction. Then the composite $F \overset{F \eta}{\Rightarrow} FUF \overset{\varepsilon F}{\Rightarrow} F$ can be drawn thus:

Sample string diagram

Most diagrams in this post will not be as meticulously labelled as the above. Unlabelled white regions will always stand for a fixed category $\mathbf{X}$. If $F \dashv U$, I’ll use the same colored string to denote them both, since they can be distinguished from their context: above, $F$ goes from a white to red region, whereas $U$ goes from red to white (remember to read from right to left!). The composite monad $UF$ (not shown above) would also be a string of the same color, going from a white region to a white region.

Motivating examples

Example 1: Let $S$ be the free monoid monad and $T$ be the free abelian group monad over $\mathbf{Set}$. Then the elementary school fact that multiplication distributes over addition means we have a function $STX \to TSX$ for $X$ a set, sending $(a+b)(c+d)$, say, to $ac+ad+bc+bd$. Further, the composition of $T$ with $S$ is the free ring monad, $TS$.
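In formulas (my transcription; the post conveys this through the example above rather than a general formula), the component of the distributive law sends a product of sums to the sum over all ways of choosing one term from each factor:

$$\ell_X\Big(\prod_{i=1}^{n}\,\sum_{j=1}^{m_i} a_{i j}\Big) \;=\; \sum_{j_1=1}^{m_1}\cdots\sum_{j_n=1}^{m_n}\;\prod_{i=1}^{n} a_{i j_i}.$$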

Example 2: Let $A$ and $B$ be monoids in a braided monoidal category $(\mathcal{V}, \otimes, 1)$. Then $A \otimes B$ is also a monoid, with multiplication:

$$A \otimes B \otimes A \otimes B \xrightarrow{A \otimes tw_{BA} \otimes B} A \otimes A \otimes B \otimes B \xrightarrow{\mu_A \otimes \mu_B} A \otimes B,$$

where $tw_{BA}: B \otimes A \to A \otimes B$ is provided by the braiding in $\mathcal{V}$.

In example 1, there is also a monoidal category in the background: the category $\left(\mathbf{End}(\mathbf{Set}), \circ, \text{Id}\right)$ of endofunctors on $\mathbf{Set}$. But this category is not braided – which is why we need distributive laws!

Distributive laws, composite and lifted monads

Let $(S, \eta^S, \mu^S)$ and $(T, \eta^T, \mu^T)$ be monads on a category $\mathbf{X}$. I’ll use Scarlet and Teal strings to denote $S$ and $T$, resp., and white regions will stand for $\mathbf{X}$.

A distributive law of $S$ over $T$ is a natural transformation $\ell: ST \Rightarrow TS$, denoted

Definition of distributive law

satisfying the following equalities:

Axioms for a distributive law
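Since the diagrams may not survive outside the original page, here is one standard equational transcription of the four axioms (my rendering, with composites read right to left): compatibility with the two units,

$$\ell \cdot \eta^S T = T \eta^S, \qquad \ell \cdot S \eta^T = \eta^T S,$$

and with the two multiplications,

$$\ell \cdot \mu^S T = T \mu^S \cdot \ell S \cdot S \ell, \qquad \ell \cdot S \mu^T = \mu^T S \cdot T \ell \cdot \ell T.$$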

A distributive law looks somewhat like a braiding in a braided monoidal category. In fact, it is a local pre-braiding: “local” in the sense of being defined only for $S$ over $T$, and “pre” because it is not necessarily invertible.

As the above examples suggest, a distributive law allows us to define a multiplication $m: TSTS \Rightarrow TS$:

Multiplication for the composite monad
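In symbols (again my transcription of the picture), $m$ is the composite $TSTS \overset{T \ell S}{\Rightarrow} TTSS \overset{\mu^T \mu^S}{\Rightarrow} TS$.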

It is easy to check visually that this makes $TS$ a monad, with unit $\eta^T \eta^S$. For instance, the proof that $m$ is associative looks like this:

Associativity for the composite monad

Not only is $TS$ a monad, we also have monad maps $T \eta^S: T \Rightarrow TS$ and $\eta^T S: S \Rightarrow TS$:

Monad maps to the composite monad

Asserting that $T \eta^S$ is a monad morphism is the same as asserting these two equalities:

Check that we have a monad map

Similar diagrams hold for $\eta^T S$. Finally, the multiplication $m$ also satisfies a middle unitary law:

Middle unitary law

To get back the distributive law, we can simply plug the appropriate units at both ends of $m$:

Recovering the distributive law

This last procedure (plugging units at the ends) can be applied to any $m': TSTS \Rightarrow TS$. It turns out that if $m'$ happens to satisfy all the previous properties as well, then we also get a distributive law. Further, the (distributive law $\to$ multiplication) and (multiplication $\to$ distributive law) constructions are mutually inverse:

Theorem   The following are equivalent: (1) Distributive laws $\ell: ST \Rightarrow TS$; (2) multiplications $m: TSTS \Rightarrow TS$ such that $(TS, \eta^T \eta^S, m)$ is a monad, $T\eta^S$ and $\eta^T S$ are monad maps, and the middle unitary law holds.

In addition to making $TS$ a monad, distributive laws also let us lift $T$ to the category of $S$-algebras, $\mathbf{X}^S$. Before defining what we mean by “lift”, let’s recall the universal property of $\mathbf{X}^S$: Let $\mathbf{Y}$ be another category; then there is an isomorphism of categories between $\mathbf{Funct}(\mathbf{Y}, \mathbf{X}^S)$ – the category of functors $\tilde{G}: \mathbf{Y} \to \mathbf{X}^S$ and natural transformations between them – and $S$-$\mathbf{Alg}(\mathbf{Y})$ – the category of functors $G: \mathbf{Y} \to \mathbf{X}$ equipped with an $S$-action $\sigma: SG \Rightarrow G$, and natural transformations that commute with the $S$-action.

Universal property of S-algebras

Given $\tilde{G}: \mathbf{Y} \to \mathbf{X}^S$, we get a functor $\mathbf{Y} \to \mathbf{X}$ by composing with $U^S$. This composite $U^S \tilde{G}$ has an $S$-action given by the canonical action on $U^S$. The universal property says that every such functor $G: \mathbf{Y} \to \mathbf{X}$ with an $S$-action is of the form $U^S \tilde{G}$. Similar statements hold for natural transformations. We will call $\tilde{G}$ and $\tilde{\phi}$ lifts of $G$ and $\phi$, resp.

A monad lift of $T$ to $\mathbf{X}^S$ is a monad $(\tilde{T}, \tilde{\eta}^T, \tilde{\mu}^T)$ on $\mathbf{X}^S$ such that

$$U^S \tilde{T} = T U^S, \qquad U^S \tilde{\eta}^T = \eta^T U^S, \qquad U^S \tilde{\mu}^T = \mu^T U^S.$$

We may express $U^S \tilde{T} = T U^S$ via the following equivalent commutative diagrams:

Commutative diagrams for lifts

The diagram on the right makes it clear that $\tilde{T}$ being a monad lift of $T$ is equivalent to $\tilde{T}, \tilde{\eta}^T, \tilde{\mu}^T$ being lifts of $TU^S, \eta^T U^S, \mu^T U^S$, resp. Thus, to get a monad lift of $T$, it suffices to produce an $S$-action on $TU^S$ and check that it is compatible with $\eta^T U^S$ and $\mu^T U^S$. We may simply combine the distributive law with the canonical $S$-action on $U^S$ to obtain the desired action on $TU^S$:

S-action on TU^S
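In symbols (my transcription), writing $\sigma^S: S U^S \Rightarrow U^S$ for the canonical action, the action on $T U^S$ is the composite $S T U^S \overset{\ell U^S}{\Rightarrow} T S U^S \overset{T \sigma^S}{\Rightarrow} T U^S$.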

(Recall that the unlabelled white region is $\mathbf{X}$. In subsequent diagrams, we will leave the red region unlabelled as well, and this will always be $\mathbf{X}^S$. Similarly, teal regions will denote $\mathbf{X}^T$.)

Conversely, suppose we have a monad lift $\tilde{T}$ of $T$. Then the equality $U^S \tilde{T} = T U^S$ can be expressed by saying that we have an invertible natural transformation $\chi: U^S \tilde{T} \Rightarrow TU^S$. Using $\chi$ and the unit and counit of the adjunction $F^S \dashv U^S$ that gives rise to $S$, we obtain a distributive law of $S$ over $T$:

Getting a distributive law from a lift
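Unwinding the strings (my transcription, treating the identification $\chi$ as an equality), this distributive law is the composite

$$ST = U^S F^S T \overset{U^S F^S T \eta^S}{\Rightarrow} U^S F^S T U^S F^S = U^S F^S U^S \tilde{T} F^S \overset{U^S \varepsilon^S \tilde{T} F^S}{\Rightarrow} U^S \tilde{T} F^S = T U^S F^S = TS,$$

where $\varepsilon^S$ is the counit of $F^S \dashv U^S$.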

The key steps in the proof that these constructions are mutually inverse are contained in the following two equalities:

Constructions are mutually inverse

The first shows that the resulting distributive law in the (distributive law $\to$ monad lift $\to$ distributive law) construction is the same as the original distributive law we started with. The second shows that in the (monad lift $\tilde{T}$ $\to$ distributive law $\to$ another lift $\tilde{T}'$) construction, the $S$-action on $U^S \tilde{T}'$ (LHS of the equation) is the same as the original $S$-action on $U^S \tilde{T}$ (RHS), hence $\tilde{T} = \tilde{T}'$ (by virtue of being lifts, $\tilde{T}$ and $\tilde{T}'$ can only differ in their induced $S$-actions on $U^S \tilde{T} = U^S \tilde{T}' = TU^S$). We thus have another characterization of distributive laws:

Theorem   The following are equivalent: (1) Distributive laws $\ell: ST \Rightarrow TS$; (3) monad lifts of $T$ to $\mathbf{X}^S$.

In fact, the converse construction did not rely on the universal property of $\mathbf{X}^S$, and hence applies to any adjunction giving rise to $S$ (with a suitable definition of a monad lift of $T$ in this situation). In particular, it applies to the Kleisli adjunction $F_S \dashv U_S$. Since the Kleisli category $\mathbf{X}_S$ is equivalent to the subcategory of free $S$-algebras (in the classical sense) in $\mathbf{X}^S$, this means that to get a distributive law of $S$ over $T$, it suffices to lift $T$ to a monad over just the free $S$-algebras! (Thanks to Jonathan Beardsley for pointing this out!) The resulting distributive law may be used to get another lift of $T$, but we should not expect this to be the same as the original lift unless the original lift was “monadic” to begin with, in the sense of being a lift to $\mathbf{X}^S$.

There are two further characterizations of distributive laws that are not mentioned in Beck’s paper, but whose equivalences follow easily. Eugenia Cheng in Distributive laws for Lawvere theories states that distributive laws of $S$ over $T$ are also equivalent to extensions of $S$ to a monad $\tilde{S}$ on the Kleisli category $\mathbf{X}_T$. This follows by duality from the above theorem, since $\mathbf{X}_T = (\mathbf{X}^{op})^T$. Finally, Ross Street’s The formal theory of monads (which was also covered in a previous Kan Extension Seminar post) says that distributive laws in a $2$-category $\mathbf{K}$ are precisely monads in $\mathbf{Mnd}(\mathbf{K})$. It is a fun and easy exercise to draw string diagrams for objects of $\mathbf{Mnd}(\mathbf{Mnd}(\mathbf{K}))$; it becomes visually obvious that these are the same as distributive laws.

Algebras for the composite monad

After characterizing distributive laws, Beck characterizes the algebras for the composite monad <semantics>TS<annotation encoding="application/x-tex">TS</annotation></semantics>.

Just as a morphism of rings \(R \to R'\) induces a “restriction of scalars” functor \(R'\text{-}\mathbf{Mod} \to R\text{-}\mathbf{Mod}\), the monad maps \(T \eta^S: T \Rightarrow TS\) and \(\eta^T S: S \Rightarrow TS\) induce functors \(\hat{U}^{TS}: \mathbf{X}^{TS} \to \mathbf{X}^T\) and \(\tilde{U}^{TS}: \mathbf{X}^{TS} \to \mathbf{X}^S\).

Equivalently, we have \(S\)- and \(T\)-actions on \(U^{TS}\), which we call \(\sigma: S U^{TS} \Rightarrow U^{TS}\) and \(\tau: T U^{TS} \Rightarrow U^{TS}\). Let \(\varepsilon: TS\, U^{TS} \Rightarrow U^{TS}\) be the canonical \(TS\)-action on \(U^{TS}\). The middle unitary law then implies that \(\varepsilon = T\sigma \cdot \tau\):

Actions of T, S and TS

Further, \(\sigma\) distributes over \(\tau\) in the following sense:

S-action distributes over T-action

The properties of these actions allow us to characterize \(TS\)-algebras:

Theorem   The category of algebras for \(TS\) coincides with that of \(\tilde{T}\):

\[ \mathbf{X}^{TS} \cong (\mathbf{X}^S)^{\tilde{T}} \]

To prove this, Beck constructs \(\Phi: (\mathbf{X}^{S})^{\tilde{T}} \to \mathbf{X}^{TS}\) and its inverse \(\Phi^{-1}: \mathbf{X}^{TS} \to (\mathbf{X}^S)^{\tilde{T}}\). These constructions are best summarized in the following diagram of lifts:

Diagram of required lifts

On the left half of the diagram, we see that to get \(\Phi^{-1}\), we must first produce a functor \(\tilde{U}^{TS}: \mathbf{X}^{TS} \to \mathbf{X}^S\) with a \(\tilde{T}\)-action. We already have \(\tilde{U}^{TS}\) as a lift of \(U^{TS}\), given by the \(S\)-action \(\sigma\). We also have the \(T\)-action \(\tau\) on \(U^{TS}\), which \(\sigma\) distributes over. This is precisely what is required to get a lift of \(\tau\) to a \(\tilde{T}\)-action \(\tilde{\tau}\) on \(\tilde{U}^{TS}\), which gives us \(\Phi^{-1}\).

On the right half of the diagram, to get \(\Phi\) we need to produce a functor \((\mathbf{X}^{S})^{\tilde{T}} \to \mathbf{X}\) with a \(TS\)-action. The obvious functor is \(U^S U^{\tilde{T}}\), and we get an action by using the canonical actions of \(S\) on \(U^S\) and \(\tilde{T}\) on \(U^{\tilde{T}}\):

Lifts for Phi

All that’s left to prove the theorem is to check that \(\Phi\) and \(\Phi^{-1}\) are inverses. In a similar fashion, we can prove the dual statement (again found in Cheng’s paper but not Beck’s):

Theorem   The Kleisli category of \(TS\) coincides with that of \(\tilde{S}\):

\[ \mathbf{X}_{TS} \cong (\mathbf{X}_T)_{\tilde{S}} \]

Distributivity for adjoints

From now on, we identify \(\mathbf{X}^{TS}\) with \((\mathbf{X}^S)^{\tilde{T}}\). Under this identification, it turns out that \(\tilde{U}^{TS} \cong U^{\tilde{T}}\), and we obtain what Beck calls a distributive adjoint situation comprising 3 pairs of adjunctions:

Distributive adjoint situation

For this to qualify as a distributive adjoint situation, we also require that both composites from \(\mathbf{X}^{TS}\) to \(\mathbf{X}\) are naturally isomorphic, and both composites from \(\mathbf{X}^S\) to \(\mathbf{X}^T\) are naturally isomorphic. This can be expressed in the following diagram by requiring both blue circles to be mutually inverse, and both red circles to be mutually inverse:

Distributive adjoints in string

(Recall that colored regions are categories of algebras for the corresponding monads, and the cap and cup are the unit and counit of \(F^S \dashv U^S\).)

This diagram is very similar to the diagram for getting a distributive law out of a lift \(\tilde{T}\), and it is easy to believe that any such distributive adjoint situation (with 3 pairs of adjoints - not necessarily monadic - and the corresponding natural isomorphisms) leads to a distributive law.

Finally, suppose the “restriction of scalars” functor \(\hat{U}^{TS}\) has an adjoint. This adjoint behaves like an “extension of scalars” functor, and Beck fittingly calls it \((\,) \otimes_S F^T\) at the start of his paper. I’ll use \(\hat{F}^{TS}\) instead, to highlight its relationship with \(\hat{U}^{TS}\).

In such a situation, we get an adjoint square consisting of 4 pairs of adjunctions. By drawing these 4 adjoints in the following manner, it becomes clear which natural transformations we require in order to get a distributive law:

Distributive adjoints again

(Recall that \(S = U^S F^S\) and \(T = U^T F^T\), so this is a “thickened” version of what a distributive law looks like.)

It turns out that given the natural transformation \(u\) between the composite right adjoints \(U^S U^{\tilde{T}}\) and \(U^T \hat{U}^{TS}\), we can get the natural transformation \(f\) as the mate of \(u\) between the corresponding composite left adjoints \(\hat{F}^{TS} F^T\) and \(F^{\tilde{T}} F^S\). Note that \(f\) is invertible if and only if \(u\) is. We may use \(u\) or \(f\), along with the units and counits of the relevant adjunctions, to construct \(e\):

Getting e from u or f

But \(e\) is in the wrong direction, so we have to further require that \(e\) is invertible, to get \(e^{-1}\). We get \(e'\) from \(u^{-1}\) or \(f^{-1}\) in a similar manner. Since \(e'\) will turn out to already be in the right direction, we will not require it to be invertible. Finally, given any 4 pairs of adjoints that look like the above, along with natural transformations \(u, f, e, e'\) satisfying the above properties, we will get a distributive law!
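(For reference, the mate construction invoked here is the standard one, not spelled out in the post: given adjunctions \(L \dashv R\) with unit \(\eta\) and counit \(\varepsilon\), and \(L' \dashv R'\) with unit \(\eta'\) and counit \(\varepsilon'\), the mate of a natural transformation \(\alpha: R \Rightarrow R'\) is

\[ \bar{\alpha} = \varepsilon' L \circ L' \alpha L \circ L' \eta \,:\, L' \Rightarrow L, \]

and \(\alpha \mapsto \bar{\alpha}\) is a bijection. Assuming \(u\) points from \(U^S U^{\tilde{T}}\) to \(U^T \hat{U}^{TS}\), taking \(L = F^{\tilde{T}} F^S\) and \(L' = \hat{F}^{TS} F^T\) in this formula recovers \(f\) from \(u\).)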

What next?

Beck ends his paper with some examples, two of which I’ve already mentioned at the start of this post. During our discussion, there were some remarks on these and other examples, which I hope will be posted in the comments below. Instead of repeating those examples, I’d like to end by pointing to some related works:

  • Since we’ve been talking about Lawvere theories, we can ask what distributive laws look like for Lawvere theories. Cheng’s Distributive laws for Lawvere theories, which I’ve already referred to a few times, does exactly that. But first, she comes up with 4 settings in which to define Lawvere theories! She also has a very readable introduction to the correspondence between Lawvere theories and finitary monads.

  • As Beck mentions in his paper, we can similarly define distributive laws between comonads, as well as mixed distributive laws between a monad and a comonad. Just as we can define bimonoids/bialgebras, and thus Hopf monoids/algebras, in a braided monoidal category, such distributive laws allow us to define bimonads and Hopf monads. There are in fact two distinct notions of Hopf monads: the first is described in this paper by Alain Bruguières and Alexis Virelizier (with a follow-up paper coauthored with Steve Lack, and a diagrammatic approach with amazing surface diagrams by Simon Willerton); the second is this paper by Bachuki Mesablishvili and Robert Wisbauer. The difference between these two approaches is described in the Mesablishvili-Wisbauer paper, but both involve mixed distributive laws. Gabriella Böhm also recently gave a talk entitled The Unifying Notion of Hopf Monad, in which she shows how the many generalizations of Hopf algebras are just instances of Hopf monads (in the first sense) in an appropriate monoidal bicategory!

  • We also saw that distributive laws are monads in a category of monads. Instead of thinking of distributive laws as merely a means of composing monads, we can study distributive laws as objects in their own right, just as monoids in a category of monoids (a.k.a. abelian monoids) are studied in their own right! The story for monoids terminates at this step: monoids in abelian monoids are just abelian monoids. But for distributive laws, we can keep going! See Cheng’s paper on Iterated Distributive Laws, where she shows the connection between iterated distributive laws and \(n\)-categories. In addition to requiring distributive laws between each pair of monads involved, it is also necessary to have a Yang-Baxter equation between every three monads:

Yang-Baxter condition

  • Finally, there seems to be a strange connection between distributive laws and factorization systems (e.g. here, here and even in “Distributive Laws for Lawvere theories” mentioned above). I can’t say more because I don’t know much about factorization systems, but hopefully someone else can say something illuminating about this!

by riehl (eriehl@math.jhu.edu) at February 18, 2017 07:30 AM

February 17, 2017

Symmetrybreaking - Fermilab/SLAC

#AskSymmetry Twitter chat with Anne Schukraft

See Fermilab physicist Anne Schukraft's answers to readers’ questions about neutrinos.

Scientist Anne Schukraft surrounded by Harry Potter-inspired imagery.
[View the story “#AskSymmetry Twitter Chat with Anne Schukraft 2/17/17” on Storify: http://storify.com/Symmetry/asksymmetry-twitter-chat-with-anne-schukraft]

February 17, 2017 06:29 PM

Emily Lakdawalla - The Planetary Society Blog

Everything you need to know about tomorrow's historic SpaceX launch
A SpaceX Falcon 9 rocket blasts off from a former space shuttle launch pad tomorrow morning. Here's a rundown of everything you need to know about the historic event.

February 17, 2017 12:00 PM

February 16, 2017

Symmetrybreaking - Fermilab/SLAC

Wizardly neutrinos

Why can a neutrino pass through solid objects?

Scientist Anne Schukraft surrounded by Harry Potter-inspired imagery.

Physicist Anne Schukraft of Fermi National Accelerator Laboratory explains.

(Embedded video: 5SniR5U6YTU)

Have a burning question about particle physics? Let us know via email or Twitter (using the hashtag #AskSymmetry). We might answer you in a future video!

You can watch a playlist of the #AskSymmetry videos here. You can see Anne Schukraft's answers to readers' questions about neutrinos on Twitter here.

by Lauren Biron at February 16, 2017 10:42 PM

February 14, 2017

The n-Category Cafe

Functional Equations II: Shannon Entropy

In the second instalment of the functional equations course that I’m teaching, I introduced Shannon entropy. I also showed that up to a constant factor, it’s uniquely characterized by a functional equation that it satisfies: the chain rule.

Notes for the course so far are here. For a quick summary of today’s session, read on.

You can read the full story in the notes, but here I’ll state the main result as concisely as I can.

For \(n \geq 0\), let \(\Delta_n\) denote the set of probability distributions \(\mathbf{p} = (p_1, \ldots, p_n)\) on \(\{1, \ldots, n\}\). The Shannon entropy of \(\mathbf{p} \in \Delta_n\) is

\[ H(\mathbf{p}) = - \sum_{i \colon p_i > 0} p_i \log p_i. \]

Now, given

\[ \mathbf{w} \in \Delta_n, \,\, \mathbf{p}^1 \in \Delta_{k_1}, \ldots, \mathbf{p}^n \in \Delta_{k_n}, \]

we obtain a composite distribution

\[ \mathbf{w} \circ (\mathbf{p}^1, \ldots, \mathbf{p}^n) = (w_1 p^1_1, \ldots, w_1 p^1_{k_1}, \, \ldots, \, w_n p^n_1, \ldots, w_n p^n_{k_n}). \]

The chain rule for \(H\) states that

\[ H(\mathbf{w} \circ (\mathbf{p}^1, \ldots, \mathbf{p}^n)) = H(\mathbf{w}) + \sum_{i = 1}^n w_i H(\mathbf{p}^i). \]

So, \((H: \Delta_n \to \mathbb{R}^+)_{n \geq 1}\) is a sequence of continuous functions satisfying the chain rule. Clearly, the same is true of any nonnegative scalar multiple of \(H\).
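As a quick illustration (my own, not from the course notes), the chain rule is easy to check numerically; here is a minimal Haskell sketch:

    -- Shannon entropy in nats, skipping zero probabilities.
    entropy :: [Double] -> Double
    entropy p = negate (sum [ q * log q | q <- p, q > 0 ])

    -- The composite distribution w . (p1, ..., pn) defined above.
    composeDist :: [Double] -> [[Double]] -> [Double]
    composeDist w ps = concat (zipWith (\wi p -> map (wi *) p) w ps)

    -- Difference between the two sides of the chain rule.
    chainRuleGap :: [Double] -> [[Double]] -> Double
    chainRuleGap w ps =
      entropy (composeDist w ps)
        - (entropy w + sum (zipWith (*) w (map entropy ps)))

For instance, chainRuleGap [0.5, 0.5] [[0.2, 0.8], [1.0]] is zero up to floating-point rounding.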

Theorem (Faddeev, 1956)   The only sequences of continuous functions \((\Delta_n \to \mathbb{R}^+)_{n \geq 1}\) satisfying the chain rule are the scalar multiples of entropy.

One interesting aspect of the proof is where the difficulty lies. Let \(I: \Delta_n \to \mathbb{R}^+\) be continuous functions satisfying the chain rule; we have to show that \(I\) is proportional to \(H\). All the effort and ingenuity goes into showing that \(I\) is proportional to \(H\) when restricted to the uniform distributions. In other words, the hard part is to show that there exists a constant \(c\) such that

\[ I(1/n, \ldots, 1/n) = c H(1/n, \ldots, 1/n) \]

for all \(n \geq 1\). But once that’s done, showing that \(I(\mathbf{p}) = c H(\mathbf{p})\) is a pushover. The notes show you how!

by leinster (Tom.Leinster@ed.ac.uk) at February 14, 2017 11:44 PM

Symmetrybreaking - Fermilab/SLAC

LHCb observes rare decay

Standard Model predictions align with the LHCb experiment’s observation of an uncommon decay.

The Standard Model is holding strong after a new precision measurement of a rare subatomic process.

For the first time, the LHCb experiment at CERN has independently observed the decay of the Bs0 particle—a heavy composite particle consisting of a bottom antiquark and a strange quark—into two muons. The LHCb experiment co-discovered this rare process in 2015 after combining results with the CMS experiment.

Theorists predicted that this particular decay would occur only a few times out of a billion.

“Our measurement is slightly lower than predictions, but well within the range of experimental uncertainty and fully compatible with our models,” says Flavio Archilli, one of the co-leaders of this analysis and a postdoc at Nikhef National Institute for Subatomic Physics. “The theoretical predictions are very accurate, so now we want to improve our precision to see if our measurement is sitting right on top of the expected value or slightly outside, which could be an indication of new physics.”

The LHCb experiment examines the properties and decay patterns of particles to search for cracks in the Standard Model, our best description of the fundamental particles and forces. Any deviations from the Standard Model’s predictions could be evidence of new physics at play.

Supersymmetry, for example, is a popular theory that adds a host of new particles to the Standard Model and ameliorates many of its shortcomings—such as mathematical imbalances between how the different types of particles contribute to subatomic interactions.

“We love this decay because it is one of the most promising places to search for any new effects of supersymmetry,” Archilli says. “Scientists searched for this decay for more than 30 years and now we finally have the first single-experiment observation.”

This new measurement by the LHCb experiment combines data taken from Run 1 and Run 2 of the Large Hadron Collider and employs more refined analysis techniques, making it the most precise measurement of this process to date. In addition to measuring the rate of this rare decay, LHCb researchers also measured how long the Bs0 particle lives before it transforms into the two muons—another measurement that agrees with the Standard Model’s predictions.

“It's gratifying to have achieved these results,” says Universita di Pisa scientist Matteo Rama, one of the co-leaders of this analysis. “They reward the efforts made to improve the analysis techniques, to exploit our data even further. We look forward to updating the measurement with more data with the hope to observe, one day, significant deviations from the Standard Model predictions.”

Event display of a typical Bs0 decay into two muons

Event display of a typical Bs0 decay into two muons. The two muon tracks from the Bs0 decay are seen as a pair of green tracks traversing the whole detector.

LHCb collaboration

by Sarah Charley at February 14, 2017 07:39 PM

Symmetrybreaking - Fermilab/SLAC

Physics love poems

Advance your romance with science.

Header: Physics love poems

This Valentine’s Day, we challenged our readers to send us physics-inspired love poems. You answered the call: We received dozens of submissions—in four different languages! You can find some of our favorite entries below. 

But first, as a warm-up, enjoy a video of real scientists at Fermi National Accelerator Laboratory reciting physics-related Valentine’s Day haiku:

(Embedded video: lqoFbSyNDF8)

Or read the haiku for yourself:

Reader poems

Thanks to all of our readers who submitted poems! In no particular order, here are some of our favorites:


For now, I’m seeing other quarks, some charming and some strange
But when we meet, I know we will all physics rearrange
For you, stop squark, will soon reveal the standard model as deficient
To me, you are my superpartner; the only one sufficient.
Without you, I just spin one-half of what our world could be
But you and I will couple soon in perfect symmetry.
All fundamental forces, we are meant to unify
In brilliant theory only love itself could clarify
Now though I may seem hypercharged and strongly interactive,
I must show my true colors if I hope to be attractive.
Without you, I just don’t feel really quite just like a top
But I’m confident I will yet find love in the name of stop.

- Jared Sagoff


The gravity that
Pulls my soul to you dilates:
Your beauty slows time.

- Philip Michaels


A Valentine for Two Quarks

Some people wish for one true love,
like dear old Ma and Pa.
That lifestyle’s not for us; we like
our quark ménage à trois.

You see, some like a threesome,
and I love both of you.
No green quark would be seen without
a red quark and a blue.

The sea is full of other quarks,
but darlings, I don’t heed ‘em.
You must believe I don’t exploit
my asymptotic freedom.

And when you pull away from me,
I just can’t take the stress.
My attraction just grows stronger
(coefficient alpha-s).

With you, my life is colourless;
you bring stability.
Without you, I’m unstable,
so I need you, Q.C.D.

I love our quirky, quarky love.
My Valentines, let’s carry on
exchanging gluons wantonly,
and make a little baryon.

- Cheryl Patrick


Will it work this time?
The wavefunction collapses.
Single once again.

- Anonymous


Our hearts were once close; two nucleons held tight
By a force that was strong, and a love that burned bright.
But, that force became weaker as the days faded ‘way,
And with it, our bond began to decay.

I’ve realized that opposites don’t always attract
(Otherwise, the atom would be more compact),
And opposites we were, our differences great,
Continuing this way, we’d annihilate.

In truth, I’ve quite had it with your duality,
Your warm disposition; cold mentality.
We must be entangled - what else can explain
How, though we are distant, you still cause me pain?

We’ve exchanged mediators, but our half-lives were short,
All data suggests we should promptly abort.
Our collision is over, and signatures thereof
Have vanished, leaving us not a quantum of love.

- Peter Voznyuk


Love ignited light,
Eternal and everywhere:
A Cosmic Background

- Akshay Jogoo


Like energy dear
our love will last forever,
theoretically

- Lauren Brennan



by Kathryn Jepsen at February 14, 2017 04:38 PM

CERN Bulletin

Portrait - Barbora Bruant Gulejova

Barbora Bruant Gulejova, Fellow delegate
Married, 1 child, 36 years old, Fellow IR-ECO

Barbora Bruant Gulejova

I started working at CERN in July 2014 as a User in the domain of knowledge transfer (Head of Community Activities of the High Energy Physics Technology Transfer Network – HEPTech). Six months later I became a Fellow in the Education and Outreach group (today ECO), where I have been working for more than two years on different projects, including IPPOG (International Particle Physics Outreach Group), in the framework of my function as Scientific Secretary.

I have a diverse profile: a PhD in Thermonuclear Fusion, a Master’s in Management, and experience in scientific editing in international organizations. Working at CERN has been the best and most enriching experience in my career. The stimulating environment of the Organization opens new perspectives, allows one to develop new skills, and supports creativity. I respect CERN’s values and I am proud to work here. Moreover, if one has the possibility to contribute to the well-being of the Organization and of one’s colleagues, the work becomes even more rewarding. This is where the Staff Association plays a role. It is a statutory body whose purpose is to listen to CERN employees, including Fellows, and to defend them. The Staff Association is driven by the willingness to maintain excellent working conditions for all colleagues.

Before coming to CERN, I had already lived and worked in Switzerland for 11 years, and my family lives in France. Thus, I know the employment conditions and social security systems of these two Host States well. I had been investigating these conditions for Fellows at CERN well before I became a Fellow myself. Fellows are employed by CERN, but unlike staff members they have no unemployment insurance in the current system. You may think that this is not important for you today, but if a Fellow finds herself on maternity leave at the end of her contract, or if a Fellow encounters a long-term health issue during or at the end of his/her contract, there can be a situation of hardship without any income. I consider it worthwhile to try to find solutions for this, and that was the principal reason I joined the Staff Association.

In this sense, a lot has been done for the Fellows by the Staff Association together with the Diversity Office. As a Fellow delegate, I had the chance to concentrate my efforts to contribute to the proposals for the 5-yearly review. Even if all the proposals of the Staff Association were not retained, today we can be grateful for increased parental benefits, for example the extension of health care for the whole period of maternity leave, longer paternity leave, increased flexibility for new parents, access to teleworking, and recognition of registered partnerships. See: http://diversity.web.cern.ch/diversity-measures-5-yearly-review.

It is my pleasure and honour to represent the needs and the voice of more than 700 Fellows at CERN. However, together with my colleague Fellow Jiri, we are only a small team. There are 5 seats for Fellows in the Staff Council, and our voice could be much stronger if they were all filled. Moreover, Fellows’ contracts last up to 3 years, and without continuity there is no success. We hope to have more of you, Fellow friends, on board for the next mandate starting in 2018.

The Staff Association is a friendly and enriching place where you can meet more experienced staff colleagues from all CERN departments and learn a lot about the Organization you work for.

I encourage my colleague Fellows to become members of the Staff Association and to represent the Fellows in the future Staff Council by participating in the elections in autumn 2017.

February 14, 2017 09:02 AM

CERN Bulletin

Fellow's Apéro

Let's get together, meet each other, exchange experiences and ideas, and share useful information on CERN and the Staff Association. Join us for Fellow's Apéro, organised by the Staff Association on Tuesday 21 February at 16.30 in Restaurant 1. There will be drinks and snacks for everybody! We look forward to seeing you there! Please confirm your participation on Doodle http://doodle.com/poll/skvm7ucm2z78i6bt or alternatively on Facebook https://www.facebook.com/events/1862757017340069/.

Your delegates in the Staff Association,

Barbora & Jiri

February 14, 2017 09:02 AM

February 13, 2017

The n-Category Cafe

Functional Equations I: Cauchy's Equation

This semester, I’m teaching a seminar course on functional equations. Why? Among other reasons:

  1. Because I’m interested in measures of biological diversity. Dozens (or even hundreds?) of diversity measures have been proposed, but it would be a big step forward to have theorems of the form: “If you want your measure to have this property, this property, and this property, then it must be that measure. No other will do.”

  2. Because teaching a course on functional equations will force me to learn about functional equations.

  3. Because it touches on lots of mathematically interesting topics, such as entropy of various kinds and the theory of large deviations.

Today was a warm-up, focusing on Cauchy’s functional equation: which functions \(f: \mathbb{R} \to \mathbb{R}\) satisfy

\[ f(x + y) = f(x) + f(y) \,\,\,\, \forall x, y \in \mathbb{R}? \]

(I wrote about this equation before when I discovered that one of the main references is in Esperanto.) Later classes will look at entropy, means, norms, diversity measures, and a newish probabilistic method for solving functional equations.
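For completeness, here is the classical answer for continuous solutions of Cauchy’s equation, in the form of the standard three-step argument (my paraphrase, not a quotation from the notes): additivity gives \(f(nx) = n f(x)\) for integers \(n\), hence \(\mathbb{Q}\)-linearity, and continuity then extends this to all reals:

\[ f(n x) = n f(x), \qquad f\!\left(\tfrac{m}{n}\right) = \tfrac{m}{n}\, f(1), \qquad f(x) = \lim_{\mathbb{Q} \ni q \to x} f(q) = f(1)\, x. \]

So the continuous solutions are exactly the linear functions \(f(x) = c x\).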

Read on for today’s notes and an outline of the whole course.

I don’t want to commit to TeXing up notes every week, as any such commitment would suck joy out of something I’m really doing for intellectual fulfilment (also known as “fun”). However, I seem to have done it this week. Here they are. For those who came to the class, the parts in black ink are pretty much exactly what I wrote on the board.

Here’s the overall plan. We’ll take it at whatever pace feels natural, so the section numbers below don’t correspond to weeks. The later sections are pretty tentative — plans might change!

  1. Warm-up   Which functions \(f\) satisfy \(f(x + y) = f(x) + f(y)\)? Which functions of two variables can be separated as a product of functions of one variable?

  2. Shannon entropy   Basic ideas. Characterizations of entropy by Shannon, Faddeev, Rényi, etc. Relative entropy.

  3. Deformed entropies   Rényi and “Tsallis” entropies. Characterizations of them. Relative Rényi entropy.

  4. Probabilistic methods   Cramér’s large deviation theorem. Characterization of \(p\)-norms and power means.

  5. Diversity of a single community   Background and introduction. Properties of diversity measures. Value. Towards a uniqueness theorem.

  6. Diversity of a metacommunity   Background: diversity within and between subcommunities; beta-diversity in ecology. Link back to relative entropy. Properties.

by leinster (Tom.Leinster@ed.ac.uk) at February 13, 2017 11:35 PM

The n-Category Cafe

M-theory from the Superpoint

You may have been following the ‘Division algebra and supersymmetry’ story, the last instalment of which appeared a while ago under the title M-theory, Octonions and Tricategories. John (Baez) was telling us of some work by his former student John Huerta which relates these entities. The post ends with a declaration which does not suffer from comparison to Prospero’s in The Tempest

But this rough magic

I here abjure. And when I have required

Some heavenly music – which even now I do –

To work mine end upon their senses that

This airy charm is for, I’ll break my staff,

Bury it certain fathoms in the earth,

And deeper than did ever plummet sound

I’ll drown my book.

Well, maybe not quite so poetic:

And with the completion of this series, I can now relax and forget all about these ideas, confident that at this point, the minds of a younger generation will do much better things with them than I could.

Anyway, you may be interested to know that the younger generation has pressed on. John Huerta teamed up with Urs Schreiber to write M-theory from the Superpoint (updated versions here), which looks to grow out of a mere superpoint Lorentzian spacetimes, D-branes and M-branes by the simple device of successive invariant higher central extensions.

It’s like a magical Whitehead tower where you can’t see how they put the rabbit in.

by david (d.corfield@kent.ac.uk) at February 13, 2017 04:59 PM

Symmetrybreaking - Fermilab/SLAC

LZ dark matter detector on fast track

Construction has officially launched for the LZ next-generation dark matter experiment.

Scientists in a cleanroom assemble the prototype for the LZ detector’s core.

The race is on to build the most sensitive US-based experiment designed to directly detect dark matter particles. Department of Energy officials have formally approved a key construction milestone that will propel the project toward its April 2020 goal for completion.

The LUX-ZEPLIN experiment, which will be built nearly a mile underground at the Sanford Underground Research Facility in Lead, South Dakota, is considered one of the best bets yet to determine whether theorized dark matter particles known as WIMPs (weakly interacting massive particles) actually exist. 

The fast-moving schedule for LZ will help the US stay competitive with similar next-gen dark matter direct-detection experiments planned in Italy and China.

On February 9, the project passed a DOE review and approval stage known as Critical Decision 3, which accepts the final design and formally launches construction.

“We will try to go as fast as we can to have everything completed by April 2020,” says Murdock “Gil” Gilchriese, LZ project director and a physicist at Lawrence Berkeley National Laboratory, the lead lab for the project. “We got a very strong endorsement to go fast and to be first.” The LZ collaboration now has about 220 participating scientists and engineers who represent 38 institutions around the globe.

The nature of dark matter—which physicists describe as the invisible component or so-called “missing mass” in the universe—has eluded scientists since its existence was deduced through calculations by Swiss astronomer Fritz Zwicky in 1933.

The quest to find out what dark matter is made of, or to learn whether it can be explained by tweaking the known laws of physics in new ways, is considered one of the most pressing questions in particle physics.

Successive generations of experiments have evolved to provide extreme sensitivity in the search that will at least rule out some of the likely candidates and hiding spots for dark matter, or may lead to a discovery.

LZ will be at least 50 times more sensitive to finding signals from dark matter particles than its predecessor, the Large Underground Xenon experiment, which was removed from Sanford Lab last year to make way for LZ. The new experiment will use 10 metric tons of ultra-purified liquid xenon to tease out possible dark matter signals. 

“The science is highly compelling, so it’s being pursued by physicists all over the world,” says Carter Hall, the spokesperson for the LZ collaboration and an associate professor of physics at the University of Maryland. “It's a friendly and healthy competition, with a major discovery possibly at stake.”

A planned upgrade to the current XENON1T experiment at National Institute for Nuclear Physics’ Gran Sasso Laboratory in Italy, and China's plans to advance the work on PandaX-II, are also slated to be leading-edge underground experiments that will use liquid xenon as the medium to seek out a dark matter signal. Both of these projects are expected to have a similar schedule and scale to LZ, though LZ participants are aiming to achieve a higher sensitivity to dark matter than these other contenders.

Hall notes that while WIMPs are a primary target for LZ and its competitors, LZ’s explorations into uncharted territory could lead to a variety of surprising discoveries. “People are developing all sorts of models to explain dark matter,” he says. “LZ is optimized to observe a heavy WIMP, but it’s sensitive to some less-conventional scenarios as well. It can also search for other exotic particles and rare processes.”

LZ is designed so that if a dark matter particle collides with a xenon atom, it will produce a prompt flash of light followed by a second flash of light when the electrons produced in the liquid xenon chamber drift to its top. The light pulses, picked up by a series of about 500 light-amplifying tubes lining the massive tank—over four times more than were installed in LUX—will carry the telltale fingerprint of the particles that created them.

Illustration showing a dark matter particle interacting inside the LZ detector.

When a theorized dark matter particle known as a WIMP collides with a xenon atom, the xenon atom emits a flash of light (gold) and electrons. The flash of light is detected at the top and bottom of the liquid xenon chamber. An electric field pushes the electrons to the top of the chamber, where they generate a second flash of light (red).

SLAC National Accelerator Laboratory

Daniel Akerib, Thomas Shutt and Maria Elena Monzani are leading the LZ team at SLAC National Accelerator Laboratory. The SLAC effort includes a program to purify xenon for LZ by removing krypton, an element that is typically found in trace amounts with xenon after standard refinement processes. “We have already demonstrated the purification required for LZ and are now working on ways to further purify the xenon to extend the science reach of LZ,” Akerib says.

SLAC and Berkeley Lab collaborators are also developing and testing hand-woven wire grids that draw out electrical signals produced by particle interactions in the liquid xenon tank. Full-size prototypes will be operated later this year at a SLAC test platform. “These tests are important to ensure that the grids don't produce low-level electrical discharge when operated at high voltage, since the discharge could swamp a faint signal from dark matter,” Shutt says. 

Hugh Lippincott, a Wilson Fellow at Fermi National Accelerator Laboratory and the physics coordinator for the LZ collaboration, says, “Alongside the effort to get the detector built and taking data as fast as we can, we’re also building up our simulation and data analysis tools so that we can understand what we’ll see when the detector turns on. We want to be ready for physics as soon as the first flash of light appears in the xenon.” Fermilab is responsible for implementing key parts of the critical system that handles, purifies, and cools the xenon.

All of the components for LZ are painstakingly measured for naturally occurring radiation levels to account for possible false signals coming from the components themselves. A dust-filtering cleanroom is being prepared for LZ's assembly and a radon-reduction building is under construction at the South Dakota site—radon is a naturally occurring radioactive gas that could interfere with dark matter detection. These steps are necessary to remove background signals as much as possible.

The vessels that will surround the liquid xenon, which are the responsibility of the UK participants of the collaboration, are now being assembled in Italy. They will be built with the world's most ultra-pure titanium to further reduce background noise.

To ensure unwanted particles are not misread as dark matter signals, LZ's liquid xenon chamber will be surrounded by another liquid-filled tank and a separate array of photomultiplier tubes that can measure other particles and largely veto false signals. Brookhaven National Laboratory is handling the production of another very pure liquid, known as a scintillator fluid, that will go into this tank.

The cleanrooms will be in place by June, Gilchriese says, and preparation of the cavern where LZ will be housed is underway at Sanford Lab. Onsite assembly and installation will begin in 2018, he adds, and all of the xenon needed for the project has either already been delivered or is under contract. Xenon gas, which is costly to produce, is used in lighting, medical imaging and anesthesia, space-vehicle propulsion systems, and the electronics industry.

“South Dakota is proud to host the LZ experiment at SURF and to contribute 80 percent of the xenon for LZ,” says Mike Headley, executive director of the South Dakota Science and Technology Authority (SDSTA) that oversees the facility. “Our facility work is underway and we’re on track to support LZ’s timeline.”

UK scientists, who make up about one-quarter of the LZ collaboration, are contributing hardware for most subsystems. Henrique Araújo, from Imperial College London, says, “We are looking forward to seeing everything come together after a long period of design and planning.”

Kelly Hanzel, LZ project manager and a Berkeley Lab mechanical engineer, adds, “We have an excellent collaboration and team of engineers who are dedicated to the science and success of the project.” The latest approval milestone, she says, “is probably the most significant step so far,” as it provides for the purchase of most of the major components in LZ’s supporting systems.

Major support for LZ comes from the DOE Office of Science’s Office of High Energy Physics, the South Dakota Science and Technology Authority, the UK’s Science & Technology Facilities Council, and from collaboration members in South Korea and Portugal.

Editor's note: This article is based on a press release published by Berkeley Lab.

by Glenn Roberts Jr., Berkeley Lab at February 13, 2017 04:59 PM

CERN Bulletin

Exhibition

Le Point

Isabelle Gailland

From 20 February to 3 March 2017
CERN Meyrin, Main Building

La Diagonale - Isabelle Gailland.

At the start, there is always the same minuscule point, placed at the centre of the space that the canvas is. A replica of other points, condensed, aligned, isolated, scattered, will build, in their extension, the line.
These lines, crossed, curved, deflected, extended, will be the structure containing and separating the matter of the colours.
The rotation of each canvas while it is being painted offers unlimited access to non-form and to form.
The final point will be an opening onto different points of view of what the point and the line have become: a representation for the eye and the imagination.

Through painting, carried by a precise gesture, I reflect, I search, and I explore, within the minuteness of points, the unlimited possibilities of transformation.

For more information: staff.association@cern.ch | Tel.: 022 766 37 38

February 13, 2017 02:02 PM

CERN Bulletin

EVE and School

IMPORTANT DATES

Enrolments 2017-2018

Enrolments in the Nursery, the Kindergarten and the School for the school year 2017-2018 will take place on

6, 7 and 8 March 2017 from 10 am to 1 pm at EVE and School.

Registration forms will be available from Thursday 2nd March.

More information on the website: http://nurseryschool.web.cern.ch/.


Saturday 4 March 2017

Open day at EVE and School
of CERN Staff Association

Are you considering enrolling your child in the Children’s Day-Care Centre EVE and School of the CERN Staff Association?

If you work at CERN, then this event is for you: come visit the school and meet the Management

on Saturday 4 March 2017, from 10 am to 12 noon

We look forward to welcoming you and will be delighted to present our structure, its projects and premises to you, and answer all of your questions.

Sign up for one of the two sessions on Doodle via the link below before Wednesday 1st March 2017: http://doodle.com/poll/gbrz683wuvixk8as

February 13, 2017 02:02 PM

CERN Bulletin

Cine club

Wednesday 15 February 2017 at 20:00
CERN Council Chamber

Waking Life


Directed by Richard Linklater
USA, 2001, 99 minutes

This is the story of a boy who has a dream that he can float, but unless he holds on, he will drift away into the sky. Even when he is grown up, this idea recurs. After a strange accident, he walks through what may be a dream, flowing in and out of scenarios and encountering various characters. People he meets discuss science, philosophy and the life of dreaming and waking, and the protagonist gradually becomes alarmed that he cannot awake from this confusing dream adventure.

Original version English; French subtitles


Wednesday 22 February 2017 at 20:00
CERN Council Chamber

Paprika

Directed by Satoshi Kon
Japan, 2006, 90 minutes

When a machine that allows therapists to enter their patients' dreams is stolen, all Hell breaks loose. Only a young female therapist, Paprika, can stop it.

Original version Japanese; English subtitles


February 13, 2017 01:02 PM

February 12, 2017

Tommaso Dorigo - Scientificblogging

The Six-Month Cycle Of The Experimental Physicist
Every year, at about this time, the level of activity of physicists working in experimental collaborations at high-energy colliders and elsewhere increases dramatically. We are approaching the time of "winter conferences", so called in order to distinguish them from "summer conferences". 
During winter conferences, which take place between mid-February and the end of March in La Thuile, Lake Louise, and other fashionable places close to ski resorts, experimentalists gather to show off their latest results. The same ritual repeats during the summer in a few more varied locations around the world. 


by Tommaso Dorigo at February 12, 2017 04:39 PM

February 10, 2017

Symmetrybreaking - Fermilab/SLAC

Physics love poem challenge

Think you can do better than the Symmetry staff? Send us your poems!

Illustration of two particles wearing space helmets meeting in a cloud of dark matter

Has the love of your life fallen for particle physics? Let the Symmetry team help you reach their heart—with haiku.

On Valentine’s Day, we will publish a collection of physics-related love poems written by Symmetry staff and—if you are so inclined—by readers like you!

Send your poems (haiku format optional) to letters@symmetrymagazine.org by Monday, February 13, at 10 a.m. Central. If we really like yours, we may send you a prize.

For inspiration, consider the following:

Poem: A strong force binds us: / electromagnetic love. / You're fundamental.
Artwork by Sandbox Studio, Chicago
Poem: Like regular love, / But more massive -- Our love is / Supersymmetric
Artwork by Sandbox Studio, Chicago
Poem: A quantum of love / Or more? The principle here / Is uncertainty.
Artwork by Sandbox Studio, Chicago

by Kathryn Jepsen at February 10, 2017 07:39 PM

ZapperZ - Physics and Physicists

Politics And How It Affects US Physics Research
This is a very poignant article on how politics has impacted physics research in the US for the past decade or so. Reading this can be very disheartening, so be forewarned!

The one impact that I had mentioned a few years ago is also mentioned here, and it has to do not only with budget cuts themselves, but also with the devastating effect of a budget cut that arrives AFTER several months of a continuing resolution of the US budget.

I remember one year on December first, we had a faculty meeting where we heard funding levels would be up 10% across the board — a miraculous state of affairs after multiple years of flat-flat budgets (meaning no budgetary increases for cost of living adjustments — which ultimately means it’s a 3% cut). At our next faculty meeting on December fifteenth, we heard that it was going to be a flat-flat year — par for the course. On December nineteenth, we hear the news that there was a 30% cut in funding levels.

Now losing 30% of your budget is very bad in all circumstances, but you have to remember that the fiscal year begins on October first. The only thing you can do is fire people since all the funding is salaries and to do that legally takes about six weeks and with the holiday shutdown, that meant that this was a 50% cut in that year’s funding. There was some carry-forward and other budgetary manipulations, but 30% of the lab was lost, about three or four hundred if I recall. The lab tried to shield career scientists and engineers, but still many dozens were let go.

In a post from a few years ago, I showed the simple mathematics of why this effect is devastating for science research.
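To sketch that arithmetic here (my own back-of-the-envelope version, with the dates taken from the quote above): if a 30% cut to the annual budget is announced in mid-December, and layoffs only take effect around the start of February, four months into a fiscal year that began on October 1, then the full year's savings must come out of the remaining eight months:

\[ \frac{0.30 \times 12 \text{ months}}{8 \text{ months}} \approx 45\% , \]

which is how a nominal 30% cut becomes the roughly 50% cut in the remaining year's funding described above.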

Unfortunately, I don't see this changing anytime soon. As the author of this article wrote, science in general does not have a "constituent". No politician pays a political price for failing to fund science or for wanting science funding to be cut, unlike with cuts to social programs, the military, or other entitlements.

Regardless of who is in office or who is in control of the US Congress, it is business as usual.

Zz.

by ZapperZ (noreply@blogger.com) at February 10, 2017 07:38 PM

February 09, 2017

Lubos Motl - string vacua and pheno

Some Higgs impostor fake news
I want to start with a tweet by the famous particle physicist Lisa Randall that is two days old:


Through a digg tweet, she was referring to the article in Vice's Motherboard:
Why the Higgs Boson Found at the Large Hadron Collider Could Be an ‘Impostor’
by Farnia Fekri. All three key folks in this "story" are women: Lisa Randall, Farnia Fekri, and Usha Mallik, an Iowa experimental particle physicist who is the main heroine in Fekri's article. I need to emphasize it in order to be sure that the PC people won't accuse me of doing too little to promote the ties between science and the scientifically inferior gender. ;-)

Even though there were some other reactions among Lisa's followers – not really folks who follow particle physics in most cases – and I will discuss their reactions, my response was very similar to Lisa's. The title is fake news (and the body of the article contains some diluted solution of it). Well, it is a falsehood at least to the extent that the negation of the proposition is much more true – and it is a much more important truth, too. What's going on?




By late 2011, the LHC had already accumulated a sufficient number of collisions for physicists to analyze the events with two photons in the final state rather finely, and they found an excess of events that seemed to arise from the decay of a new boson of mass \(125\GeV\). In other words, the invariant constructed from the two photons' four-momenta \(p^\mu,q^\mu\)\[

(p_\mu+q_\mu)(p^\mu+q^\mu) = m^2

\] was apparently equal to \((125\GeV)^2\) in many more cases than for other values of \(m\). Because a Higgs boson had to be discovered for the Standard Model to be experimentally completed as a consistent theory and this mass was consistent with everything else, I told you that a Higgs of this mass was a sure thing, although some people weren't this sane yet. ;-)
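In code, this invariant is one line of special relativity; here is a minimal Haskell sketch of the standard formula (my own illustration, in units with \(c=1\)):

    -- A four-momentum (E, px, py, pz) in units with c = 1.
    data FourMomentum = FourMomentum { e, px, py, pz :: Double }

    -- Minkowski square of the summed four-momenta:
    -- (p + q)^2 = (E1 + E2)^2 - |vec(p1) + vec(p2)|^2 = m^2.
    invariantMassSq :: FourMomentum -> FourMomentum -> Double
    invariantMassSq p q =
      (e p + e q)^2 - (px p + px q)^2 - (py p + py q)^2 - (pz p + pz q)^2

    invariantMass :: FourMomentum -> FourMomentum -> Double
    invariantMass p q = sqrt (invariantMassSq p q)

For photon pairs coming from the decay of a \(125\GeV\) boson, invariantMass clusters near that mass; for random photon pairs it does not.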




Yes, the formality occurred on Independence Day in 2012: a sufficient number of collisions allowed both ATLAS and CMS to independently claim their 5-sigma discovery. (One detector had 4.8 or 4.9 sigma, but it's really an irrelevant historical coincidence now.) The Higgs was first discovered through the decays to \(\gamma\gamma\), i.e. two photons, or \(ZZ\), two Z-bosons, the heavier and equally neutral electroweak siblings of the photon.

Because the mass was determined and the final state included pairs of spin-one bosons, people could be sure about a new particle, its mass, and its interactions with those boson pairs. For the particle to be a Higgs boson, or "the Higgs boson", it needed to have many other interactions – including those with fermion pairs – equal to the theoretically predicted values.

As the experiments at the LHC continued, they have proven that many more interactions of the new particle are exactly as strong and have exactly the same properties, up to the error margins comparable to dozens of percent, as the Standard Model predicted. So doubts that it should be called a Higgs boson gradually evaporated.

So far it really looks like it is "the Higgs boson", i.e. a Higgs boson with the exact properties determined by the Standard Model that has no extensions, cousins, heavier or charged siblings, and other things. Experiments boldly yet humbly say that the world seems boring and the theorists are 50 years ahead of the experimenters because a flawless confirmation of theories written down 50 years ago is the best thing that the experimenters may do in the 2010s (naughty teenager decade or whatever is the right name for the teens) – while they will only be able to address some ideas we may already be sure about now around 2060. ;-)

This result, "everything seems compatible with the Standard Model", may be interpreted in various ways, i.e. as an argument against specific theories that try to replace the Higgs boson with something else or modify it heavily. Such "heavily alternative" theories that nevertheless want to be compatible with the \(125\GeV\) diphoton or \(ZZ\) signal are basically known the Higgs impostor models. A man is just making fun of us, is jumping inside the detectors, and pretends that he is Peter Higgs. ;-)

OK, I need to reverse this joke because 99% of the readers who also read the mainstream media would take it literally. No, dear brainwashed readers, the impostor isn't a man. It is a particle whose behavior resembles the behavior of the Higgs boson. It may have a wrong value of the spin or parity or be composite – but the properties may accidentally resemble the CP-even, spin-zero, elementary Higgs boson we assume that the new particle actually is.

I could give you links to numerous individual papers that analyzed the LHC collisions and determined that the Higgs impostor theories are basically dead. No one can emulate Higgs so accurately. So an important conclusion from the LHC experiments – that represent dozens of hours spent by hundreds of members of the ATLAS and CMS collaborations since 2011 or 2012 – is that
The Higgs Boson Found at the Large Hadron Collider Almost Certainly Cannot Be an ‘Impostor’
Now, look at the title in the Motherboard journal again:
Why the Higgs Boson Found at the Large Hadron Collider Could Be an ‘Impostor’
Haven't you seen it somewhere? Yes, this title is basically the exact negation of an important truth. It is not just a falsehood. It is a falsehood that tries to push the readers away from a proposition that is not only true but also important, a proposition that actually represents the results of lots of work that has taken place at the LHC.

Is it OK to write such falsehoods? Why has it happened?

Dr Usha Mallik, the Iowa professor, is working on a sub-detector that should be inserted somewhere into the LHC by 2023. This sub-detector could improve the ability to detect pairs of bottom quarks, which is hard, as I will mention again in a minute. And if the measurements of these decays of the Higgs to bottom quark pairs deviate from the Standard Model – and it is a big if – it could be evidence in favor of an impostor theory. But again, even if she succeeds, it's rather likely that the data will be compatible with the Standard Model, just like the data collected with the existing detectors so far.

The fake news title may be "justified" as a shortcut for a more honest title such as
a female experimenter in Iowa is working on some sub-detector that will almost certainly find nothing new even after 2023 when it's installed but the work is justified by the observation that if it could find some deviation from the Standard Model, it would be evidence in favor of a Higgs impostor theory, a type of theories that seem almost dead by now, however.
In other words, a lady is doing some boring stuff that won't lead anywhere. This summary doesn't sound so sexy so they have improved it and basically claimed that the Higgs that has been discovered "could" be an impostor even though the actual evidence that has been collected seems to imply exactly the opposite – that it cannot be an impostor, at least not a generic one.

Let me mention that the Higgs boson is predicted to decay to\[

h \to b\bar b

\] the pair of a bottom quark and its antiparticle in a substantial fraction of cases – roughly 58%, the dominant decay channel of a \(125\GeV\) Standard Model Higgs. However, pairs of bottom quarks are created not only from decaying Higgs bosons but also from the "unavoidable QCD mess" at the LHC – which collides protons, i.e. hadrons, messy QCD bound states. You may read about this decay channel e.g. in this CERN Courier article. The observation of the direct decay to the bottom quark-antiquark pair is hard. However, there exists a more complicated process:



It's called vector-boson fusion, or VBF.

Two quarks from the two colliding protons emit two massive electroweak gauge bosons (W-bosons or Z-bosons) that get merged into a Higgs boson, and that Higgs boson decays to the bottom quark-antiquark pair. What is nice about this Feynman diagram is that the final bottom quarks (the middle right part of the diagram) aren't directly connected to the quarks and gluons inside the LHC protons through quark and gluon (i.e. strongly interacting) propagators.

Instead, these bottom propagators are only connected to the quarks from the protons by electroweak propagators – by the Higgs and the electroweak spin-one bosons. This fact makes the central portion of the Feynman diagram clean – liberated from the pollution by the messy QCD effects. Consequently, the complicated QCD effects don't contribute so much to the background, and the signal may be rather easily separated from the background. And the data collected at the LHC are already extensive enough to bring us statistically significant evidence that the process depicted by the diagram is taking place.

The result is almost the same: when the signal is found, the interaction of the Higgs boson with the two bottom quarks is demonstrated. I discussed VBF in order to show that even if Usha Mallik succeeded in measuring the bottom quark decays of the Higgs using a new sub-detector, it probably wouldn't be a qualitatively new discovery that would bring the theorists new information about an interaction between elementary particles. The \(hb\bar b\) interaction may be probed differently, without the new sub-detectors.
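
To see why the difficulty is the background rather than the rate, a back-of-envelope estimate helps. The sketch below is an editorial addition with round, illustrative numbers – the cross section and branching ratio are approximate Standard Model values, the luminosity is a hypothetical dataset size – counting the produced Higgs-to-bottom decays via \(N = \sigma\cdot L\cdot {\rm BR}\):

# Rough count of h -> b bbar decays produced at the LHC: N = sigma * L * BR.
sigma_higgs_pb = 50.0   # ~ total Higgs production cross section at 13 TeV, in pb
lumi_fb = 36.0          # a hypothetical integrated luminosity, in fb^-1
br_bb = 0.58            # approximate SM branching ratio for h -> b bbar

n_decays = sigma_higgs_pb * 1000.0 * lumi_fb * br_bb   # 1 pb = 1000 fb
print(f"about {n_decays:,.0f} h -> bb decays produced")   # roughly a million

A million decays sounds like plenty, but the generic QCD production of bottom-quark pairs is many orders of magnitude more copious, which is why clean topologies such as VBF matter so much.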

Experimental particle physicists have a rather hard job. Even relatively modest, technical advances that almost certainly won't "shift the paradigm" require years of work and sometimes billions of dollars. Even if it were discovered that the Higgs found in 2012 is an impostor, it would be something that 99.9999% of mankind doesn't care about. For particle physicists, it could be a revolution. But this revolution is very unlikely to take place and it is arguably even less likely to take place because of Usha Mallik's work.

I know that these summaries don't sound as attractive as overhyped titles and they may be a worse starting point for Usha Mallik to get new grants etc. However, what I say is far more true and honest than what the Motherboard wrote.

It has unfortunately become a standard policy to write falsehoods and lies as titles of articles and even grant applications, and it's not just the very nasty people known as the journalists who are responsible for that. The scientists sometimes encourage such falsehoods themselves – and they often benefit from these lies even more than the journalists do. The ethical standards have dropped sufficiently that Usha Mallik and others think that "it's just OK to write any lie with keywords related to my research" when their work is being described by the media. Sorry, but it is not fine.

I have already mentioned that most of Lisa's followers don't have a clue. But it was one physics PhD student whose cluelessness was more visible:


He was clearly trying to chastise Lisa as if he were some moral authority. What the hell are you talking about, Christopher? Lisa hasn't tweeted any argument. She just made an observation that an article with a title that seems false to her – and to me and, more importantly, to most particle physicists, too – was published in a magazine.

The phrase "fake news" has been fashionable for several months. It's been used by the anti-Trump leftist media and it was often used inadequately. Many things labeled as "fake news" weren't fake at all – and on the contrary, it was many articles about "fake news" and especially about the origin of these alleged "fake news" (especially when the origin was claimed to be in Moscow) that were the actual "fake news".

One may say that the term "fake news" has backfired. It's been used against those who had used it for the first time. There have been various social processes like that. But in the end, "fake news" is just a currently fashionable synonym for an "untrue article in the media". There have always been untrue articles in the media and we have always needed some words to describe them. The term "fake news" does a better job than others – it's catchy and concise enough – and it's plausible that its lifetime will be longer than we might think.

Even if she hasn't posted any long blog post substantiating her views, there is nothing "unethical" about Lisa's usage of the term "fake news" for something that she considers a falsehood published by the media. Christopher, if you think that "ethics" means the rules to avoid several phrases that you have arbitrarily declared as taboos, such as the phrase "fake news", then your ethics isn't worth much.

by Luboš Motl (noreply@blogger.com) at February 09, 2017 12:01 PM

February 08, 2017

ZapperZ - Physics and Physicists

Gamma-Ray Imaging At Fukushima Plant
I mentioned earlier the muon tomography imaging that was done at the damaged reactor at Fukushima, and tried to highlight it as an example of an application that came out of high energy physics. This time, gamma-ray imaging spectroscopy was performed at the same location to pinpoint contamination sites.

But as with the muon tomography case, I want to highlight an important fact that many people might miss.

To address these issues of existing methods and visualize the Cs contamination, we have developed and employed an Electron-Tracking Compton Camera (ETCC). ETCCs were originally developed to observe nuclear gammas from celestial objects in MeV astronomy, but have been applied in wider  fields, including medical imaging and environmental monitoring.

So now we have an example of a device that was first developed for astronomical observation, but has found applications elsewhere.

This is extremely important to keep in mind. Experimental physics often pushes the boundaries of technology. We need better detectors, more sensitive devices, better handling of huge amounts of data very quickly, etc...etc. Hardware has to be developed to do all this, and the technology from these scientific experiments often trickles down to other applications. Look at all of medical technology, which practically owes everything to physics.

This impact from physics must be repeated over and over again to the public, because a significant majority of them are ignorant of it. It is why I will continue to pick out applications like this and highlight them in case they are missed.

Zz.

by ZapperZ (noreply@blogger.com) at February 08, 2017 02:23 PM

February 07, 2017

Symmetrybreaking - Fermilab/SLAC

What ended the dark ages of the universe?

New experiments will help astronomers uncover the sources that helped make the universe transparent.


When we peer through our telescopes into the cosmos, we can see stars and galaxies reaching back billions of years. This is possible only because the intergalactic medium we’re looking through is transparent. This was not always the case. 

Around 380,000 years after the Big Bang came recombination, when the hot mass of particles that made up the universe cooled enough for electrons to pair with protons, forming neutral hydrogen. This brought on the dark ages, during which the neutral gas in the intergalactic medium absorbed most of the high-energy photons around it, making the universe opaque to these wavelengths of light. 

Then, a few hundred million years later, new sources of energetic photons appeared, stripping hydrogen atoms of their electrons and returning them to their ionized state, ultimately allowing light to easily travel through the intergalactic medium. After this era of reionization was complete, the universe was fully transparent once again. 

Physicists are using a variety of methods to search for the sources of reionization, and finding them will provide insight into the first galaxies, the structure of the early universe and possibly even the properties of dark matter. 

Energetic sources

Current research suggests that most—if not all—of the ionizing photons came from the formation of the first stars and galaxies. “The reionization process is basically a competition between the rate at which stars produce ionizing radiation and the recombination rate in the intergalactic medium,” says Brant Robertson, a theoretical astrophysicist at the University of California, Santa Cruz. 

However, astronomers have yet to find these early galaxies, leaving room for other potential sources. The first stars alone may not have been enough. “There are undoubtedly other contributions, but we argue about how important those contributions are,” Robertson says. 

Active galactic nuclei, or AGN, could have been a source of reionization. AGN are luminous bodies, such as quasars, that are powered by black holes and release ultraviolet radiation and X-rays. However, scientists don’t yet know how abundant these objects were in the early universe. 

Another, more exotic possibility, is dark matter annihilation. In some models of dark matter, particles collide with each other, annihilating and producing matter and radiation. “If through this channel or something else we could find evidence for dark matter annihilation, that would be fantastically interesting, because it would immediately give you an estimate of the mass of the dark matter and how strongly it interacts with Standard Model particles,” says Tracy Slatyer, a particle physicist at MIT. 

Dark matter annihilation and AGN may have also indirectly aided reionization by providing extra heat to the universe. 

Probing the cosmic dawn

To test their theories of the course of cosmic reionization, astronomers are probing this epoch in the history of the universe using various methods including telescope observations, something called “21-centimeter cosmology” and probing the cosmic microwave background. 

Astronomers have yet to find evidence of the most likely source of reionization—the earliest stars—but they’re looking. 

By assessing the luminosity of the first galaxies, physicists could estimate how many ionizing photons they could have released. “[To date] there haven't been observations of the actual galaxies that are reionizing the universe—even Hubble can't deliver any of those—but the hope is that the James Webb Space Telescope can,” says John Wise, an astrophysicist at Georgia Tech. 

Some of the most telling information will come from 21-centimeter cosmology, so called because it studies 21-centimeter radio waves. Neutral hydrogen gives off radio waves at this wavelength; ionized hydrogen does not. Experiments such as the forthcoming Hydrogen Epoch of Reionization Array will detect neutral hydrogen using radio telescopes tuned to the corresponding frequency. This could provide clinching evidence about the sources of reionization.

“The basic idea with 21-centimeter cosmology is to not look at the galaxies themselves, but to try to make direct measurements of the intergalactic medium—the hydrogen between the galaxies,” says Adrian Liu, a Hubble fellow at UC Berkeley. “This actually lets you, in principle, directly see reionization, [by seeing how] it affects the intergalactic medium.”
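
A number makes the experimental challenge concrete. The 21-centimeter line is emitted at a rest frequency of about 1420.4 MHz, but the expansion of the universe stretches it, so the signal from the reionization era arrives at much lower frequencies. The short sketch below (an editorial illustration, not from the article) evaluates the standard redshift formula nu_obs = nu_rest / (1 + z):

REST_FREQUENCY_MHZ = 1420.4   # rest frequency of the 21-cm hydrogen line

def observed_frequency_mhz(z):
    """Frequency at which 21-cm radiation emitted at redshift z arrives today."""
    return REST_FREQUENCY_MHZ / (1.0 + z)

for z in (6, 8, 10):   # plausible redshifts for the era of reionization
    print(f"z = {z:2d}: {observed_frequency_mhz(z):6.1f} MHz")

# z =  6:  202.9 MHz
# z =  8:  157.8 MHz
# z = 10:  129.1 MHz -- the low radio band targeted by arrays like HERA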

By locating where the universe is ionized and where it is not, astronomers can create a map of how neutral hydrogen is distributed in the early universe. “If galaxies are doing it, then you would have ionized bubbles [around them]. If it is dark matter—dark matter is everywhere—so you're ionizing everywhere, rather than having bubbles of ionizing gas,” says Steven Furlanetto, a theoretical astrophysicist at the University of California, Los Angeles. 

Physicists can also learn about sources of reionization by studying the cosmic microwave background, or CMB. 

When an atom is ionized, the freed electron can scatter CMB photons, subtly disturbing the CMB’s pattern. Physicists can use this imprint to determine when reionization happened and put constraints on how many photons were needed to complete the process. 

For example, physicists reported last year that data released by the Planck satellite allowed them to lower the estimate of how much ionization was caused by sources other than galaxies. “Just because you could potentially explain it with star-forming galaxies, it doesn't mean that something else isn't lurking in the data,” Slatyer says. “We are hopefully going to get much better measurements of the reionization epoch using experiments like the 21-centimeter observations.” 

It is still too early to rule out alternative explanations for the sources of reionization, since astronomers are still at the beginning of uncovering this era in the history of our universe, Liu says. “I would say that one of the most fun things about working in this field is that we don't know exactly what happened.”

by Diana Kwon at February 07, 2017 06:00 PM

February 06, 2017

John Baez - Azimuth

Saving Climate Data (Part 5)


There’s a lot going on! Here’s a news roundup. I will separately talk about what the Azimuth Climate Data Backup Project is doing.

I’ll start with the bad news, and then go on to some good news.

Tweaking the EPA website

Scientists are keeping track of how Trump administration is changing the Environmental Protection Agency website, with before-and-after photos, and analysis:

• Brian Kahn, Behold the “tweaks” Trump has made to the EPA website (so far), Natural Resources Defense Council blog, 3 February 2017.

There’s more about “adaptation” to climate change, and less about how it’s caused by carbon emissions.

All of this would be nothing compared to the new bill to eliminate the EPA, or Myron Ebell’s plan to fire most of the people working there:

• Joe Davidson, Trump transition leader’s goal is two-thirds cut in EPA employees, Washington Post, 30 January 2017.

If you want to keep track of this battle, I recommend getting a 30-day free subscription to this online magazine:

InsideEPA.com.

Taking animal welfare data offline

The Trump team is taking animal-welfare data offline. The US Department of Agriculture will no longer make lab inspection results and violations publicly available, citing privacy concerns:

• Sara Reardon, US government takes animal-welfare data offline, Nature Breaking News, 3 February 2017.

Restricting access to geospatial data

A new bill would prevent the US government from providing access to geospatial data if it helps people understand housing discrimination. It goes like this:

Notwithstanding any other provision of law, no Federal funds may be used to design, build, maintain, utilize, or provide access to a Federal database of geospatial information on community racial disparities or disparities in access to affordable housing.

For more on this bill, and the important ways in which such data has been used, see:

• Abraham Gutman, Scott Burris, and the Temple University Center for Public Health Law Research, Where will data take the Trump administration on housing?, Philly.com, 1 February 2017.

The EDGI fights back

The Environmental Data and Governance Initiative or EDGI is working to archive public environmental data. They’re helping coordinate data rescue events. You can attend one and have fun eating pizza with cool people while saving data:

• 3 February 2017, Portland
• 4 February 2017, New York City
• 10-11 February 2017, Austin, Texas
• 11 February 2017, U. C. Berkeley, California
• 18 February 2017, MIT, Cambridge, Massachusetts
• 18 February 2017, Haverford, Connecticut
• 18-19 February 2017, Washington DC
• 26 February 2017, Twin Cities, Minnesota

Or, work with EDGI to organize your own data rescue event! They provide some online tools to help download data.

I know there will also be another event at UCLA, so the above list is not complete, and it will probably change and grow over time. Keep up-to-date at their site:

Environmental Data and Governance Initiative.

Scientists fight back

The pushback is so big it’s hard to list it all! For now I’ll just quote some of this article:

• Tabitha Powledge, The gag reflex: Trump info shutdowns at US science agencies, especially EPA, 27 January 2017.

THE PUSHBACK FROM SCIENCE HAS BEGUN

Predictably, counter-tweets claiming to come from rebellious employees at the EPA, the Forest Service, the USDA, and NASA sprang up immediately. At The Verge, Rich McCormick says there’s reason to believe these claims may be genuine, although none has yet been verified. A lovely head on this post: “On the internet, nobody knows if you’re a National Park.”

At Hit&Run, Ronald Bailey provides handles for several of these alt tweet streams, which he calls “the revolt of the permanent government.” (That’s a compliment.)

Bailey argues, “with exception perhaps of some minor amount of national security intelligence, there is no good reason that any information, data, studies, and reports that federal agencies produce should be kept from the public and press. In any case, I will be following the Alt_Bureaucracy feeds for a while.”

At NeuroDojo, Zen Faulkes posted on how to demand that scientific societies show some backbone. “Ask yourself: ‘Have my professional societies done anything more political than say, “Please don’t cut funding”?’ Will they fight?” he asked.

Scientists associated with the group 500 Women Scientists donned lab coats and marched in DC as part of the Women’s March on Washington the day after Trump’s Inauguration, Robinson Meyer reported at the Atlantic. A wildlife ecologist from North Carolina told Meyer, “I just can’t believe we’re having to yell, ‘Science is real.’”

Taking a cue from how the Women’s March did its social media organizing, other scientists who want to set up a Washington march of their own have put together a closed Facebook group that claims more than 600,000 members, Kate Sheridan writes at STAT.

The #ScienceMarch Twitter feed says a date for the march will be posted in a few days. [The march will be on 22 April 2017.] The group also plans to release tools to help people interested in local marches coordinate their efforts and avoid duplication.

At The Atlantic, Ed Yong describes the political action committee 314Action. (314=the first three digits of pi.)

Among other political activities, it is holding a webinar on Pi Day—March 14—to explain to scientists how to run for office. Yong calls 314Action the science version of Emily’s List, which helps pro-choice candidates run for office. 314Action says it is ready to connect potential candidate scientists with mentors—and donors.

Other groups may be willing to step in when government agencies wimp out. A few days before the Inauguration, the Centers for Disease Control and Prevention abruptly and with no explanation cancelled a 3-day meeting on the health effects of climate change scheduled for February. Scientists told Ars Technica’s Beth Mole that CDC has a history of running away from politicized issues.

One of the conference organizers from the American Public Health Association was quoted as saying nobody told the organizers to cancel.

I believe it. Just one more example of the chilling effect on global warming. In politics, once the Dear Leader’s wishes are known, some hirelings will rush to gratify them without being asked.

The APHA guy said they simply wanted to head off a potential last-minute cancellation. Yeah, I guess an anticipatory pre-cancellation would do that.

But then—Al Gore to the rescue! He is joining with a number of health groups—including the American Public Health Association—to hold a one-day meeting on the topic Feb 16 at the Carter Center in Atlanta, CDC’s home base. Vox’s Julia Belluz reports that it is not clear whether CDC officials will be part of the Gore rescue event.

The Sierra Club fights back

The Sierra Club, of which I’m a proud member, is using the Freedom of Information Act or FOIA to battle or at least slow the deletion of government databases. They wisely started even before Trump took power:

• Jennifer A Dlouhy, Fearing Trump data purge, environmentalists push to get records, BloombergMarkets, 13 January 2017.

Here’s how the strategy works:

U.S. government scientists frantically copying climate data they fear will disappear under the Trump administration may get extra time to safeguard the information, courtesy of a novel legal bid by the Sierra Club.

The environmental group is turning to open records requests to protect the resources and keep them from being deleted or made inaccessible, beginning with information housed at the Environmental Protection Agency and the Department of Energy. On Thursday [January 12th], the organization filed Freedom of Information Act requests asking those agencies to turn over a slew of records, including data on greenhouse gas emissions, traditional air pollution and power plants.

The rationale is simple: Federal laws and regulations generally block government agencies from destroying files that are being considered for release. Even if the Sierra Club’s FOIA requests are later rejected, the record-seeking alone could prevent files from being zapped quickly. And if the records are released, they could be stored independently on non-government computer servers, accessible even if other versions go offline.


by John Baez at February 06, 2017 02:15 AM

February 04, 2017

Tommaso Dorigo - Scientificblogging

LHCb Finds Suppressed Lambda_B Decay
The so-called Lambda_b baryon is a well-studied particle nowadays, with several experiments having measured its main production properties and decay modes in the course of the past two decades. It is a particle made of quarks: three of them, like the proton and the neutron. Being electrically neutral, it is easily likened to the neutron, which has a quark composition "udd". In the space of quark configurations, the Lambda_b is in fact obtained by exchanging a down-type quark of the neutron with a bottom quark, getting the "udb" combination.

read more

by Tommaso Dorigo at February 04, 2017 03:41 PM

February 02, 2017

Symmetrybreaking - Fermilab/SLAC

Road trip science

The Escaramujo Project delivered detector technology by van to eight universities in Latin America.


Professors and students of physics in Latin America have much to offer the world of physics. But for those interested in designing and building the complex experiments needed to gather physics data, hands-on experimentation in much of Central and South America has been lacking. It was that gap that something called the Escaramujo Project aimed to fill by bringing basic components to students who could then assemble them into fully functional detectors.

“It was something completely new,” says Luis Rodolfo Pérez Sánchez, a student at the Universidad Autónoma de Chiapas, Mexico, who is writing his thesis based on measurements taken with the detector. “Until now, there was no device at the university where one could work directly with their hands.”

Each group of students built a detector, which they used to measure cosmic-ray muons (particles coming from space). But they did more than that. They used a Linux open-source computer operating system for the first time, calibrated the equipment, plotted data using the software ROOT and became part of an international community. The students used their detectors to participate in International Cosmic Day, an annual event where scientists around the world measure cosmic rays and share their data.

The Escaramujo Project is led by Federico Izraelevitch, who worked at Fermi National Accelerator Laboratory near Chicago during its planning stages and is now a professor at Instituto Dan Beninson in Argentina. During the project, Izraelevitch and his wife, Eleonora, traveled with three canine companions on a road trip from Chicago to Buenos Aires, stopping to teach workshops in Mexico, Guatemala, Costa Rica, Colombia, Ecuador, Peru and Bolivia. Many nights found them in spots with no tourist lodging or even places to camp with their van.

“People received us with a smile and gave us a cup of coffee, or food, or whatever we needed at the time,” Izraelevitch says. “People are amazing.” 


Federico and Eleonora Izraelevitch traveled by van from Chicago to Buenos Aires.


In many locations, students took their detector on a field trip shortly after assembling it. The group in Pasto, Colombia, turned theirs into a muon telescope and carted it to the nearby Galeras volcano, where a kind local lent them a power supply to get things running. They studied an effect of the volcano: muon attenuation, or weakening of the muon signal. Students in La Paz, Bolivia, placed the detector in the back of a van and drove it to a lofty observatory, measuring how the muon flux changed with altitude. 

The Escaramujo Project forged direct connections between students at eight universities, who can now use their detectors to collect and share data with other Escaramujo participants.

“This state is one of the poorest states in Mexico,” says Karen Caballero, a professor at UNACH who brought the Escaramujo Project to the university. “The students in Chiapas don’t have the opportunity to participate in international initiatives, so this has been very, very important for them.”

Caballero says there are plans for the full Escaramujo cohort to use their detectors to calibrate expansions of the Latin American Giant Observatory, used for an experiment that began in 2005. LAGO uses multiple sites throughout Central and South America to study gamma-ray bursts, some of the most powerful explosions in the universe, as well as space weather.

While the workshops for the program wrapped up in early 2016, Izraelevitch says he hopes to visit more universities and lead more workshops in the future.

“Hopefully all these sites can continue growing and working as a collaboration in the future,” he says. “These people are capable and have all the knowledge and enthusiasm for being part of a major, first-class experiment.”


Students at the Universidad Autónoma de Chiapas in Mexico built a detector with the Escaramujo Project.


by Lauren Biron at February 02, 2017 04:39 PM

Clifford V. Johnson - Asymptotia

Scribbles (2)

In case you wondered how those scribbles* turned out...

-cvj

(*For the book.)
Click to continue reading this post

The post Scribbles (2) appeared first on Asymptotia.

by Clifford at February 02, 2017 03:50 PM

February 01, 2017

Lubos Motl - string vacua and pheno

Anomaly! by Tommaso Dorigo, a review
A guest blog: a review by Tristan du Pree of CMS at CERN


Anomaly! Collider Physics and the Quest for New Phenomena at Fermilab
World Scientific Publishing

Book by Tommaso Dorigo, 2016


To the outside world, Italian particle physicist Tommaso Dorigo is best known for his blogs, not afraid to express his personal thoughts about particle physics research. His new book, ‘Anomaly!’, describes this research from the inside – it covers the parts of the field that one normally does not hear about: the history, the sociology, and everything usually happening behind the screens.

Dorigo

In the world of Big Science, nowadays performed at CERN in collaborations of thousands of particle physicists, press releases have become as common as reconstruction algorithms. Of course, any personal opinion will be carefully hidden and controversial statements should be avoided at all costs. Dorigo’s new book, about the research at the American Tevatron collider at the end of the 20th century, is his latest piece that goes against this trend.

Within the CMS Collaboration at CERN, where the INFN physicist performs his research nowadays, Dorigo is mostly known as a statistics expert. [And he will be the February 2017 CMS voice, comment by LM.] Professionally, I have interacted with Dorigo as a reviewer of some of my own searches at CMS for physics beyond the Standard Model, where I encountered him as a thorough, sometimes pedantic, but usually very efficient reviewer. Unsurprisingly, his role in this book is also one of a reviewer – a young researcher with the important task of validating the claims of one of the CDF collaborators.

But this book is not about Tommaso himself. Like the extensive lists of authors of experimental particle physics papers, the main character in this book is the CDF Collaboration.

CDF

The CDF experiment was located at one of the interaction points of the Tevatron collider at Fermilab, near Chicago. This book illustrates how the CDF collaboration – starting as a collaboration of the order of a hundred people in the mid-eighties – successfully constructed a silicon tracker, advanced the identification of heavy-flavor quarks, and discovered the top quark. In many senses, they did pioneering work for the research that my thousands of colleagues and I are currently doing at the LHC at CERN.



The discovery at the Tevatron of the top quark – the sixth, and surprisingly heavy, quark – was one of the major discoveries of CDF, confirming the Standard Model of particle physics. While reading about the 1990s research in the US, the similarity of the top quark discovery to the later discovery of the Higgs boson at CERN is easy to see. Some people were quickly convinced it was a real effect, whereas others needed more evidence. In the end, the first group turned out to be right, but the latter group wasn’t wrong either.


The internal reviews of such anomalous events, events that could possibly be the first signs of a new discovery, lead to lively internal discussions. Those are the main subject of this book.

BSM

Whereas confirming the Standard Model in further detail is a great achievement, truly finding something beyond it would be the ultimate scientific jackpot. And apparently, people inside CDF were convinced some unexpected events were the first hints of such new physics. In ‘Anomaly!’, several such events pass by: anomalous Higgs-like events, events with “superjets”, possible hints of bottom squarks, dimuon bumps, etcetera... Even though we now know these claims were all premature, these discussions are very entertaining and useful to read.

Do we really understand all detector effects? Does it harm our image if we later have to retract a claim? Will the media misrepresent our claims, which could possibly negatively influence our funding? Should experimentalists just provide data, or should we go one step further and provide interpretations? Will people misinterpret our interpretation as a claim for discovery? Is it unscientific to keep a result for ourselves? These are just some of the questions being discussed.

All of these are valid questions, leading to a multidimensional discussion with opposite conclusions. Agreeing how (if at all) to present the results is a topic of various conversations in this book. These time-consuming, but very important, discussions are the reason why experimental particle physicists spend so many days, evenings, and weekends in meetings, phone calls, and video conferences.

3 sigma

In the end, collider experimentalists are a funny bunch. As our collaborations have become so large, we will never be able to win a Nobel Prize for our individual work. Also, constructing a gigantic detector requires close collaboration with various colleagues. But still, curiosity and pride push us to be the first to have the histogram with new events on our own computer.

And whenever a 3 sigma’ish excess appears, some people will claim this might be something not understood, whereas others are convinced that it must be a statistical fluctuation. And one person (usually the first investigator) will claim with certainty this to be the first signs of new physics. Optimists and pessimists are everywhere, and most of the time both sides do have valid arguments.

With so many clever (and stubborn) people, simultaneously collaborating to develop a good experiment while competing on making a first discovery, converging on the final result, and its presentation, can be tough or even impossible. Why, in some cases, this can take so much time – that’s the one thing that Dorigo makes totally clear from this book.

Theorists

What makes the book so interesting and special is the way the discussions are described.

The description of the dialogues and the researchers are often recognizable and hilarious. For example, we hear Michelangelo Mangano (now CERN theorist) in a restaurant asking “Are you crazy?” (while almost spilling his beer on his freshly ironed Yves Saint Laurent shirt) and we see Claudio Campagnari (currently CMS Susy convener) saying in a meeting: “As long as I’m alive, this stuff will not make it out of here!”. In a collaboration with hundreds (and nowadays thousands) of such people with strong opinions, finding agreement will often take time.

And this brings me to the audience that could most benefit from reading this book: theorists! How often do I not hear, at conferences, on social media, and on blogs, the question from theorists wondering why the experiments don’t publish faster. Just to quote a very recent post from Luboš at The Reference Frame:
“Don't you find it a bit surprising that now, in early 2017, we are still getting preprints based on the evaluation of the 2012 LHC data? The year was called "now" some five years ago. Are they hiding something? And when they complete an analysis like that, why don't they directly publish the same analysis including all the 2015+2016 = 4031 data as well? Surely the analysis of the same channel applied to the newer data is basically the same work.”
Well, this book gives the answer! Sometimes data is not yet sufficiently understood to become public. Often, further studies and crosschecks are needed, for example if the research reveals differences between different periods of data taking. And, finally, despite the large amount of automation, all this work is in the end done by humans, and the collaboration has to be convinced of the soundness of the obtained result.

Read this book once, and you’ll never again be surprised why some experimental particle physics publications appear, from the outside, to take so long.

Comment

It’s a great read, but if I had to offer one comment on the book, it’s that parts are written for the non-expert reader. I personally think that the book is mostly readable and interesting for professionals (in experiment and theory) and possibly also for an additional audience with a relatively large knowledge of the field. But, in the end, it is a niche market (which also explains the price of the book).

Dorigo has tried to make the book readable for a wider audience, and he is certainly great in trying to explain fundamental concepts in a clear way, using original analogies. His two-page-long slow-motion description of a ppbar collision generating an “anomalous” dielectron-diphoton-MET (missing transverse energy) event was actually quite amazing! But I doubt it’s sufficient to clarify our research to the non-expert reader. To me, those descriptions could’ve been dropped, or maybe formatted differently, to distinguish them from the main story line.

Anomaly

But all in all, it’s certainly a unique inside view in the history of particle physics.

This book really makes you experience the research atmosphere during the nineties at Fermilab. The situation around the Fermilab top discovery reminds one of the CERN Higgs discovery. The Tevatron dilepton-diphoton discussion reminds one of the excited discussion around the LHC diphoton events in 2015. And the discussions about internal results that have never been published… well, you know…



After finishing this description focusing on CDF Run-1, I was immediately curious to read more about the stories of CDF Run-2. Not just the measurements, but the everyday research inside a large experimental collaboration.

Beyond

This week, I will attend CMS Week for the last time. After >6 years in this collaboration, I move to the ATLAS experiment. I will take all the stories of eventful internal meetings with me, as I also did in 2010, after >4 years in the LHCb collaboration. I won’t write a book about it now, this would certainly be too early. Maybe someone else will do it, in like twenty years from now.

If you want to know how experimental particle physics is really done behind the screens, read this book! In all those years, the technology has advanced, as has our knowledge of particle physics, but the sociology is still very much the same.

Tristan du Pree


PS: Thanks to Orange, my provider in France, for recently cutting my telephone, tv, and internet (for no reason). It allowed me to quickly read this paper book without modern disturbance.

by Luboš Motl (noreply@blogger.com) at February 01, 2017 02:28 PM

Tommaso Dorigo - Scientificblogging

A Slow-Motion Particle Collision In Anomaly!
Lubos Motl published the other day in his crazily active blog a very nice new review of "Anomaly! Collider Physics and the Quest for New Phenomena at Fermilab". The review is authored by Tristan du Pree, a colleague of mine who worked in CMS until very recently – he has now moved to a new job and switched to ATLAS! (BTW thanks Lubos, and thanks Tristan!)
I liked Tristan's commentary on my work a lot, and since he mentions in quite appreciative terms the slow-motion description of a peculiar collision I offer in my book, I figured I'd paste that below.

read more

by Tommaso Dorigo at February 01, 2017 08:57 AM

John Baez - Azimuth

Information Geometry (Part 16)

This week I’m giving a talk on biology and information:

• John Baez, Biology as information dynamics, talk for Biological Complexity: Can it be Quantified?, a workshop at the Beyond Center, 2 February 2017.

While preparing this talk, I discovered a cool fact. I doubt it’s new, but I haven’t exactly seen it elsewhere. I came up with it while trying to give a precise and general statement of ‘Fisher’s fundamental theorem of natural selection’. I won’t start by explaining that theorem, since my version looks rather different than Fisher’s, and I came up with mine precisely because I had trouble understanding his. I’ll say a bit more about this at the end.

Here’s my version:

The square of the rate at which a population learns information is the variance of its fitness.

This is a nice advertisement for the virtues of diversity: more variance means faster learning. But it requires some explanation!

The setup

Let’s start by assuming we have n different kinds of self-replicating entities with populations P_1, \dots, P_n. As usual, these could be all sorts of things:

• molecules of different chemicals
• organisms belonging to different species
• genes of different alleles
• restaurants belonging to different chains
• people with different beliefs
• game-players with different strategies
• etc.

I’ll call them replicators of different species.

Let’s suppose each population P_i is a function of time that grows at a rate equal to this population times its ‘fitness’. I explained the resulting equation back in Part 9, but it’s pretty simple:

\displaystyle{ \frac{d}{d t} P_i(t) = f_i(P_1(t), \dots, P_n(t)) \, P_i(t)   }

Here f_i is a completely arbitrary smooth function of all the populations! We call it the fitness of the ith species.

This equation is important, so we want a short way to write it. I’ll often write f_i(P_1(t), \dots, P_n(t)) simply as f_i, and P_i(t) simply as P_i. With these abbreviations, which any red-blooded physicist would take for granted, our equation becomes simply this:

\displaystyle{ \frac{dP_i}{d t}  = f_i \, P_i   }

Next, let p_i(t) be the probability that a randomly chosen organism is of the ith species:

\displaystyle{ p_i(t) = \frac{P_i(t)}{\sum_j P_j(t)} }

Starting from our equation describing how the populations evolve, we can figure out how these probabilities evolve. The answer is called the replicator equation:

\displaystyle{ \frac{d}{d t} p_i(t)  = ( f_i - \langle f \rangle ) \, p_i(t) }

Here \langle f \rangle is the average fitness of all the replicators, or mean fitness:

\displaystyle{ \langle f \rangle = \sum_j f_j(P_1(t), \dots, P_n(t)) \, p_j(t)  }

In what follows I’ll abbreviate the replicator equation as follows:

\displaystyle{ \frac{dp_i}{d t}  = ( f_i - \langle f \rangle ) \, p_i }

The result

Okay, now let’s figure out how fast the probability distribution

p(t) = (p_1(t), \dots, p_n(t))

changes with time. For this we need to choose a way to measure the length of the vector

\displaystyle{  \frac{dp}{dt} = (\frac{d}{dt} p_1(t), \dots, \frac{d}{dt} p_n(t)) }

And here information geometry comes to the rescue! We can use the Fisher information metric, which is a Riemannian metric on the space of probability distributions.

I’ve talked about the Fisher information metric in many ways in this series. The most important fact is that as a probability distribution p(t) changes with time, its speed

\displaystyle{  \left\| \frac{dp}{dt} \right\|}

as measured using the Fisher information metric can be seen as the rate at which information is learned. I’ll explain that later. Right now I just want a simple formula for the Fisher information metric. Suppose v and w are two tangent vectors to the point p in the space of probability distributions. Then the Fisher information metric is given as follows:

\displaystyle{ \langle v, w \rangle = \sum_i \frac{1}{p_i} \, v_i w_i }

Using this we can calculate the speed at which p(t) moves when it obeys the replicator equation. Actually the square of the speed is simpler:

\begin{array}{ccl}  \displaystyle{ \left\| \frac{dp}{dt}  \right\|^2 } &=& \displaystyle{ \sum_i \frac{1}{p_i} \left( \frac{dp_i}{dt} \right)^2 } \\ \\  &=& \displaystyle{ \sum_i \frac{1}{p_i} \left( ( f_i - \langle f \rangle ) \, p_i \right)^2 } \\ \\  &=& \displaystyle{ \sum_i  ( f_i - \langle f \rangle )^2 p_i }   \end{array}

The answer has a nice meaning, too! It’s just the variance of the fitness: that is, the square of its standard deviation.

So, if you’re willing to buy my claim that the speed \|dp/dt\| is the rate at which our population learns new information, then we’ve seen that the square of the rate at which a population learns information is the variance of its fitness!
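
Here is a quick numerical check of that claim – a sketch added editorially, not part of the original post, with completely made-up fitness functions (any smooth choice works):

import numpy as np

# Hypothetical smooth fitness functions f_i(P_1,...,P_n) -- pure assumptions.
def fitness(P):
    return np.array([1.0 + 0.1*P[1], 0.8 - 0.05*P[0], 1.2 - 0.1*P[2]])

P = np.array([1.0, 2.0, 3.0])   # initial populations P_i
dt = 1e-4
for _ in range(10000):          # crude Euler integration of dP_i/dt = f_i P_i
    P = P + dt * fitness(P) * P

f = fitness(P)
p = P / P.sum()                 # probabilities p_i
fbar = np.dot(f, p)             # mean fitness <f>
dpdt = (f - fbar) * p           # replicator equation: dp_i/dt

print(np.sum(dpdt**2 / p))        # squared Fisher speed ||dp/dt||^2
print(np.dot((f - fbar)**2, p))   # variance of the fitness -- the same number

The agreement is exact up to floating-point rounding, because the two expressions become algebraically identical once dp_i/dt is substituted from the replicator equation.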

Fisher’s fundamental theorem

Now, how is this related to Fisher’s fundamental theorem of natural selection? First of all, what is Fisher’s fundamental theorem? Here’s what Wikipedia says about it:

It uses some mathematical notation but is not a theorem in the mathematical sense.

It states:

“The rate of increase in fitness of any organism at any time is equal to its genetic variance in fitness at that time.”

Or in more modern terminology:

“The rate of increase in the mean fitness of any organism at any time ascribable to natural selection acting through changes in gene frequencies is exactly equal to its genetic variance in fitness at that time”.

Largely as a result of Fisher’s feud with the American geneticist Sewall Wright about adaptive landscapes, the theorem was widely misunderstood to mean that the average fitness of a population would always increase, even though models showed this not to be the case. In 1972, George R. Price showed that Fisher’s theorem was indeed correct (and that Fisher’s proof was also correct, given a typo or two), but did not find it to be of great significance. The sophistication that Price pointed out, and that had made understanding difficult, is that the theorem gives a formula for part of the change in gene frequency, and not for all of it. This is a part that can be said to be due to natural selection.

Price’s paper is here:

• George R. Price, Fisher’s ‘fundamental theorem’ made clear, Annals of Human Genetics 36 (1972), 129–140.

I don’t find it very clear, perhaps because I didn’t spend enough time on it. But I think I get the idea.

My result is a theorem in the mathematical sense, though quite an easy one. I assume a population distribution evolves according to the replicator equation and derive an equation whose right-hand side matches that of Fisher’s original equation: the variance of the fitness.

But my left-hand side is different: it’s the square of the speed of the corresponding probability distribution, where speed is measured using the ‘Fisher information metric’. This metric was discovered by the same guy, Ronald Fisher, but I don’t think he used it in his work on the fundamental theorem!

Something a bit similar to my statement appears as Theorem 2 of this paper:

• Marc Harper, Information geometry and evolutionary game theory.

and for that theorem he cites:

• Josef Hofbauer and Karl Sigmund, Evolutionary Games and Population Dynamics, Cambridge University Press, Cambridge, 1998.

However, his Theorem 2 really concerns the rate of increase of fitness, like Fisher’s fundamental theorem. Moreover, he assumes that the probability distribution p(t) flows along the gradient of a function, and I’m not assuming that. Indeed, my version applies to situations where the probability distribution moves round and round in periodic orbits!

Relative information and the Fisher information metric

The key to generalizing Fisher’s fundamental theorem is thus to focus on the speed at which p(t) moves, rather than the increase in fitness. Why do I call this speed the ‘rate at which the population learns information’? It’s because we’re measuring this speed using the Fisher information metric, which is closely connected to relative information, also known as relative entropy or the Kullback–Leibler divergence.

I explained this back in Part 7, but that explanation seems hopelessly technical to me now, so here’s a faster one, which I created while preparing my talk.

The information of a probability distribution q relative to a probability distribution p is

\displaystyle{     I(q,p) = \sum_{i =1}^n q_i \log\left(\frac{q_i}{p_i}\right) }

It says how much information you learn if you start with a hypothesis p saying that the probability of the ith situation was p_i, and then update this to a new hypothesis q.

Now suppose you have a hypothesis that’s changing with time in a smooth way, given by a time-dependent probability p(t). Then a calculation shows that

\displaystyle{ \left.\frac{d}{dt} I(p(t),p(t_0)) \right|_{t = t_0} = 0 }

for all times t_0. This seems paradoxical at first. I like to jokingly put it this way:

To first order, you’re never learning anything.

However, as long as the velocity \frac{d}{dt}p(t_0) is nonzero, we have

\displaystyle{ \left.\frac{d^2}{dt^2} I(p(t),p(t_0)) \right|_{t = t_0} > 0 }

so we can say

To second order, you’re always learning something… unless your opinions are fixed.

This lets us define a ‘rate of learning’—that is, a ‘speed’ at which the probability distribution p(t) moves. And this is precisely the speed given by the Fisher information metric!

In other words:

\displaystyle{ \left\|\frac{dp}{dt}(t_0)\right\|^2 =  \left.\frac{d^2}{dt^2} I(p(t),p(t_0)) \right|_{t = t_0} }

where the length is given by the Fisher information metric. Indeed, this formula can be used to define the Fisher information metric. From this definition we can easily work out the concrete formula I gave earlier.
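
That defining formula is also easy to verify by finite differences. The sketch below (an editorial addition, using an arbitrary made-up path p(t) through the probability simplex) compares the second derivative of the relative information with the squared Fisher speed:

import numpy as np

def relative_information(q, p):
    """I(q,p) = sum_i q_i log(q_i / p_i)."""
    return np.sum(q * np.log(q / p))

def path(t):
    """An arbitrary smooth curve of probability distributions."""
    w = np.array([1.0 + 0.3*np.sin(t), 1.0, 1.0 + 0.2*t])
    return w / w.sum()

t0, h = 0.5, 1e-4
p0 = path(t0)

# Central second difference of I(p(t), p(t0)) at t = t0; the middle term
# vanishes because I(p0, p0) = 0:
d2I = (relative_information(path(t0 + h), p0)
       + relative_information(path(t0 - h), p0)) / h**2

# Squared speed in the Fisher information metric, sum_i (dp_i/dt)^2 / p_i:
dpdt = (path(t0 + h) - path(t0 - h)) / (2*h)
print(d2I, np.sum(dpdt**2 / p0))   # the two values agree to numerical accuracy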

In summary: as a probability distribution moves around, the relative information between the new probability distribution and the original one grows approximately as the square of time, not linearly. So, to talk about a ‘rate at which information is learned’, we need to use the above formula, involving a second time derivative. This rate is just the speed at which the probability distribution moves, measured using the Fisher information metric. And when we have a probability distribution describing how many replicators are of different species, and it’s evolving according to the replicator equation, this speed is also just the variance of the fitness!


by John Baez at February 01, 2017 01:18 AM

January 31, 2017

John Baez - Azimuth

Biology as Information Dynamics

This is my talk for the workshop Biological Complexity: Can It Be Quantified?

• John Baez, Biology as information dynamics, 2 February 2017.

Abstract. If biology is the study of self-replicating entities, and we want to understand the role of information, it makes sense to see how information theory is connected to the ‘replicator equation’—a simple model of population dynamics for self-replicating entities. The relevant concept of information turns out to be the information of one probability distribution relative to another, also known as the Kullback–Leibler divergence. Using this we can get a new outlook on free energy, see evolution as a learning process, and give a clean general formulation of Fisher’s fundamental theorem of natural selection.

For more, read:

• Marc Harper, The replicator equation as an inference dynamic.

• Marc Harper, Information geometry and evolutionary game theory.

• Barry Sinervo and Curt M. Lively, The rock-paper-scissors game and the evolution of alternative male strategies, Nature 380 (1996), 240–243.

• John Baez, Diversity, entropy and thermodynamics.

• John Baez, Information geometry.

The last reference contains proofs of the equations shown in red in my slides.
In particular, Part 16 contains a proof of my updated version of Fisher’s fundamental theorem.


by John Baez at January 31, 2017 05:17 AM

January 30, 2017

Symmetrybreaking - Fermilab/SLAC

Sign of a long-sought asymmetry

A result from the LHCb experiment shows what could be the first evidence of matter and antimatter baryons behaving differently.


A new result from the LHCb experiment at CERN could help explain why our universe is made of matter and not antimatter.

Matter particles, such as protons and electrons, all have an antimatter twin. These antimatter twins appear identical in nearly every respect except that their electric and magnetic properties are opposite.

Cosmologists predict that the Big Bang produced an equal amount of matter and antimatter, which is a conundrum because matter and antimatter annihilate into pure energy when they come into contact. Particle physicists are looking for any minuscule differences between matter and antimatter, which might explain why our universe contains planets and stars and not a sizzling broth of light and energy instead.

The Large Hadron Collider doesn’t just generate Higgs bosons during its high-energy proton collisions—it also produces antimatter. By comparing the decay patterns of matter particles with their antimatter twins, the LHCb experiment is looking for minuscule differences in how these rival particles behave.

“Many antimatter experiments study particles in a very confined and controlled environment,” says Nicola Neri, a researcher at Italian research institute INFN and one of the leaders of the study. “In our experiment, the antiparticles flow and decay, so we can examine other properties, such as the momenta and trajectories of their decay products.”

The result, published today in Nature Physics, examined the decay products of matter and antimatter baryons (particles containing three quarks) and looked at the spatial distribution of the resulting daughter particles within the detector. Specifically, Neri and his colleagues looked for a very rare decay of the lambda-b particle (which contains an up quark, a down quark and a bottom quark) into a proton and three pions (light mesons, each made of a quark and an antiquark).

Based on data from 6000 decays, Neri and his team found a difference in the spatial orientation of the daughter particles of the matter and antimatter lambda-bs.

“This is the first time we’ve seen evidence of matter and antimatter baryons behaving differently,” Neri says. “But we need more data before we can make a definitive claim.”

Statistically, the result has a significance of 3.3 sigma, which means its chance of being just a statistical fluctuation (and not a new property of nature) is roughly one in a thousand. The traditional threshold for discovery is 5 sigma, which equates to odds of one in more than a million.
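
Those sigma-to-odds conversions can be double-checked with the one-sided Gaussian tail probability conventionally used in particle physics (a quick editorial check, not part of the article):

from scipy.stats import norm

for sigma in (3.3, 5.0):
    p = norm.sf(sigma)   # one-sided tail probability of a standard normal
    print(f"{sigma} sigma: p = {p:.1e}, i.e. about 1 in {1/p:,.0f}")

# 3.3 sigma: p = 4.8e-04, about 1 in 2,000 -- "roughly one in a thousand"
# 5.0 sigma: p = 2.9e-07, about 1 in 3.5 million -- "more than a million"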

For Neri, this result is more than early evidence of a never before seen process—it is a key that opens new research opportunities for LHCb physicists.

“We proved that we are there,” Neri says. “Our experiment is so sensitive that we can start systematically looking for this matter-antimatter asymmetry in heavy baryons at LHCb. We have this capability, and we will be able to do even more after the detector is upgraded next year.”

by Sarah Charley at January 30, 2017 06:14 PM

Matt Strassler - Of Particular Significance

Penny Wise, Pound Foolish

The cost to American science and healthcare of the administration’s attack on legal immigration is hard to quantify.  Maybe it will prevent a terrorist attack, though that’s hard to say.  What is certain is that American faculty are suddenly no longer able to hire the best researchers from the seven countries currently affected by the ban.  Numerous top scientists suddenly cannot travel here to share their work with American colleagues; or if already working here, cannot now travel abroad to learn from experts elsewhere… not to mention visiting their families.  Those caught outside the country cannot return, hurting the American laboratories where they are employed.

You might ask what the big deal is; it’s only seven countries, and the ban is temporary. Well (even ignoring the outsized role of Iran, whose many immigrant engineers and scientists are here because they dislike the ayatollahs and their alternative facts), the impact extends far beyond these seven.

The administration’s tactics are chilling.  Scientists from certain countries now fear that one morning they will discover their country has joined the seven, so that they too cannot hope to enter or exit the United States.  They will decide now to turn down invitations to work in or collaborate with American laboratories; it’s too risky.  At the University of Pennsylvania, I had a Pakistani postdoc, who made important contributions to our research effort. At the University of Washington we hired a terrific Pakistani mathematical physicist. Today, how could I advise someone like that to accept a US position?

Even those not worried about being targeted may decide the US is not the open and welcoming country it used to be.  Many US institutions are currently hiring people for the fall semester.  A lot of bright young scientists — not just Muslims from Muslim-majority nations — will choose to go instead to Canada, to the UK, and elsewhere, leaving our scientific enterprise understaffed.

Well, but this is just about science, yes?  Mostly elite academics presumably — it won’t affect the average person.  Right?

Wrong.  It will affect many of us, because it affects healthcare, and in particular, hospitals around the country.  I draw your attention to an article written by an expert in that subject:

http://www.cnn.com/2017/01/29/opinions/trump-ban-impact-on-health-care-vox/index.html

and I’d like to quote from the article (highlights mine):

“Our training hospitals posted job listings for 27,860 new medical graduates last year alone, but American medical schools only put out 18,668 graduates. International physicians percolate throughout the entire medical system. To highlight just one particularly intense specialty, fully 30% of American transplant surgeons started their careers in foreign medical schools. Even with our current influx of international physicians as well as steadily growing domestic medical school spots, the Association of American Medical Colleges estimates that we’ll be short by up to 94,700 doctors by 2025.

The President’s decision is as ill-timed as it was sudden. The initial 90-day order encompasses Match Day, the already anxiety-inducing third Friday in March when medical school graduates officially commit to their clinical training programs. Unless the administration or the courts quickly fix the mess President Trump just created, many American hospitals could face staffing crises come July when new residents are slated to start working.”

If you or a family member has to go into the hospital this summer and gets sub-standard care due to a lack of trained residents and doctors, you know who to blame.  Terrorism is no laughing matter, but you and your loved ones are vastly more likely to die due to a medical error than due to a terrorist.  It’s hard to quantify exactly, but it is clear that over the years since 2000, the number of Americans dying of medical errors is in the millions, while the number who died from terrorism is just over three thousand during that period, almost all of whom died on 9/11 in 2001. So addressing the terrorism problem by worsening a hospital problem probably endangers Americans more than it protects them.
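As a rough back-of-envelope version of that comparison (the inputs below are illustrative assumptions of mine, not figures from the article; published estimates of annual US medical-error deaths vary widely, from roughly 100,000 to 250,000):

    # All inputs are rough, illustrative assumptions.
    years = 17                          # 2000 through 2016
    medical_deaths_per_year = 250_000   # upper end of published US estimates
    terrorism_deaths = 3_000            # almost all on 9/11

    medical_total = years * medical_deaths_per_year   # ~4.25 million
    print(f"Medical errors: ~{medical_total:,} deaths")
    print(f"Ratio: ~{medical_total // terrorism_deaths:,} to 1")  # ~1,400 to 1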

Such is the problem of relying on alternative facts in place of solid scientific reasoning.


Filed under: Science and Modern Society Tagged: immigration

by Matt Strassler at January 30, 2017 01:27 PM

January 29, 2017

Clifford V. Johnson - Asymptotia

Scribbles

Another Sunday share: It's almost entirely freehand iPad line work for this chapter of the book. Here's a peek at what turned out to be a fun page in progress.

(As you might guess, if you know me, the tree form is relevant to the narrative... feel free to guess what that might be.)

-cvj Click to continue reading this post

The post Scribbles appeared first on Asymptotia.

by Clifford at January 29, 2017 10:09 PM

January 26, 2017

Symmetrybreaking - Fermilab/SLAC

The robots of CERN

TIM and other mechanical friends tackle jobs humans shouldn’t.

Robot with wheels, an arm and a camera in the tunnel of the Large Hadron Collider

The Large Hadron Collider is the world’s most powerful particle accelerator. Buried in the bedrock beneath the Franco-Swiss border, it whips protons through its nearly 2000 magnets 11,000 times every second.
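That revolution rate is simple to check: the protons travel at essentially the speed of light around a ring whose circumference is about 27 kilometers. A quick sketch (the circumference is the commonly quoted value):

    c = 299_792_458         # speed of light in m/s
    circumference = 26_659  # LHC ring circumference in meters (commonly quoted)

    revs_per_second = c / circumference
    print(f"~{revs_per_second:,.0f} revolutions per second")  # ~11,245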

As you might expect, the subterranean tunnel which houses the LHC is not always the friendliest place for human visitors.

“The LHC contains 120 tons of liquid helium kept at 1.9 kelvin,” says Ron Suykerbuyk, an LHC operator. “This cooling system is used to keep the electromagnets in a superconducting state, capable of carrying up to 13,000 amps of current through their wires. Even with all the safety systems we have in place, we prefer to limit our underground access when the cryogenic systems are on.”

But as with any machine, sometimes the LHC needs attention: inspections, repairs, tuning. The LHC is so secure that even with perfect conditions, it takes 30 minutes after the beam is shut off for the first humans to even arrive at the entrance to the tunnel.

But the robotics team at CERN asks: Why do we need humans for this job anyway?

Enter TIM—the Train Inspection Monorail. TIM is a chain of wagons, sensors and cameras that snake along a track bolted to the LHC tunnel’s ceiling. In the 1990s, the track held a cable car that transported machinery and people around the Large Electron-Positron Collider, the first inhabitant of the tunnel. With the installation of the LHC, there was no longer room for both the accelerator and the cable car, so the monorail was reconfigured for the sleeker TIM robots.

There are currently two TIM robots and plans to install two more in the next couple of years. These four TIM robots will patrol the different quadrants of the LHC, enabling operators to reach any part of the 17-mile tunnel within 20 minutes. As TIM slithers along the ceiling, an automated eye keeps watch for any changes in the tunnel and a robotic arm drops down to measure radiation. Other sensors measure the temperature, oxygen level and cell phone reception.
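A rough sanity check of that 20-minute figure, assuming one robot per quadrant and, pessimistically, a robot starting at the far end of its quadrant (these assumptions are mine, not CERN’s):

    tunnel_miles = 17
    quadrants = 4
    response_minutes = 20

    worst_case_miles = tunnel_miles / quadrants  # ~4.25 miles to cover
    speed_mph = worst_case_miles / (response_minutes / 60)
    print(f"Required speed: ~{speed_mph:.0f} mph "
          f"(~{speed_mph * 1609.34 / 3600:.1f} m/s)")  # ~13 mph, ~5.7 m/s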

“In addition to performing environmental measurements, TIM is a safety system which can be the eyes and ears for members of the CERN Fire Brigade and operations team,” says Mario Di Castro, the leader of CERN’s robotics team. “Eventually we’d like to equip TIM with a fire extinguisher and other physical operations so that it can be the first responder in case of a crisis.”

TIM isn’t alone in its mission to provide a safer environment for its human coworkers. CERN also has three teleoperated robots that can assess troublesome areas, provide assessments of hazards and carry tools.

The main role of these three robots is to access radioactive areas.

Radiation is a type of energy carried by free-moving subatomic particles. As protons race around CERN’s accelerator complex, special equipment called collimators constrict their passage and absorb particles that have wandered away from the center of the beam pipe. This trimming process ensures that the proton stream is compact and tidy.

After a couple of weeks of operation, the collimators have absorbed so many particles that they reemit energy—even after the beam is shut off. There is no radiation hazard to humans unless they are within a few meters of the collimators, and because the machine is fully automated, humans rarely need to perform check-ups. But occasionally, material in these restricted areas requires attention.

By replacing humans with robots, engineers can quickly fix small problems without needing to wait long periods of time for the radiation to dissipate or sending personnel into potentially unsafe environments.

“CERN robots help perform repetitive and dangerous tasks that humans either prefer to avoid or are unable to do because of hazards, size constraints or the extreme environments in which they take place, such as CERN’s experimental areas,” Di Castro says.

About half the time, these tasks are very simple, such as performing a visual assessment of the area or taking measurements. “Robots can replace humans for these simple tasks and improve the quality and timeliness of work,” he says.

Last year the SPS accelerator (which starts the acceleration process for particles that eventually move to the LHC) needed an oil refill to keep its parts running smoothly. But the accelerator itself was too radioactive for humans to visit, so one of the CERN robotics team’s robots rolled in gripping an oil can in its flexible arm.

In June 2016, scientists needed to dispose of radioactive cobalt, cesium and americium they had used to calibrate radiation sensors. Two CERN robots cycled in with several tools, extracted the radioactive sources and packed them in thick protective containers for removal.

Over the last two years, these two robots have performed more than 30 interventions, saving humans both time and radiation doses.

As the LHC increases its power and collision rate over the next decade, Di Castro and his team are grooming these robot companions to expand their capabilities. “We are putting a strong commitment to adapt and develop existing robotic solutions to fit CERN’s evolving needs,” Di Castro says.

Video: https://www.youtube.com/watch?v=wxKRW1Z2lWo

by Sarah Charley at January 26, 2017 02:00 PM

January 25, 2017

Sean Carroll - Preposterous Universe

What Happened at the Big Bang?

I had the pleasure earlier this month of giving a plenary lecture at a meeting of the American Astronomical Society. Unfortunately, as far as I know they don’t record the lectures on video. So here, at least, are the slides I showed during my talk. I’ve been a little hesitant to put them up, since some subtleties are lost if you only have the slides and not the words that went with them, but perhaps it’s better than nothing.

My assigned topic was “What We Don’t Know About the Beginning of the Universe,” and I focused on the question of whether there could have been space and time even before the Big Bang. Short answer: sure there could have been, but we don’t actually know.

So what I did to fill my time was two things. First, I talked about different ways the universe could have existed before the Big Bang, classifying models into four possibilities (see Slide 7):

  1. Bouncing (the universe collapses to a Big Crunch, then re-expands with a Big Bang)
  2. Cyclic (a series of bounces and crunches, extending forever)
  3. Hibernating (a universe that sits quiescently for a long time, before the Bang begins)
  4. Reproducing (a background empty universe that spits off babies, each of which begins with a Bang)

I don’t claim this is a logically exhaustive set of possibilities, but most semi-popular models I know fit into one of the above categories. Given my own way of thinking about the problem, I emphasized that any decent cosmological model should try to explain why the early universe had a low entropy, and suggested that the Reproducing models did the best job.

My other goal was to talk about how thinking quantum-mechanically affects the problem. There are two questions to ask: is time emergent or fundamental, and is Hilbert space finite- or infinite-dimensional? If time is fundamental, the universe lasts forever; it doesn’t have a beginning. But if time is emergent, there may very well be a first moment. If Hilbert space is finite-dimensional, a first moment is necessary (there are only a finite number of moments of time that can possibly emerge), while if it’s infinite-dimensional the problem is open.

Despite all that we don’t know, I remain optimistic that we are actually making progress here. I’m pretty hopeful that within my lifetime we’ll have settled on a leading theory for what happened at the very beginning of the universe.

by Sean Carroll at January 25, 2017 11:30 PM

Andrew Jaffe - Leaves on the Line

SOLE Survivor

I recently finished my last term lecturing our second-year Quantum Mechanics course, which I taught for five years. It’s a required class, a mathematical introduction to one of the most important set of ideas in all of physics, and really the basis for much of what we do, whether that’s astrophysics or particle physics or almost anything else. It’s a slightly “old-fashioned” course, although it covers the important basic ideas: the Schrödinger Equation, the postulates of quantum mechanics, angular momentum, and spin, leading almost up to what is needed to understand the crowning achievement of early quantum theory: the structure of the hydrogen atom (and other atoms).

A more modern approach might start with qubits: the simplest systems that show quantum mechanical behaviour, and the study of which has led to the revolution in quantum information and quantum computing.

Moreover, the lectures rely on the so-called Copenhagen interpretation, which is the confusing and sometimes contradictory way that most physicists are taught to think about the basic ontology of quantum mechanics: what it says about what the world is “made of” and what happens when you make a quantum-mechanical measurement of that world. Indeed, it’s so confusing and contradictory that you really need another rule so that you don’t complain when you start to think too deeply about it: “shut up and calculate”. A more modern approach might also discuss the many-worlds approach, and — my current favorite — the (of course) Bayesian ideas of QBism.

The students seemed pleased with the course as it is — at the end of the term, they have the chance to give us some feedback through our “Student On-Line Evaluation” system, and my marks have been pretty consistent. Of the 200 or so students in the class, only about 90 bother to give their evaluations, which is disappointingly few. But it’s enough (I hope) to get a feeling for what they thought.

SOLE 2016 Chart

So, most students Definitely/Mostly Agree with the good things, although it’s clear that our students are most disappointed in the feedback that they receive from us (this is an issue for us in Physics at Imperial and more generally, and it may partially explain why most of them are unwilling to feed back to us through this form).

But much more fun and occasionally revealing are the “free-text comments”. Given the numerical scores, it’s not too surprising that there were plenty of positive ones:

  • Excellent lecturer - was enthusiastic and made you want to listen and learn well. Explained theory very well and clearly and showed he responded to suggestions on how to improve.

  • Possibly the best lecturer of this term.

  • Thanks for providing me with the knowledge and top level banter.

  • One of my favourite lecturers so far, Jaffe was entertaining and cleary very knowledgeable. He was always open to answering questions, no matter how simple they may be, and gave plenty of opportunity for students to ask them during lectures. I found this highly beneficial. His lecturing style incorporates well the blackboards, projectors and speach and he finds a nice balance between them. He can be a little erratic sometimes, which can cause confusion (e.g. suddenly remembering that he forgot to write something on the board while talking about something else completely and not really explaining what he wrote to correct it), but this is only a minor fix. Overall VERY HAPPY with this lecturer!

But some were more mixed:

  • One of the best, and funniest, lecturers I’ve had. However, there are some important conclusions which are non-intuitively derived from the mathematics, which would be made clearer if they were stated explicitly, e.g. by writing them on the board.

  • I felt this was the first time I really got a strong qualitative grasp of quantum mechanics, which I certainly owe to Prof Jaffe’s awesome lectures. Sadly I can’t quite say the same about my theoretical grasp; I felt the final third of the course less accessible, particularly when tackling angular momentum. At times, I struggled to contextualise the maths on the board, especially when using new techniques or notation. I mostly managed to follow Prof Jaffe’s derivations and explanations, but struggled to understand the greater meaning. This could be improved on next year. Apart from that, I really enjoyed going to the lectures and thought Prof Jaffe did a great job!

  • The course was inevitably very difficult to follow.

And several students explicitly commented on my attempts to get students to ask questions in as public a way as possible, so that everyone can benefit from the answers and — this really is true! — because there really are no embarrassing questions!

  • Really good at explaining and very engaging. Can seem a little abrasive at times. People don’t like asking questions in lectures, and not really liking people to ask questions in private afterwards, it ultimately means that no questions really get answered. Also, not answering questions by email makes sense, but no one really uses the blackboard form, so again no one really gets any questions answered. Though the rationale behind not answering email questions makes sense, it does seem a little unnecessarily difficult.

  • We are told not to ask questions privately so that everyone can learn from our doubts/misunderstandings, but I, amongst many people, don’t have the confidence to ask a question in front of 250 people during a lecture.

  • Forcing people to ask questions in lectures or publically on a message board is inappropriate. I understand it makes less work for you, but many students do not have the confidence to ask so openly, you are discouraging them from clarifying their understanding.

Inevitably, some of the comments were contradictory:

  • Would have been helpful to go through examples in lectures rather than going over the long-winded maths to derive equations/relationships that are already in the notes.

  • Professor Jaffe is very good at explaining the material. I really enjoyed his lectures. It was good that the important mathematics was covered in the lectures, with the bulk of the algebra that did not contribute to understanding being left to the handouts. This ensured we did not get bogged down in unnecessary mathematics and that there was more emphasis on the physics. I liked how Professor Jaffe would sometimes guide us through the important physics behind the mathematics. That made sure I did not get lost in the maths. A great lecture course!

And also inevitably, some students wanted to know more about the exam:

  • It is a difficult module, however well covered. The large amount of content (between lecture notes and handouts) is useful. Could you please identify what is examinable though as it is currently unclear and I would like to focus my time appropriately?

And one comment was particularly worrying (along with my seeming “a little abrasive at times”, above):

  • The lecturer was really good in lectures. however, during office hours he was a bit arrogant and did not approach the student nicely, in contrast to the behaviour of all the other professors I have spoken to

If any of the students are reading this, and are willing to comment further on this, I’d love to know more — I definitely don’t want to seem (or be!) arrogant or abrasive.

But I’m happy to see that most students don’t seem to think so, and even happier to have learned that I’ve been nominated “multiple times” for Imperial’s Student Academic Choice Awards!

Finally, best of luck to my colleague Jonathan Pritchard, who will be taking over teaching the course next year.

by Andrew at January 25, 2017 09:17 AM

January 24, 2017

Symmetrybreaking - Fermilab/SLAC

Five extreme facts about neutron stars

Neutron stars have earned their share of superlatives since their discovery in 1967.

As a massive star dies, expelling most of its guts across the universe in a supernova explosion, its iron heart, the star’s core, collapses to create the densest form of observable matter in the universe: a neutron star. 

A neutron star is basically a giant nucleus, says Mark Alford, a professor at Washington University.

“Imagine a little lead pellet with cotton candy around it,” Alford says. “That’s an atom. All of the mass is in the little lead pellet in the middle, and there’s this big puffy cloud of electrons around it like cotton candy.” 

In neutron stars, the atoms have all collapsed. The electron clouds have all been sucked in, and the whole thing becomes a single entity with electrons running around side-by-side with protons and neutrons in a gas or fluid.

Neutron stars are pretty small, as far as stellar objects go. Although scientists are still working on pinning down their exact diameter, they estimate that they’re somewhere around 12 to 17 miles across, just about the length of Manhattan. Despite that, they have about 1.5 times the mass of our sun.
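Those two numbers already imply a staggering density. A minimal sketch, taking 1.5 solar masses packed into a sphere at the small end of that range, roughly 12 miles (about 20 kilometers) across:

    import math

    M = 1.5 * 1.989e30  # mass in kg (1.5 solar masses)
    R = 10_000          # radius in meters (~12-mile diameter)

    density = M / ((4 / 3) * math.pi * R**3)
    print(f"Mean density: ~{density:.0e} kg/m^3")  # ~7e17 kg/m^3
    # For comparison, the density of an atomic nucleus
    # is a few times 10^17 kg/m^3.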

If a neutron star were any denser, it would collapse into a black hole and disappear, Alford says. “It’s the next to last stop on the line.” 

These extreme objects offer intriguing test cases that could help physicists understand the fundamental forces, general relativity and the early universe. Here are some fascinating facts to get you acquainted:

Illustration by Corinne Mucha

1. In just the first few seconds after a star begins its transformation into a neutron star, the energy leaving in neutrinos is equal to the total amount of light emitted by all of the stars in the observable universe.

Ordinary matter contains roughly equal numbers of protons and neutrons. But most of the protons in a neutron star convert into neutrons—neutron stars are made up of about 95 percent neutrons. When protons convert to neutrons, they release ubiquitous particles called neutrinos. 

Neutron stars are made in supernova explosions, which are giant neutrino factories. A supernova radiates 10 times more neutrinos than there are particles (protons, neutrons and electrons) in the sun.
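To attach a number to that claim: counting the sun’s particles is essentially counting its nucleons, so ten times that count works out to roughly 10^58 neutrinos. A quick sketch:

    M_sun = 1.989e30      # solar mass in kg
    m_nucleon = 1.67e-27  # proton/neutron mass in kg (electrons are
                          # negligible by mass)

    particles_in_sun = M_sun / m_nucleon  # ~1.2e57
    print(f"~{10 * particles_in_sun:.0e} neutrinos per supernova")  # ~1e58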

Illustration by Corinne Mucha

2. It’s been speculated that if there were life on neutron stars, it would be two-dimensional.

Neutron stars have some of the strongest gravitational and magnetic fields in the universe.  The gravity is strong enough to flatten almost anything on the surface. The magnetic fields of neutron stars can be anywhere from a billion to a million billion times the strength of the magnetic field at the surface of Earth. 

“Everything about neutron stars is extreme,” says James Lattimer, a professor at Stony Brook University. “It goes to the point of almost being ridiculous.” 

Because they’re so dense, neutron stars provide the perfect testbed for the strong force, allowing scientists to probe the way quarks and gluons interact under these conditions. Many theories predict that the core of a neutron star compresses neutrons and protons, liberating the quarks of which they are constructed. Scientists have created a hotter version of this freed “quark matter” in the Relativistic Heavy Ion Collider and the Large Hadron Collider. 

The intense gravity of neutron stars requires scientists to use the general theory of relativity to describe the physical properties of neutron stars. In fact, measurements of neutron stars give us some of the most precise tests of general relativity that we currently have.

Despite their incredible densities and extreme gravity, neutron stars still manage to maintain a surprising amount of internal structure, housing crusts, oceans and atmospheres. “They’re a weird mixture of something the mass of a star with some of the other properties of a planet,” says Chuck Horowitz, a professor at Indiana University.

But while here on Earth we’re used to having an atmosphere that extends hundreds of miles into the sky, because a neutron star’s gravity is so extreme, its atmosphere may stretch up less than a foot.
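That foot-scale figure can be checked with the standard isothermal scale height, h = kT/(mg). Here is a sketch under illustrative assumptions of mine (a hydrogen atmosphere at a million kelvin; real neutron-star atmospheres vary, but any plausible inputs give a very thin layer):

    G = 6.674e-11       # gravitational constant
    k = 1.381e-23       # Boltzmann constant
    M = 1.5 * 1.989e30  # mass in kg
    R = 10_000          # radius in meters
    m_H = 1.67e-27      # hydrogen atom mass in kg
    T = 1e6             # assumed surface temperature in kelvin

    g = G * M / R**2       # ~2e12 m/s^2, ~10^11 times Earth's surface gravity
    h = k * T / (m_H * g)  # isothermal scale height
    print(f"g ~ {g:.0e} m/s^2, scale height ~ {100 * h:.1f} cm")  # ~4 mm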

Illustration by Corinne Mucha

3. The fastest known spinning neutron star rotates about 700 times each second.

Scientists believe that most neutron stars either currently are or at one point have been pulsars, stars that spit out beams of radio waves as they rapidly spin. If a pulsar is pointed toward our planet, we see these beams sweep across Earth like light from a lighthouse.

Scientists first observed neutron stars in 1967, when a graduate student named Jocelyn Bell noticed repeated radio pulses arriving from a pulsar outside our solar system. (The 1974 Nobel Prize in Physics went to her thesis advisor, Anthony Hewish, for the discovery.)

Pulsars can spin anywhere from tens to hundreds of times per second. If you were standing on the equator of the fastest known pulsar, the rotational velocity would be about 1/10 the speed of light.
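That tenth-of-light-speed figure follows from v = 2πRf. A sketch (using the fastest known pulsar’s roughly 716 Hz spin rate and an assumed 10-kilometer radius; the exact fraction of c depends on the radius you pick):

    import math

    f = 716      # Hz, spin rate of the fastest known pulsar
    R = 10_000   # assumed radius in meters
    c = 2.998e8  # speed of light in m/s

    v = 2 * math.pi * R * f
    print(f"Equatorial speed: ~{v:.1e} m/s = {v / c:.2f} c")  # ~0.15 c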

The 1993 Nobel Prize in Physics went to scientists who measured the rate at which a pair of neutron stars orbiting each other were spiraling together due to the emission of gravitational radiation, a phenomenon predicted by Albert Einstein's general theory of relativity.

Scientists from the Laser Interferometer Gravitational-Wave Observatory, or LIGO, announced in 2016 that they had directly detected gravitational waves for the first time. In the future, it might be possible to use pulsars as giant, scaled-up versions of the LIGO experiment, trying to detect the small changes in the distance between the pulsars and Earth as a gravitational wave passes by.

Illustration by Sandbox Studio, Chicago

4. The wrong kind of neutron star could wreak havoc on Earth.

Neutron stars can be dangerous because of their strong fields. If a neutron star entered our solar system, it could cause chaos, throwing off the orbits of the planets and, if it got close enough, even raising tides that would rip the planet apart.

But the closest known neutron star is about 500 light-years away. And considering that Proxima Centauri, the sun’s closest stellar neighbor at a little over 4 light-years away, has no discernible effect on our planet, it’s unlikely we’ll feel these catastrophic effects anytime soon.

Probably even more dangerous would be radiation from a neutron star’s magnetic field. Magnetars are neutron stars with magnetic fields a thousand times stronger than the extremely strong fields of “normal” pulsars. Sudden rearrangements of these fields can produce flares somewhat like solar flares but much more powerful.

On December 27, 2004, scientists observed a giant gamma-ray flare from Magnetar SGR 1806-20, estimated to be about 50,000 light years away. In 0.2 seconds the flare radiated as much energy as the sun produces in 300,000 years. The flare saturated many spacecraft detectors and produced detectable disturbances in the Earth’s ionosphere.
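The energy implied by that comparison follows directly from the sun’s luminosity:

    L_sun = 3.828e26  # solar luminosity in watts
    seconds_per_year = 3.156e7

    E = L_sun * 300_000 * seconds_per_year  # total flare energy, ~3.6e39 J
    P = E / 0.2                             # average power over the
                                            # 0.2-second flare
    print(f"Energy ~ {E:.1e} J, average power ~ {P:.1e} W")  # ~1.8e40 W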

Fortunately, we are not aware of any nearby magnetars powerful enough to cause any damage.

Illustration by Corinne Mucha

5. Despite the extremes of neutron stars, researchers still have ways to study them.

There are many things we don’t know about neutron stars—including just how many of them are out there, Horowitz says. “We know of about 2000 neutron stars in our own galaxy, but we expect there to be billions more. So most neutron stars, even in our own galaxy, are completely unknown.”

Many radio, X-ray and optical light telescopes are used to investigate the properties of neutron stars. NASA’s upcoming Neutron Star Interior Composition ExploreR Mission (NICER), which is scheduled to attach to the side of the International Space Station in 2017, is one mission devoted to learning more about these extreme objects. NICER will look at X-rays coming from rotating neutron stars to try to more accurately pin down their mass and radii.

We could also study neutron stars by detecting gravitational waves. LIGO scientists hope to detect gravitational waves produced by the merger of two neutron stars. Studying those gravitational waves might clue scientists in to the properties of the extremely dense matter that neutron stars are made of.

Studying neutron stars might help us figure out the origin of the heavy chemical elements, including gold and platinum, in our universe. There’s a possibility that when neutron stars collide, not everything gets swallowed up into a more massive neutron star or black hole, but instead some fraction gets flung out and forms these heavy nuclei.

“If you want to use the lab of 24th or 25th century,” says Roger Romani, a professor at Stanford University, “then studying neutron stars is a way of looking at conditions that we cannot produce in labs on Earth.”

by Ali Sundermier at January 24, 2017 03:34 PM

Matt Strassler - Of Particular Significance

Alternative Facts and Crying Wolf

My satire about “alternative facts” from yesterday took some flak for propagating the controversial photos of inaugurations that some say are real and some say aren’t. I don’t honestly care one bit about those photos. I think it is of absolutely no importance how many people went to Trump’s inauguration; it has no bearing on how he will perform as president, and frankly I don’t know why he’s making such a big deal out of it. Even if attendance was far less than he and his people claim, it could be for two very good reasons that would not reflect badly on him at all.

First, Obama’s inauguration was extraordinarily historic. For a nation with our horrific past —  with most of our dark-skinned citizens brought to this continent to serve as property and suffer under slavery for generations — it was a huge step to finally elect an African-American president. I am sure many people chose to go to the 2009 inauguration because it was special to them to be able to witness it, and to be able to say that they were there. Much as many people adore Trump, it’s not so historic to have an aging rich white guy as president.

Second, look at a map of the US, with its population distribution. A huge population, including a substantial number of Obama’s supporters, lives within driving or train distance of Washington DC. From South Carolina to Massachusetts there are large left-leaning populations. Trump’s support was largest in the center of the US, but people would not have been able to drive from there or take a train. The cost of travel to Washington could have reduced Trump’s inauguration numbers without reflecting on his popularity.

So as far as I’m concerned, it really doesn’t make any difference if Trump’s inauguration numbers were small, medium or large. It doesn’t count in making legislation or in trade negotiations; it doesn’t count in anything except pride.

But what does count, especially in foreign affairs, is whether people listen to what a president says, and by extension to what his or her press secretary says. What bothers me is not the political spinning of facts. All politicians do that. What bothers me is the claim of having hosted “the best-attended inauguration ever” without showing any convincing evidence, and the defense of those claims (and we heard it again today) that this is because it’s ok to disagree with facts.

If facts can be chosen at will, even in principle, then science ceases to function. Science — a word that means “evidence-based reasoning applied logically to determine how reality really works” — depends on the existence and undeniability of evidence. It’s not an accident that physics, unlike some subjects, does not have a Republican branch and a Democratic branch; it doesn’t have a Muslim, Christian, Buddhist or Jewish branch;  there’s just one type.  I work with people from many countries and with many religious and political beliefs; we work together just fine, and we don’t have discussions about “alternative facts.”

If instead you give up evidence-based reasoning, then soon you have politics instead of science determining your decisions on all sorts of things that matter to people because it can hurt or kill them: food safety, road safety, airplane safety, medicine, energy policy, environmental protection, and most importantly, defense. A nation that abandons evidence is abandoning applied reason and logic; and the inevitable consequence is that people will die unnecessarily.  It’s not a minor matter, and it’s not outside the purview of scientists to take a stand on the issue.

Meanwhile, I find the context for this discussion almost as astonishing as the discussion itself. It’s one thing to say unbelievable things during a campaign, but it’s much worse once in power. For the press secretary on day two of a new administration to make an astonishing and striking claim, but provide unconvincing evidence, has the effect of completely undermining his function.  As every scientist knows by heart, extraordinary claims require extraordinary evidence.  Imagine the press office at the CERN laboratory announcing the discovery of the Higgs particle without presenting plots of its two experiments’ data; or imagine if the LIGO experimenters had claimed discovery of gravitational waves but shown no evidence.  Mistakes are going to happen, but they have to be owned: imagine if OPERA’s tentative suggestion of neutrinos-faster-than-light, which was an experimental blunder, or BICEP’s loud misinterpretation of their cosmological data, had not been publicly retracted, with a clear public explanation of what happened.  When an organization makes a strong statement but won’t present clear evidence in favor, and isn’t willing to retract the statement when shown evidence against it, it not only introduces immediate suspicion of the particular claim but creates a wider credibility problem that is extremely difficult to fix.

Fortunately, the Higgs boson has been observed by two different experiments, in two different data-taking runs of both experiments; the evidence is extraordinary.  And LIGO’s gravitational waves data is public; you can check it yourself, and moreover there will be plenty of opportunities for further verification as Advanced VIRGO comes on-line this year.    But the inauguration claim hasn’t been presented with extraordinary evidence in its favor, and there’s significant contradictory evidence (from train ridership and from local sales).    When something extraordinary is actually true, it’s true from all points of view, not subject to “alternative facts”; and the person claiming it has the responsibility to find evidence, of several different types, as soon as possible.  If firm evidence is lacking, the claim should only be made tentatively.  (A single photo isn’t convincing, one way or the other, especially nowadays.)

As any child knows, it’s like crying wolf.  If your loud claim isn’t immediately backed up, or isn’t later retracted with a public admission of error, then the next time you claim something exceptional, people will just laugh and ignore you.  And nothing’s worse than suggesting that “I have my facts and you have yours;” that’s the worst possible argument, used only when firm evidence simply isn’t available.

I can’t understand why a press secretary would blow his credibility so quickly on something of so little importance.  But he did it.  If the new standards are this low, can one expect truth on anything that actually matters?  It’s certainly not good for Russia that few outside the country believe a word that Putin says; speaking for myself, I would never invest a dollar there. Unfortunately, leaders and peoples around the world, learning that the new U.S. administration has “alternative facts” at its disposal, may already have drawn the obvious conclusion.    [The extraordinary claim that “3-5 million” non-citizens (up from 2-3 million, the previous version of the claim) voted in the last election, also presented without extraordinary evidence, isn’t helping matters.] There’s now already a risk that only the president’s core supporters will believe what comes from this White House, even in a time of crisis or war.

Of course all governments lie sometimes.  But it’s wise to tell the truth most of the time, so that your occasional lies will sometimes be thought to be true.  Governments that lie constantly, even pointlessly, aren’t believed even when they say something true.  They’ve cried wolf too often.

So what’s next?  Made-up numbers for inflation, employment, the budget deficit, tax revenue? Invented statistics for the number of people who have health insurance?  False information about the readiness of our armed forces and the cost of our self-defense?  How far will this go?  And how will we know?


Filed under: Science and Modern Society Tagged: facts, ScienceAndSociety

by Matt Strassler at January 24, 2017 01:44 PM

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

Revolutions in Science at UCD

Earlier today, I gave my first undergraduate lecture at University College Dublin (UCD). The lecture marked the start of a module called Revolutions in Science, a new course that is being offered to UCD students across the disciplines of science, engineering, business, law and the humanities.

As far as I know, this is the first course in the history and philosophy of science (HPS) offered at an Irish university and I’m delighted to be part of the initiative. I’ve named my component of the module Science, Society and the Universe – a description of the evolution of ideas about the universe, from the Babylonians to the ancient Greeks, from Ptolemy to Copernicus, from Newton to Einstein (it’s a version of a module I’ve taught at Waterford Institute of Technology for some years).

Hopefully, the new module will be the start of a new trend. It has long surprised me that interdisciplinary courses like this are not a staple of the university experience in Ireland. Certainly, renowned universities like Harvard, Oxford and Cambridge all have strong HPS departments with associated undergraduate modules offered to students across all disciplines. After all, such courses offer a very nice mix of history, philosophy and science, not to mention a useful glimpse into the history of ideas.

In the meantime, I think I will really enjoy being back at my alma mater once a week. I can’t believe how UCD has developed into a really attractive campus.


by cormac at January 24, 2017 12:47 AM

January 23, 2017

John Baez - Azimuth

Quantifying Biological Complexity

Next week I’m going to this workshop:

Biological Complexity: Can It Be Quantified?, 1-3 February 2017, Beyond Center for Fundamental Concepts in Science, Arizona State University, Tempe Arizona. Organized by Paul Davies.

I haven’t heard that any of it will be made publicly available, but I’ll see if there’s something I can show you. Here’s the schedule:

Wednesday February 1st

9:00 – 9:30 am Paul Davies

Brief welcome address, outline of the subject and aims of the meeting

Session 1. Life: do we know it when we see it?

9:30 – 10:15 am: Chris McKay, “Mission to Enceladus”

10:15 – 10:45 am: Discussion

10:45 – 11:15 am: Tea/coffee break

11:15 – 12:00 pm: Kate Adamala, “Alive but not life”

12:00 – 12:30 pm: Discussion

12:30 – 2:00 pm: Lunch

Session 2. Quantifying life

2:00 – 2:45 pm: Lee Cronin, “The living and the dead: molecular signatures of life”

2:45 – 3:30 pm: Sara Walker, “Can we build a life meter?”

3:30 – 4:00 pm: Discussion

4:00 – 4:30 pm: Tea/coffee break

4:30 – 5:15 pm: Manfred Laubichler, “Complexity is smaller than you think”

5:15 – 5:30 pm: Discussion

The Beyond Annual Lecture

7:00 – 8:30 pm: Sean Carroll, “Our place in the universe”

Thursday February 2nd

Session 3: Life, information and the second law of thermodynamics

9:00 – 9:45 am: James Crutchfield, “Vital bits: the fuel of life”

9:45 – 10:00 am: Discussion

10:00 – 10:45 am: John Baez, “Information and entropy in biology”

10:45 – 11:00 am: Discussion

11:00 – 11:30 am: Tea/coffee break

11:30 – 12:15 pm: Chris Adami, “What is biological information?”

12:15 – 12:30 pm: Discussion

12:30 – 2:00 pm: Lunch

Session 4: The emergence of agency

2:00 – 2:45 pm: Olaf Khang Witkowski, “When do autonomous agents act collectively?”

2:45 – 3:00 pm: Discussion

3:00 – 3:45 pm: William Marshall, “When macro beats micro”

3:45 – 4:00 pm: Discussion

4:00 – 4:30 pm: Tea/coffee break

4:30 – 5:15pm: Alexander Boyd, “Biology’s demons”

5:15 – 5:30 pm: Discussion

Friday February 3rd

Session 5: New physics?

9:00 – 9:45 am: Sean Carroll, “Laws of complexity, laws of life?”

9:45 – 10:00 am: Discussion

10:00 – 10:45 am: Andreas Wagner, “The arrival of the fittest”

10:45 – 11:00 am: Discussion

11:00 – 11:30 am: Tea/coffee break

11:30 – 12:30 pm: George Ellis, “Top-down causation demands new laws”

12:30 – 2:00 pm: Lunch


by John Baez at January 23, 2017 06:35 PM

Matt Strassler - Of Particular Significance

What’s all this fuss about having alternatives?

I don’t know what all the fuss is about “alternative facts.” Why, we scientists use them all the time!

For example, because of my political views, I teach physics students that gravity pulls down. That’s why the students I teach, when they go on to be engineers, put wheels on the bottom corners of cars, so that the cars don’t scrape on the ground. But in some countries, the physicists teach them that gravity pulls whichever way the country’s leaders instruct it to. That’s why their engineers build flying carpets as transports for their country’s troops. It’s a much more effective way to bring an army into battle, if your politics allows it.  We ought to consider it here.

Another example: in my physics class I claim that energy is “conserved” (in the physics sense) — it is never created out of nothing, nor is it ever destroyed. In our daily lives, energy is taken in with food, converted into special biochemicals for storage, and then used to keep us warm, maintain the pumping of our hearts, allow us to think, walk, breathe — everything we do. Those are my facts. But in some countries, the facts and laws are different, and energy can be created from nothing. The citizens of those countries never need to eat; it is a wonderful thing to be freed from this requirement. It’s great for their military, too, to not have to supply food for troops, or fuel for tanks and airplanes and ships. Our only protection against invasion from these countries is that if they crossed our borders they’d suddenly need fuel tanks.

Facts are what you make them; it’s entirely up to you. You need a good, well-thought-out system of facts, of course; otherwise they won’t produce the answers that you want. But just first figure out what you want to be true, and then go out and find the facts that make it true. That’s the way science has always been done, and the best scientists all insist upon this strategy.  As a simple illustration, compare the photos below.  Which picture has more people in it?   Obviously, the answer depends on what facts you’ve chosen to use.   [Picture copyright Reuters]  If you can’t understand that, you’re not ready to be a serious scientist!

A third example: when I teach physics to students, I instill in them the notion that quantum mechanics controls the atomic world, and underlies the transistors in every computer and every cell phone. But the uncertainty principle that arises in quantum mechanics just isn’t acceptable in some countries, so they don’t factualize it. They don’t use seditious and immoral computer chips there; instead they use proper vacuum tubes. One curious result is that their computers are the size of buildings. The CDC advises you not to travel to these countries, and certainly not to take electronics with you. Not only might your cell phone explode when it gets there, you yourself might too, since your own molecules are held together with quantum mechanical glue. At least you should bring a good-sized bottle of our local facts with you on your travels, and take a good handful before bedtime.

Hearing all the naive cries that facts aren’t for the choosing, I became curious about what our schools are teaching young people. So I asked a friend’s son, a bright young kid in fourth grade, what he’d been learning about alternatives and science. Do you know what he answered?!  I was shocked. “Alternative facts?”, he said. “You mean lies?” Sheesh. Kids these days… What are we teaching them? It’s a good thing we’ll soon have a new secretary of education.


Filed under: LHC News, Science and Modern Society, The Scientific Process Tagged: facts, science, Science&Society

by Matt Strassler at January 23, 2017 01:50 PM

Clifford V. Johnson - Asymptotia

Pouring (2)

For those of you who want to know how it turned out...

(And thanks for looking in....)

-cvj Click to continue reading this post

The post Pouring (2) appeared first on Asymptotia.

by Clifford at January 23, 2017 01:24 AM

January 22, 2017

Clifford V. Johnson - Asymptotia

Pouring…

Yes, it is pouring with rain outside, but inside... another pouring situation is taking shape as part of my Sunday activity.

It's a discussion about gravity. But not in the way you think.

-cvj

(For those interested, the hands and head were done on paper with pencil... then I connected them roughly digitally, which you see in progress...) Click to continue reading this post

The post Pouring… appeared first on Asymptotia.

by Clifford at January 22, 2017 09:21 PM

Geraint Lewis - Cosmic Horizons

Proton: a life story
by Geraint F. Lewis


10^35 years: I’ve lived a long and eventful life, but I know that death is almost upon me. Around me, my kind are slowly melting into the darkness that is now the universe, and my time will eventually come.

I’ve lived a long and eventful life…


10^-43 seconds: A time of unbelievable light, unbelievable heat! I don’t remember the time before I was born, but I was there, disembodied, ethereal, part of the swirling, roaring fires of the universe coming into being.

But the universe cooled. From the featureless inferno, its character crystallised into a seething sea of particles and forces. Electrons and quarks tore about, smashing and crashing into photons and neutrinos. The universe continued to cool.

1 second: The intensity of the heat steadily died away, and I was born. In truth, there was no precise moment of my birth, but as the universe cooled, my innards, free quarks, bound together, and I was suddenly there! A proton!

But my existence seemed fleeting, and in this still crowded and superheated universe in an instant I was bumped and I transformed, changing from proton to neutron. And with another thump I was a proton again. Then neutron. Then proton. I can’t remember how many times I flipped and flopped from one to the other. But as the universe continued to cool, my identity eventually settled. I was a proton, and staying that way. At least for now!

10 seconds: The universe was now filled with jostling protons and neutrons. We crashed and collided, but I was drawn to the neutrons, and they to me. As one approached, we reached out to one another, but in the fleeting moment of a touch, the still immense heat of the universe tore us apart.

The universe cooled, and the jostling diminished. I held onto a passing neutron and we danced. Together we were something new, together we were a nucleus of deuterium. Around us, all of the neutrons found proton partners, although there were not enough to go around and many protons remained alone.

1 minute: And still the universe cooled. Things steadily slowed, and before I realised we had grabbed onto another pair, one more proton, one more neutron, and as the new group we were helium. And it was not just us! All around us in the universe, protons and neutrons were getting together. The first elements were being forged.

But as quickly as it began, it was over. The temperature continued to drop as the universe expanded. Collisions calmed. Instead of eagerly joining together, we newly formed nuclei of atoms now avoided one another. I settled down into my life as helium.

380,000 years: After its superbly energetic start, the universe rapidly darkened. And in the darkness, other nuclei bounced around me. Electrons, still holding on to the fire of their birth, zipped between us. But the universe cooled and cooled, slowly robbing these speedy electrons of their energy, and they were inexorably drawn closer.

Two electrons joined, orbiting about us protons and neutrons. We had become a new thing entirely, an atom of helium! Other helium nuclei were doing the same, while lone protons, also grabbing at electrons, transformed into hydrogen! This excitement was fleeting, and very soon us atoms settled back into the darkness.

10 million years: The universe was still dark, but that didn’t mean that nothing was happening. Gravity was there, pulling on us atoms of hydrogen and helium, pooling us into clouds and clumps. It felt uncomfortable to be close to so many other atoms, and the constant bumping and grinding ripped off our electrons. Back to being just a nucleus of helium!

Throughout the universe, many massive lumps of hydrogen and helium were forming, with intense gravity squeezing hard at their very hearts. Temperatures soared, and protons again began to crash together, combining first into deuterium and then into helium, and then into carbon, oxygen and other elements not yet seen in the universe. And from these furnaces came heat and light, and the first stars shone and lit up the darkness.

2 billion years: I was spared the intensity at the stellar core, riding the plasma currents in the outer parts of a star. There was a lot of jostling and bumping, but it was relatively cool here, and I retained my identity of helium. But things were changing.

My star was aging quickly, and instead of the steady burning of its youth, it began to groan and wheeze, puffing and swelling as its nuclear reactor faltered and failed.  The stellar pulsing was becoming a wild ride, until eventually I was blown completely off the star and thrown back into the gas of interstellar space.

3 billion years: I swirled around for a while, bathed in the light of a hundred billion stars. But gravity does not sleep and I soon found myself back inside a newly born star. But this time it was different! No sedate atmospheric bobbing for me. I found myself in the intense blaze of the stellar core.

The temperature rose quickly, and nuclei smashed together. These collisions were violent, with a violence I had last seen at the start of the universe. And after a bruising series of collisions, I was helium no more. Now I resided with other protons and neutrons in a nucleus of carbon.

3.1 billion years: The stellar heart roared, and just beneath me the fires burnt unbelievably hot. Down there, at the very centre, carbon was forged into oxygen, neon and silicon, building heavier and heavier elements. Eventually the stellar furnace was producing iron, a nuclear dead-end that cannot fuel the burning that keeps a star alive.

As the fires at the stellar furnace continued to rage, more and more iron built up in the core. Until there was so much that the nuclear fires went out and the heart of the star suddenly stopped. With nothing to prevent gravity’s insatiable attraction, the star’s outer layers collapsed, and in an instant this crushing reignited the nuclear fires, now burning uncontrollably. The star exploded and ripped itself apart. In my new carbon identity, I found myself thrust again out into the universe.

5 billion years: Deep space is now different. Yes, there is plenty of hydrogen and helium out here, but there are lots of heavier atoms, like myself, bobbing about, the ashes from billions of dead and dying stars. We gather into immense clouds of gas and dust, illuminated by continuing generations of stars that shine.

In this cool environment, we can again collect some electrons and live as an atom, but this time an atom of carbon. Before long, we’re bumping into other atoms, linking together and forming long molecules, alcohols, formaldehydes and more. But gravity is at work again, tugging on the clouds and drawing us in. It looks like I’m heading for another journey inside a star.

8 billion years: Although this time it’s different. I find myself in the swish and swirl of material orbiting the forming star. And strange things are happening, as molecules build and intertwine, growing and clumping as the fetal star steadily grows. The heart of the star ignites, and the rush of radiation blows away the swirling disk, sending it back into deep space.

But I remain, deep in a molecule bound with other molecules and held within a rock, a rock too large to billow in the solar wind. And these rocks are colliding and sticking, growing and forming a planet. In the blink of a cosmic eye, billions of tonnes have come together, which gravity has moulded into a ball. Initially hot from its birth, this newly built planet steadily cooled and solidified in the light of its host star.

10 billion years: For a brief while, this planet was dead and sterile, but water began to flow on its surface and an atmosphere wrapped and warmed it. I remained in the ground, although the rocks and the dirt continued to churn as the planet continued to cool.

And then something amazing happened! Things moved on the surface! I didn’t see how it began, but collections of molecules were acting in unison. These bags of chemical processes slurped across the planet for billions of years, and then themselves began to join and work together.

13 billion years: Eventually I found myself caught up in this stuff called life, with me, as carbon, integrated into the various pieces of changing and evolving creatures. But it was oh so transitory, being part of one bit of life, then back to the soil, and then part of another. Sometimes, as one type of life, I was consumed by other life, my molecules dismembered and reintegrated into other creatures.

Once I found myself in the fronds of a plant, a fern, waving gently in the breeze under a sunlit sky. But when this beautiful plant died, its body was pressed into the mud of the swamp in which it sat, and I was ripped evermore from the cycle of life. Pressures and temperatures grew as more and more material was pressed upon me, and I was buried deeper and deeper within the ground.

13.7 billion years: And there I lay, with the intense squeezing rapidly ripping away my molecular identity. Again, I was simply carbon. But here, deep in the planet, there were a lot of carbon atoms and slowly we found affinity for one another. Through soaring pressures, we bound together, pressed and shaped into a crystal, a crystal of diamond.

Suddenly I was torn from my underground home, gazed at by a living creature I had never seen before. Accompanied by some gold, I spent a mere moment of the cosmos adorning the finger of one of these living creatures, these humans. This was truly some of the strangest time of my existence, oh the world I saw, but before long I was lost and buried in the dark ground. And there I stayed as rocks shifted and moved, and the planet aged.

19 billion years: With many other carbon atoms, I was still locked in diamond as the planet started to melt around me. The star, whose birth I had witnessed, was now old. It glowed intensely, immense and swollen, so that its erratic atmosphere engulfed the planet. The heat and the raging wind of the dying star ripped at the planet’s surface, hurling it into space.

And so too I was gone, embedded in the dust of my long dead planet, thrown again into space between stars. The rocky core of the planet that had been my home for almost ten billion years continually dragged against the star’s immense atmosphere, and fell to complete annihilation in the last beats of the stellar heart.

100 billion years: All around me, the universe has continued to expand and cool, but the expansion, originally slowing, has been steadily accelerating. Immense groups of stars, the galaxies, are moving away from each other faster and faster. Their light, which blazed in the distant universe, has dimmed and diminished as they rushed away.

And by now the expansion is so rapid that all of these distant galaxies have completely faded from view. Near me, stars continue to burn, but now set in the infinite blackness of a distant sky.

1 trillion years: The universe got older and my journey continued. Each time the story was the same; I’d swirl in space before gravity’s irresistible pull dragged me into a forming star. My diamond identity was rapidly lost on my first such plunge, with the immense pressures and temperatures ripping us into individual atoms of carbon. Eventually the star aged and died and I was spat back out into space.

While the story of stellar birth and stellar death repeated, I noticed that there was steadily less and less hydrogen, and more and more of the other elements, tumbling through interstellar space. And while I sometimes existed fleetingly in this molecule or that, I inevitably found myself pulled into the birth of a new star.

10 trillion years: I have passed through countless generations of stars, each time slightly different. Many of these have been relatively gentle affairs, but now and again I find myself caught up in a massive star, a star destined to explode when it dies.

And within this stellar forge, my identity was changed to heavier and heavier elements. But in the eventual cataclysm of stellar death, a supernova, the smashing of elements can create extraordinarily heavy collections of protons and neutrons, nuclei of gold and lead. I emerged from one explosion in a nucleus of nickel, 28 of us protons with 28 neutrons.

But this was not a happy nucleus, heaving and shaking. Such instability cannot last. Relief came when one of our many protons changed into a neutron, spitting out a positron and transforming us into cobalt. But as cobalt we did not settle; more heaves and shakes, and another proton transformed, until we were iron, and then we were calm.

50 trillion years: The cycle continues, with endless eons in empty space punctuated with the burst of excitement spent within a star. But through each cycle, there was less and less gas to be found in interstellar space, with many atoms locked up in the dead hearts of stars, dead hearts that simply sit in the darkness.

And the stars are different too. Instead of the bright burning of immense young stars, the darkness is punctuated with an innumerable sea of feeble, small stars, lighting the universe with their faint, red glow.

85 trillion years: I am dragged once more into a forming star. While I don’t realise it, this is the end of the cycle for me, as the puny star that is forming will never explode, will never shed its outer layers, never return me to the depths of deep space. More and more of my kin, the protons and neutrons, have an identical fate, a destiny to be locked seemingly forever into the last generations of stars to shine.

And deep within my final star, I am still hidden inside an iron nucleus. Around me, the nuclear furnace burns very slowly and very steadily, as some of the remaining hydrogen is burnt into helium, illuminating the universe with a watery glow.

100 trillion years: My star still gently shines, with many others scattered through the universe, but the raw fuel for the formation of stars, the gas between the stars, is gone. No more stars will ever be born.

The universe is a mix of the dead and the dying, the remnants of long dead stars, and the last of those still feebly burning, destined to join the graveyard when they exhaust the last of their nuclear fuel. From this point, only death and darkness face the universe.

110 trillion years: The inevitable has come, and my star has exhausted the last of its nuclear fuel. At its heart, the fires have gone out. My star has died, not with a bang, but with a very silent whimper.

And I, a single proton, am still locked inside my nucleus of iron, deep, deep within the star. It is still warm, a leftover heat from when the fires burnt, and atoms bounce and jostle, but it’s a dying heat as the star cools, leaking its radiation into space.

120 trillion years: The last star, aged and exhausted, has died, and the universe is filled with little more than fading embers. The gentle glow continues for a short while, but darkness returns, a darkness not seen for more than a hundred trillion years.

The universe feels like it’s entering its end days, but in reality an infinite future stretches ahead. In the darkness, I still sit, locked within the corpse of my long dead star.

10 quadrillion years: The last heat in my star has gone, radiated away into space, and we are as cold and dark as space itself. Everything has slowed to a crawl as the universe continues to wind down.   

But in the darkness, monsters lurk. Black holes, the crushed cores of massive dead stars, have been slowly slurping matter and eating the stellar dead, and in the dark they continue to feed, continue to grow. My remnant home is lucky, avoiding this fate, but many dead stars are crushed out of existence within the black hole’s heart.

10^31 years: Further countless eons have passed, eons where nothing happened, nothing changed. But now, in the darkness, something new is stirring, a slow, methodical activity as matter itself has started to melt. My kindred protons, protons that have existed since the birth of time, are vanishing from existence, decaying to be replaced with other small particles.

My own remnant star is slowly falling apart, as individual atoms decay and break down. My own iron home is also disintegrating around me, with protons steadily decaying away. All of the dead stars are steadily turning to dust.

10^34 years: The stars are gone, and I find myself alone, a single proton sitting in the blackness of space. In the darkness around me, protons are still decaying away, still ceasing to be. The universe is slowly becoming a featureless sea, with little more than electrons and photons in the darkness.

Looking back over the immense history of the universe, it is difficult to remember the brief glory days, the days where the stars shone, with planets, and at least some life. But that has all gone, and is gone forever.

10^35 years: There are very few of us protons left, and I am amongst the last. I know the inevitable will come soon, and I too will cease to exist, and will return to the ephemeral state that existed before my birth.

I will be gone, but there are still things hidden in the darkness. Even the black holes eventually die, spitting decaying particles into the void. And after 10^100 years, even this will end as the last black hole dies away. And as it does so, the universe will enter into the last night, a night that will truly last forever.
  


I’ve lived a long and eventful life…



by Cusp (noreply@blogger.com) at January 22, 2017 01:27 AM

January 19, 2017

Lubos Motl - string vacua and pheno

A monstrously symmetric cousin of our heterotic Universe
Natalie M. Paquette, Daniel Persson, and Roberto Volpato (Stanford, Sweden, Italy) published a mathematically pretty preprint based on the utterly physical construction of the heterotic string.
BPS Algebras, Genus Zero, and the Heterotic Monster
Well, this paper elaborates upon their previous PPV1 paper which is exactly 1 year old now but I am sure that you will forgive me a 1-year delay in the reporting.

It's just remarkable that something so mathematically exceptional – by its symmetries – may be considered "another solution" to the same spacetime equations that also admit our Universe as a solution.



I still consider the \(E_8\times E_8\) heterotic string to be the most well-motivated candidate description of Nature including quantum gravity. Dualities probably admit other descriptions as well – F-theory, M-theory, braneworlds – but the heterotic string may be the "closest one" or the "most weakly coupled" among all the descriptions.

Heterotic string theory describes our Universe as a 10-dimensional spacetime occupied by weakly coupled strings whose 2-dimensional world sheet is a "hybrid" ("heterosis" is "hybrid vigor", the ability of offspring to surpass the average of both parents). The left-moving excitations on the world sheet are taken from the \(D=26\) bosonic string theory while the right-moving ones are those from the \(D=10\) fermionic string theory (with the \(\mathcal{N}=1\) world sheet supersymmetry).

Because the critical dimensions don't agree, the remaining \(D_L-D_R=26-10=16\) left-moving dimensions have to be compactified on the torus deduced from an even self-dual lattice (or fermionized to 32 fermions whose boundary conditions must be modular invariant). There are two even self-dual lattices in 16 dimensions and we obtain theories with spacetime gauge groups \(SO(32)\) or \(E_8\times E_8\). Both of them have rank \(16\) and dimension \(496\).




Six of the remaining 9+1 dimensions of the \(E_8\times E_8\) heterotic string may be compactified on something like a Calabi-Yau manifold which breaks the \(\mathcal{N}=4\) spacetime supersymmetry of the heterotic string to a realistic \(\mathcal{N}=1\), allows one to break \(E_8\times E_8\) to a grand unified group such as \(E_6\) or \(SO(10)\), and we may end up with at least semi-realistic (all qualitative things up to some level surely agree) effective field theory at low energies.




Natalie and co-authors are de facto doing "something equally good" as far as the validity and consistency of the heterotic compactification is concerned. The outcome is more interesting mathematically and less relevant physically, however. They're just compactifying the \(26\) left-moving and \(10\) right-moving dimensions of the heterotic string differently than in the most realistic vacua. Well, for the bosonic (left-moving) side and the fermionic/superstring (right-moving) part they choose the following world sheet conformal field theories:
  1. Monstrous moonshine module of Frenkel, Lepowsky, and Meurman (or its orbifolds)
  2. the Conway module
It's absolutely wonderful. The first, left-moving side basically compactifies \(24\) bosons on the torus obtained from the 24-dimensional Leech lattice. It's another even self-dual lattice but in dimension \(24\) – the only one in that dimension that contains no vectors whose squared length is equal to two. The shortest nonzero vectors have the squared length equal to four – which is why no continuous Lie symmetry (such as those from the lattices \(\Gamma_{16}\) and \(\Gamma_8\oplus\Gamma_8\)) arises.

That CFT was known to explain the monstrous moonshine – the appearance of the number \(196,883.5\pm 0.5\) both in \(j\)-invariants and in representations of the monster group. The monster group has \(8\times 10^{53}\) elements or so. It's important to realize that not all the discrete transformations in the monster group are represented as geometric actions on the Leech lattice. Some of them act in more stringy ways – just like when string theory on the self-dual circle is capable of enhancing the \(U(1)\times U(1)\) "Kaluza-Klein plus B-field \(p\)-form gauge invariance" symmetry to an \(SU(2)\times SU(2)\).
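
To see the coincidence concretely, here is a small, self-contained Python sketch (my own illustration, not taken from the paper) that builds the \(q\)-expansion of the \(j\)-invariant out of the Eisenstein series \(E_4\) and \(E_6\) and prints the first coefficients, including the famous \(196,884 = 196,883+1\):

```python
# Compute the first coefficients of the modular j-invariant from
#   E4(q) = 1 + 240 * sum_{n>=1} sigma_3(n) q^n
#   E6(q) = 1 - 504 * sum_{n>=1} sigma_5(n) q^n
# via j = E4^3 / Delta, Delta = (E4^3 - E6^2) / 1728.
# Truncated q-series are stored as lists of integer coefficients.

N = 6  # number of q-powers to keep

def sigma(n, k):
    """Sum of the k-th powers of the divisors of n."""
    return sum(d**k for d in range(1, n + 1) if n % d == 0)

def mul(a, b):
    """Product of two truncated q-series."""
    c = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] += ai * bj
    return c

E4 = [1] + [240 * sigma(n, 3) for n in range(1, N)]
E6 = [1] + [-504 * sigma(n, 5) for n in range(1, N)]

E4cubed = mul(mul(E4, E4), E4)
Delta = [(a - b) // 1728 for a, b in zip(E4cubed, mul(E6, E6))]

# Delta = q - 24 q^2 + ..., so j = E4^3 / Delta starts at q^(-1).
# Long division by D = Delta/q, whose constant term is 1.
D = Delta[1:] + [0]
j, rem = [0] * N, E4cubed[:]
for k in range(N):
    j[k] = rem[k] // D[0]          # coefficient of q^(k-1) in j
    for i in range(k, N):
        rem[i] -= j[k] * D[i - k]

print(j)  # [1, 744, 196884, 21493760, 864299970, 20245856256]
```

The third entry, \(196,884\), is the dimension of the smallest faithful monster representation plus one – exactly the half-integer joke above.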

If you only look at the geometric symmetries of the Leech lattice, you really search for the group of automorphisms of the Leech lattice. This group is known as the Conway group \({\rm Co}_0\) and its number of elements is much smaller than that of the monster group, just \(8\times 10^{18}\) or so. It's the largest one among the so-called Conway groups. However, it is the three related groups \({\rm Co}_1,{\rm Co}_2,{\rm Co}_3\) that belong among the 26 or 27 "sporadic simple groups" in the classification of the finite simple groups.

\({\rm Co}_1\) is a simple quotient of \({\rm Co}_0\) by its \(\mathbb{Z}_2\) center (the 24-dimensional parity) – so the sporadic \({\rm Co}_1\) group simply has 1/2 of the number of elements of \({\rm Co}_0\). The groups \({\rm Co}_2,{\rm Co}_3\) are subgroups of \({\rm Co}_0\) that also leave a vector of type 2 or type 3 unchanged, respectively.

Aside from the monster group and the three sporadic Conway groups, the remaining 22 full-blown sporadic groups are the baby monster, five Mathieu groups, four Janko groups, three Fischer groups, Higman-Sims, McLaughlin, Held, Rudvalis, Suzuki, O'Nan, Harada-Norton, Lyons, and Thompson group, while Tits group is sometimes counted as the 27th sporadic group. Mathematics is weird – these groups look almost as arbitrary as the names and nationalities of their human namesakes but they are objective facts about mathematics that every mathematically literate E.T. civilization agrees with.

OK, the funny construction behind this paper combines the \(c_L=24\) bosons on the Leech lattice, along with a \(c_R=12\) conformal field theory for the supersymmetric side whose symmetry is the Conway group. These two ingredients may be combined to produce a consistent two-dimensional, modular-invariant, conformal field theory coupled to gravity – also known as a perturbative string theory.

(The supersymmetric, right-moving, Conway side has a smaller group and plays a correspondingly smaller role in the hybrid construction, the role of a "passive spectator". The usage of a subgroup for the fermionic side seems reminiscent of the fact that the 32 fermionic supercharges in string theory only transform under a compact subgroup \(Spin(16)\) of \(E_{8(8)}\) or similar groups: the "bulk" of the exceptional generators of the U-duality group isn't manifest even though the states constructed with the help of the fermions ultimately transform under the whole exceptional U-duality group.)

Natalie et al. study their model in quite some detail. They verify some special property of the CFT that is seen on the sphere – and that involves some elements in \(SL(2,\mathbb{R})\) outside the modular group \(SL(2,\mathbb{Z})\). They also look at all the BPS states in the second-quantized string theory – those would be actual BPS states in the spacetime that is a sibling of the spacetime around us. And they find out that these BPS states form a representation of a new kind of "staggeringly stinky animal", Borcherds's and Carnahan's "generalized Kac–Moody algebras \(\mathfrak{m}_g\)". (Carnahan added some "twisted" cousins of the original construction by Borcherds.)

I had to use a new phrase, "staggeringly stinky animal", because all simpler phrases such as monsters and beasts had already been reserved by other important mathematical structures in this enterprise. :-) It is not hard to see that these generalized Kac–Moody algebras are a "sweet bastard hybrid" of the ordinary Kac–Moody algebras based on continuous groups and the discrete monster group.

Much of the paper is written in a physicist-friendly language but at the end, they switch to the language of axiomatic vertex operator algebras, a jargon of mathematicians designed so that you wouldn't even necessarily recognize that they're studying a perturbative string theory.



A twelve-minute talk – given on the day when LIGO announced the waves – by then graduate student (!, she already has 10+1 papers; soon-to-be a Burke fellow at Caltech) Natalie Paquette about the "unreasonable effectiveness of physics in mathematics", a thought-provoking reversal of Wigner's well-known title. After a brief introduction to Wigner, string theory, and compactification, she gives her edition of the story of how string physicists have hilariously trumped mathematicians in the counting of the spheres within the quintic hypersurface. (Most people in Palo Alto will surely try hard to cleverly permute my "hilariously trumped" and experiment with phrases such as "trumpishly hillarized", but I predict that at the end, spellcheckers will confirm that they will have lost this battle just like Hillary has.)

It's sort of metaphysically thrilling to realize that these almost maximally, sporadically, staggeringly symmetric constructions are siblings of the heterotic compactification that may be responsible for the world we inhabit. Because of the relationship of these siblings, I think that we should appreciate that in some very early epoch of cosmology, the Universe was deciding whether it wanted to live in the Monster \(\times\) Conway heterotic hybrid described in this paper; or in one of the relatively boring Calabi-Yau-like compactifications that generates the particle physics we know.



The Monster \(\times\) Conway heterotic VOA hybrid is smart and beautiful, not to mention imaginary.

We may even suggest that our Universe has spent some Planckian time in the Monster \(\times\) Conway hybrid before the monstrous and generalized Kač-Moody symmetries were broken, to yield the world that we know, love, and are sometimes annoyed by. In quantum field theory, we may only imagine our world as a broken phase of grand unified theories with groups like \(E_6\) which are large but not "really" qualitatively different from the Standard Model group. On the other hand, string theory allows us to conjecture that our world is also a broken phase of a theory that had huge symmetries such as the monster group to start with.

The process of the breaking of this monster group symmetry is very different from the Higgs mechanism in quantum field theory when it comes to the technical details but these mechanisms may be fundamentally equally relevant and important.

by Luboš Motl (noreply@blogger.com) at January 19, 2017 10:05 PM

Symmetrybreaking - Fermilab/SLAC

Matter-antimatter mystery remains unsolved

Measuring with high precision, physicists at CERN found that a property of antiprotons perfectly mirrors that of protons.

Stefan Ulmer, spokesperson of BASE collaboration, points at a screen with data about a 405-day-old antiproton

There is little wiggle room for disparities between matter and antimatter protons, according to a new study published by the BASE experiment at CERN.

Charged matter particles, such as protons and electrons, all have an antimatter counterpart. These antiparticles appear identical in every respect to their matter siblings, but they have an opposite charge and an opposite magnetic property. This stubborn symmetry is a head-scratcher for cosmologists who want to know why matter triumphed over antimatter in the early universe.

“We’re looking for hints,” says Stefan Ulmer, spokesperson of the BASE collaboration. “If we find a slight difference between matter and antimatter particles, it won’t tell us why the universe is made of matter and not antimatter, but it would be an important clue.”

Ulmer and his colleagues working on the BASE experiment at CERN closely scrutinize the properties of antiprotons to look for any minuscule divergences from protons. In a paper published today in the journal Nature Communications, the BASE collaboration at CERN reports the most precise measurement ever made of the magnetic moment of the antiproton.

“Each spin-carrying charged particle is like a small magnet,” Ulmer says. “The magnetic moment is a fundamental property which tells us the strength of that magnet.”

The BASE measurement shows that the magnetic moments of the proton and antiproton are identical, apart from their opposite signs, within the experimental uncertainty of 0.8 parts per million. The result improves, by a factor of six, the precision of the previous best measurement, made by the ATRAP collaboration in 2013, also at CERN. This new measurement shows an almost perfect symmetry between matter and antimatter particles, further narrowing the room for discrepancies that might have explained the cosmic asymmetry between matter and antimatter.

The measurement was made at the Antimatter Factory at CERN, which generates antiprotons by first crashing normal protons into a target and then focusing and slowing the resulting antimatter particles using the Antiproton Decelerator. Because matter and antimatter annihilate upon contact, the BASE experiment first traps antiprotons in a vacuum using sophisticated electromagnetics and then cools them to about one degree above absolute zero. These electromagnetic reservoirs can store antiparticles for long periods of time; in some cases, over a year. Once in the reservoir, the antiprotons are fed one-by-one into a trap with a superimposed magnetic bottle, in which the antiprotons oscillate along the magnetic field lines. Depending on their north-south alignment in the magnetic bottle, the antiprotons will vibrate at two slightly different rates. From these oscillations (combined with nuclear magnetic resonance methods), physicists can determine the magnetic moment.
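
For a feeling of how a magnetic moment falls out of two measured frequencies, here is a toy Python sketch (my own illustration with rough, assumed numbers, not the BASE analysis): in a Penning trap, the g-factor is twice the ratio of the spin-precession (Larmor) frequency to the cyclotron frequency, and the magnetic moment in nuclear magnetons is g/2.

```python
# Toy illustration (not the BASE analysis): g = 2 * nu_L / nu_c.
# Both frequencies below are rough, assumed values for a ~2 tesla trap.

nu_c = 29.656e6   # cyclotron frequency in Hz (assumed)
nu_L = 82.824e6   # Larmor (spin-precession) frequency in Hz (assumed)

g = 2 * nu_L / nu_c
print(f"g-factor: {g:.4f}")                  # ~5.585
print(f"magnetic moment: {g / 2:.4f} mu_N")  # ~2.793 nuclear magnetons
```

The experimental art lies entirely in pinning those two frequencies down to the parts-per-million level and beyond.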

The challenge with this new measurement was developing a technique sensitive to the minuscule differences between antiprotons aligned with the magnetic field versus those anti-aligned.

“It’s the equivalent of determining if a particle has vibrated 5 million times or 5 million-plus-one times over the course of a second,” Ulmer says. “Because this measurement is so sensitive, we stored antiprotons in the reservoir and performed the measurement when the antiproton decelerator was off and the lab was quiet.”

BASE now plans to measure the antiproton magnetic moment using a new trapping technique that should enable a precision at the level of a few parts per billion—that is, a factor of 200 to 800 improvement.

Members of the BASE experiment hope that a higher level of precision might provide clues as to why matter flourishes while cosmic antimatter lingers on the brink of extinction.

“Every new precision measurement helps us complete the framework and further refine our understanding of antimatter’s relationship with matter,” Ulmer says.

by Sarah Charley at January 19, 2017 04:59 PM

January 18, 2017

Axel Maas - Looking Inside the Standard Model

Can we tell when unification works? - Some answers.
This time, the following is a guest entry by one of my PhD students, Pascal Törek, writing about the most recent results of his research, especially our paper.

Some time ago the editor of this blog offered me the chance to write about my PhD research here. Since I have now gained some insight and collected first results, I think this is the best time to do so.

In a previous blog entry, Axel explained what I am working on and which questions we try to answer. The most important one was: “Does the miracle repeat itself for a unified theory?”. Before I answer this question and explain what is meant by “miracle”, I want to recap some things.

The first thing I want to clarify is what a unified or grand unified theory is. The standard model of particle physics describes all the interactions (neglecting gravity) between elementary particles. Those interactions or forces are called the strong, weak and electromagnetic forces. All these forces or sectors of the standard model describe different kinds of physics. But at very high energies it could be that these three forces are just different parts of one unified force. Of course a theory of a unified force should also be consistent with what has already been measured. What usually comes along in such unified scenarios is that next to the known particles of the standard model, additional particles are predicted. These new particles are typically very heavy, which makes them very hard to detect in experiments in the near future (if one of those unified theories really describes nature).

What physicists often use to make predictions in a unified theory is perturbation theory. But here comes the catch: what one does in this framework is something rather arbitrary, namely to fix a so-called "gauge". This rather technical term just means that we have to use a mathematical trick to make calculations easier. Or to be more precise, we have to use that trick to even perform a calculation in perturbation theory in those kinds of theories, which would be impossible otherwise.

Since nature does not care about this man-made choice, every quantity which could be measured in experiments must be independent of the gauge. But this is exactly what conventional perturbation theory violates for the elementary particles: they depend on the gauge. An even more peculiar thing is that the particle spectrum (or the number of particles) predicted by these kinds of theories also depends on the gauge.
This problem appears already in the standard model: what we call the Higgs, W, Z, electron, etc. depends on the gauge. This is pretty confusing, because those particles have been measured experimentally but should not have been observed like that if you take the theory seriously.

This contradiction in the standard model is resolved by a certain mechanism (the so-called "FMS mechanism") which maps quantities that are independent of the gauge to the gauge-dependent objects. Those gauge-independent quantities are so-called bound states. What you essentially do is "glue" the gauge-dependent objects together in such a way that the result does not depend on the gauge. This is exactly the miracle I wrote about in the beginning: one interprets something (a gauge-dependent object such as the Higgs) as if it were observable, and you indeed find this something in experiments. The correct theoretical description is then in terms of bound states, and there exists a one-to-one mapping to the gauge-dependent objects. This is the case in the standard model, and it seems like a miracle that everything fits so perfectly that everything works out in the end. The claim is that you see those bound states in experiments and not the gauge-dependent objects.
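
To see in miniature why gauge-dependent quantities are treacherous while "glued" combinations are safe, here is a deliberately tiny numerical toy (my own illustration, nothing like the actual lattice computations): averaging a complex "field" over random gauge copies washes it out, while a gauge-invariant composite built from it survives.

```python
# Tiny toy: under a U(1) gauge rotation, phi -> exp(i*alpha) * phi.
# Averaging phi itself over random gauge copies gives ~0, while the
# gauge-invariant combination |phi|^2 is untouched.
import numpy as np

rng = np.random.default_rng(0)
phi = 1.3 + 0.4j                      # some fixed field value
alphas = rng.uniform(0.0, 2.0 * np.pi, 100_000)
copies = np.exp(1j * alphas) * phi    # random gauge copies of phi

print(np.mean(copies))                # ~0: gauge-dependent average vanishes
print(np.mean(np.abs(copies) ** 2))   # ~1.85: the invariant survives
```

In the real problem the "gluing" is of course far more subtle, but the moral is the same: only gauge-invariant combinations can be compared with experiment.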

However, it was not clear whether the FMS mechanism also works in a grand unified theory ("Does the miracle repeat itself?"). This is exactly what my research is about. Instead of taking a realistic grand unified theory we decided to take a so-called "toy theory". What is meant by that is that this theory cannot describe nature but rather covers the most important features of such theories. The reason is simply that I use simulations for answering the question raised above, and due to time constraints and restricted resources a toy model is more feasible than a realistic model. By applying the FMS mechanism to the toy model I found that there is a discrepancy with perturbation theory, which was not the case in the standard model. In principle there were three possible outcomes: the mechanism works in this model and perturbation theory is wrong, the mechanism fails and perturbation theory gives the correct result, or both are wrong. So I performed simulations to see which statement is correct, and what I found is that only the FMS mechanism predicts the correct result and perturbation theory fails. As a theoretician this result is very pleasing, since we like to have nature independent of an arbitrarily chosen gauge.

The question you might ask is: "What is it good for?" Since we know that the standard model is not the theory which can describe everything, we look for theories beyond the standard model, as for instance grand unified theories. There are many of these kinds of theories on the market and there is yet no way to check each of them experimentally. What one can do now is to use the FMS mechanism to rule some of them out. This is done by, roughly speaking, applying the mechanism to the theory you want to look at, counting the number of particles predicted by the mechanism, and comparing it to the number of particles of the standard model. If there are more, the theory is probably a good candidate to study; if not, you can throw it away.

Right now Axel, a colleague from Jena University, and I are looking at more realistic grand unified theories and trying to find general features concerning the FMS mechanism. I am sure Axel or maybe I will keep you updated on this topic.

by Axel Maas (noreply@blogger.com) at January 18, 2017 04:06 PM

Georg von Hippel - Life on the lattice

January 17, 2017

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

A new year, a new semester

I always enjoy the start of the second semester. There’s usually a great atmosphere around the college – after long weeks of quiet, it’s great to see the students back and all the restaurants, shops and canteens back open. The students themselves always seem to be in good form too. I suspect it’s the prospect of starting afresh with new modules, one of the benefits of semesterisation.

I’m particularly enjoying the start of term this year as I managed to finish a hefty piece of research before the teaching semester got under way. I’ve been working steadily on the project, a review of a key paper published by Einstein in 1917, since June 1st, so it’s nice to have it off my desk for a while. Of course, the paper will come back in due course with corrections and suggestions from the referees, but I usually enjoy that part of the process.

In the meantime, I’d forgotten how much I enjoy teaching, especially in the absence of a great cloud of research to be done in the evenings. One of the courses I’m teaching this semester is a history of the atomic hypothesis. It’s fascinating to study how the idea emerged from different roots: philosophical considerations in ancient Greece, considerations of chemical reactions in the 18th and 19th centuries, and considerations of statistical mechanics in the 19th century. The big problem was how to test the hypothesis: at least until a brilliant young patent clerk suggested that the motion of small particles suspended in water might betray the presence of millions of water molecules. Einstein’s formula was put to the test by the French physicist Jean Perrin in 1908, and it is one of Einstein’s great triumphs that by 1910, most scientists no longer talked of the ‘atomic hypothesis’, but of ‘atoms’.


In 1905, a young Albert Einstein developed a formula describing the motion of particles  suspended in a liquid, based on the hypothesis that the liquid was made up of millions of molecules. In 1908, the French physicist Jean Perrin demonstrated that the motion of such particles matched Einstein’s formula, giving strong support for the atomic hypothesis.  

For more on Perrin’s experiment see here
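
As a rough illustration of the formula in question (my own Perrin-like numbers, not taken from the post): Einstein's 1905 result says that a small sphere of radius r in a liquid of viscosity eta at temperature T diffuses with mean squared displacement ⟨x²⟩ = 2Dt, where D = k_BT/(6πηr). A few lines of Python show the micron-scale wandering Perrin could track under a microscope:

```python
# Einstein (1905): <x^2> = 2 D t with D = k_B T / (6 pi eta r).
# All numbers are illustrative, roughly matching Perrin's grains in water.
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T   = 293.0          # room temperature, K
eta = 1.0e-3         # viscosity of water, Pa*s
r   = 0.5e-6         # grain radius, m (about Perrin's micron-sized grains)

D = k_B * T / (6 * math.pi * eta * r)
t = 60.0             # observation time, s
rms = math.sqrt(2 * D * t)

print(f"D = {D:.2e} m^2/s")                                # ~4.3e-13 m^2/s
print(f"rms displacement after {t:.0f} s: {rms*1e6:.1f} microns")  # ~7 microns
```

Displacements of a few microns per minute are exactly what a patient observer with a good microscope can measure, which is what made the test feasible.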

 


by cormac at January 17, 2017 05:47 PM

Symmetrybreaking - Fermilab/SLAC

The value of basic research

How can we measure the worth of scientific knowledge? Economic analysts give it a shot.

Header Image: Economic Impact

Before building any large piece of infrastructure, potential investors or representatives from funding agencies or governments have to decide whether it’s worth it. Teams of economists perform a cost-benefit analysis to help them determine how a project will affect a region and whether it makes sense to back and build it. 

But when it comes to building infrastructure for basic science, the process gets a little more complicated. It’s not so easy to pin an exact value on the benefits of something like the Large Hadron Collider.

“The main goal is priceless and therefore has no price attached,” says Stefano Forte, a professor of theoretical physics at the University of Milan and part of a team that developed a new method of economic analysis for fundamental science. “We give no value to discovering the Higgs boson in the past or supersymmetry or extra dimensions in the future, because we wouldn’t be able to say what the value of the discovery of extra dimensions is.”

Forte’s team was co-led by two economists, academic Massimo Florio, also of the University of Milan, and private business consultant Silvia Vignetti. They answered a 2012 call by the European Investment Bank’s University Sponsorship Program, which provides grants to university research centers, for assistance with this issue. The bank funded their research into a new way to evaluate proposed investments in science.

Before anyone can start evaluating any sort of impact, they have to define what they’re measuring. Generally, economic impact analyses are highly local, measuring exclusively money flowing in and out of a particular area. 

Because of the complicated nature of financing any project, the biggest difficulty for economists performing an analysis is usually coming up with an appropriate counterfactual: If the project isn’t built, what will happen? As Forte asks, “If you hadn’t spent the money there, where else would you have spent it, and are you sure that by spending it there rather than somewhere else you actually gain something?” 

Based on detailed information about where a scientific collaboration intends to spend their money, economists can take the first step in painting a picture of how that funding will affect the region. The next step is accounting for the secondary spending that this brings.

Companies are paid to do construction work for a scientific project, “and then it sort of cascades throughout the region,” says Jason Horwitz of Anderson Economic Group, which regularly performs economic analyses for universities and physics collaborations. “As they hire more people, the employees themselves are probably going to local grocery stores, going to local restaurants, they might go to a movie now and then—there’s just more local spending.”  

These first parts of the analysis account only for the tangible, concrete-and-steel process of building and maintaining an experiment, though. 

“If you build a bridge, the main benefit is from the people who use the bridge—transportation of goods over the bridge and whatnot,” Forte says. But the benefit of constructing a telescope array or a huge laser interferometer is knowledge-formation, “which is measured in papers and publications, references and so on,” he says. 

One way researchers like Horwitz and Forte have begun to assign value to such projects is by measuring the effect of the project on the people who run it. Like attending university, working on a scientific collaboration gives you an education—and an education changes your earning capabilities. 

“Fundamental research has a huge added value in terms of human capital formation, even if you work there for two years and then you go and work in a company on Wall Street,” Forte says. Using the same methods used by universities, they found doing research at the LHC would raise students’ earning potential by about 11 percent over a 40-year career.
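
As a back-of-the-envelope sketch of how such a premium enters the books (all numbers below are my own assumptions, not from the study), one discounts the extra earnings over a career back to a present value:

```python
# Toy present-value calculation of an assumed salary premium.
# Baseline salary and discount rate are invented for illustration;
# the 11% uplift over a 40-year career is the figure quoted above.

baseline = 50_000.0   # assumed annual salary without the experience
premium  = 0.11       # ~11% earnings uplift
discount = 0.03       # assumed real discount rate
years    = 40

pv = sum(baseline * premium / (1 + discount) ** t
         for t in range(1, years + 1))
print(f"present value of the premium: {pv:,.0f}")  # ~127,000 in salary units
```

Multiplied over the thousands of students and postdocs who pass through a large collaboration, this human-capital term becomes a sizable line in the cost-benefit ledger.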

This method of measuring the value of scientific projects still has limitations. In it, the immeasurable, grander purpose of a fundamental science experiment is still assigned no value at all. When it comes down to it, Forte says, if all we cared about were a big construction project, technology spinoffs and the earning potential of students, we wouldn’t have fundamental physics research. 

“The actual purpose of this is not a big construction project,” Horwitz says. “It’s to do this great research which obviously has other benefits of its own, and we really don’t capture any of that.” Instead, his group appends qualitative explanations of the knowledge to be gained to their economic reports. 

Forte explains, “The fact that this kind of enterprise exists is comparable and evaluated in the same way as, say, the value of the panda not being extinct. If the panda is extinct, there is no one who’s actually going to lose money or make money—but many taxpayers would be willing to pay money for the panda not to be extinct.” 

Forte and his colleagues found a 90 percent chance of the LHC’s benefits exceeding its costs (by 2.9 billion euros, they estimate). But even in the 10 percent chance that its economics aren’t quite so Earth-shaking, its discoveries could change the way we understand our universe.

by Leah Crane at January 17, 2017 04:53 PM

January 16, 2017

Clifford V. Johnson - Asymptotia

Just When You’re Settling…

You know how this goes: He's not going to let the matter drop. He's thinking of a comeback. Yeah, don't expect to finish that chapter any time soon...

-cvj Click to continue reading this post

The post Just When You’re Settling… appeared first on Asymptotia.

by Clifford at January 16, 2017 04:53 PM

Axel Maas - Looking Inside the Standard Model

Writing a review
As I have mentioned recently on Twitter, I have been given the opportunity, and the mandate, to write a review on Higgs physics. In particular, I should describe how the connection is established from the formal basics to what we see in experiment. While I will be writing a lot in the coming months about the insights I gain and the connections I make during writing, this time I want to talk about something different. About what this means, and what the purpose of reviews is.

So what is a review good for? Physics is not static. Physics is about our understanding of the world around us. It is about making things we experience calculable. This is done by phrasing so-called laws of nature as mathematical statements. Then making predictions (or explaining something that happens) is, essentially, just evaluating equations. At least in principle, because this may be technically extremely complicated and involved. There are cases in which our current abilities are not yet up to the task. But this is a matter of technology and, often, of resources in the form of computing time. Not some conceptual problem.

But there is also a conceptual problem. Our mathematical statements encode what we know. One of their most powerful features is that they tell us themselves that they are incomplete. That our mathematical formulation of nature only reaches this far. That there are things, things we do not even yet know what they are, which we cannot describe. Physics is at the edge of knowledge. But we are not lazy. Every day, thousands of physicists all around the world work together to push this edge a little bit farther out. Thus, day by day, we know more. And, in a global world, this knowledge is shared almost instantaneously.

A consequence of this progress is that the textbooks at the edge become outdated. Because we get a better understanding. Or we figure out that something is different than we thought. Or because we find a way to solve a problem which withstood solution for decades. However, what we find today or tomorrow is not yet confirmed. Every insight we gain needs to be checked. Has to be investigated from all sides. And has to be fitted into our existing knowledge. More often than not, some of these insights turn out to be false hopes. We thought we understood something, but there is still that one little hook, this one tiny loophole, which in the end makes our insight crumble. This can take a day or a month or a year, or even decades. Thus, insights should not directly become part of textbooks, which we use to teach the next generation of students.

To deal with this, a hierarchy of establishing knowledge has formed.

In the beginning, there are ideas and first results. These we tell our colleagues at conferences. We document the ideas and first results in write-ups of our talks. We visit other scientists and discuss our ideas. By this we find many loopholes and inadequacies already, and can drop things which do not work.

Results which survive this stage then become research papers. If we write such a paper, it is usually about something which we personally believe to be well founded. Which we have analyzed from various angles, and bounced off the wisdom and experience of our colleagues. We are pretty sure that it is solid. By making these papers accessible to the rest of the world, we put this conviction to the test of a whole community, rather than the few scientists who see our talks or whom we talk to in person.

Not all such results remain. In fact, many of them are later found to be only partly right, to still harbor an overlooked loophole, or to be invalidated by other results. But already at this stage a considerable number of insights survive.

Over years, and sometimes decades, insights in papers on a topic accumulate. With every paper which survives the scrutiny of the world, another piece of the puzzle fits. Thus, slowly a knowledge base emerges on a topic, carried by many papers. And then, at some point, the amount of knowledge provides a reasonably good understanding of the topic. This understanding is still frayed at the edges towards the unknown. There are still holes here and there to be filled. But overall, the topic is in fairly good condition. That is the point where a review is written on the topic. It summarizes the findings of the various papers, often hundreds of them. And it draws the big picture, and fits all the pieces into it. Its duty is also to point out all remaining problems, and where the ends are still frayed. But at this point the things are usually well established. They often will not change substantially in the future. Of course, no rule without exception.

Over time, multiple reviews will evolve the big picture, close all holes, and connect the frayed edges to neighboring topics. By this, another patch in the tapestry of a field is formed. It becomes a stable part of the fabric of our understanding of physics. When this process is finished, it is time to write textbooks. To make even non-specialist students of physics aware of the topic, its big picture, and how it fits into our view of the world.

Those things which are of particular relevance, since they form the fabric of our most basic understanding of the world, will eventually filter further down. At some point, they may become part of the textbooks at school, rather than university. And ultimately, they will become part of common knowledge.

This has happened many times in physics. Mechanics, classical electrodynamics, thermodynamics, quantum and nuclear physics, solid state physics, particle physics, and many other fields have passed through these levels of the hierarchy. Of course, the transitions which lead from the first inspiration to the final revelation of our understanding can often be seen only with hindsight. But in this way our physics view of the world evolves.

by Axel Maas (noreply@blogger.com) at January 16, 2017 10:34 AM

January 14, 2017

Andrew Jaffe - Leaves on the Line

Electoral woes and votes

Like everyone else in my bubble, I’ve been angrily obsessing about the outcome of the US Presidential election for the last two weeks. I’d like to say that I’ve been channelling that obsession into action, but so far I’ve mostly been reading and hoping (and being disappointed). And trying to parse all the “explanations” for Trump’s election.

Mostly, it’s been about what the Democrats did wrong (imperfect Hillary, ignoring the white working class, not visiting Wisconsin, too much identity politics), and what the Republicans did right (imperfect Trump, dog whistles, focusing on economics and security).

But there has been an ongoing strain of purely procedural complaint: that the system is rigged, but (ironically?) in favour of Republicans. In fact, this is manifestly true: liberals (Democrats) are more concentrated — mostly in cities — than conservatives (Republicans) who are spread more evenly and dominate in rural areas. And the asymmetry is more true for the sticky ideologies than the fungible party affiliations, especially when “liberal” encompasses a whole raft of social issues rather than just left-wing economics. This has been exacerbated by a few decades of gerrymandering. So the House of Representatives, in particular, tilts Republican most of the time. And the Senate, with its non-proportional representation of two per state, regardless of size, favours those spread-out Republicans, too (although party dominance of the Senate is less of a stranglehold for the Republicans than that of the House).

But one further complaint that I’ve heard several times is that the Electoral College is rigged, above and beyond those reasons for Republican dominance of the House and Senate: as we know, Clinton has won the popular vote, by more than 1.5 million as of this writing — in fact, my own California absentee ballot has yet to be counted. The usual argument goes like this: the number of electoral votes allocated to a state is the sum of the number of members of congress (proportional to the population) and the number of senators (two), giving a total of five hundred and thirty-eight. For the most populous states, the addition of two electoral votes doesn’t make much of a difference. New Jersey, for example, has 12 representatives, and 14 electoral votes, about a 15% difference; for California it’s only about 4%. But the least populous states (North and South Dakota, Montana, Wyoming, Alaska) have only one congressperson each, but three electoral votes, increasing the share relative to population by a factor of 3 (i.e., 300%). In a Presidential election, the power of a Wyoming voter is more than three times that of a Californian.

This is all true, too. But it isn’t why Trump won the election. If you changed the electoral college to allocate votes equal to the number of congressional representatives alone (i.e., subtract two from each state), Trump would have won 245 to 191 (compared to the real result of 306 to 232).[1] As a further check, since even the representative count is slightly skewed in favour of small states (since even the least populous state has at least one), I did another version where the electoral vote allocation is exactly proportional to the 2010 census numbers, but it gives the same result. (Contact me if you would like to see the numbers I use.)
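
A minimal sketch of these two reallocation experiments, with placeholder populations and winners standing in for the real census numbers and results, might look like this in Python:

```python
# Sketch of the two experiments: (1) strip the two "Senatorial" votes
# from each state's winner-take-all total, (2) share 538 votes in exact
# proportion to population. All entries below are placeholders; the full
# list of fifty states plus DC with 2010 census numbers would go here.

states = {
    # name: (population, statewide winner, current electoral votes)
    "Wyoming":    (563_626,    "R", 3),
    "New Jersey": (8_791_894,  "D", 14),
    "California": (37_253_956, "D", 55),
    # ...
}

def tally(ev_rule):
    """Total electoral votes per party under a given allocation rule."""
    totals = {"D": 0, "R": 0}
    for pop, winner, ev in states.values():
        totals[winner] += ev_rule(pop, ev)
    return totals

# Experiment 1: representatives only (subtract the two senator votes).
print(tally(lambda pop, ev: ev - 2))

# Experiment 2: votes exactly proportional to population (rounded;
# a careful version would repair rounding so the total stays 538).
total_pop = sum(pop for pop, _, _ in states.values())
print(tally(lambda pop, ev: round(538 * pop / total_pop)))
```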

Is the problem (I admit I am very narrowly defining “problem” as “responsible for Trump’s election”, not the more general one of fairness!), therefore, not the skew in vote allocation, but instead the winner-take-all results in each state? Maine and Nebraska already allocate their two “Senatorial” electoral votes to the statewide winner, and one vote for the winner of each congressional district, and there have been proposals to expand this nationally. Again, this wouldn’t solve the “problem”. Although I haven’t crunched the numbers myself, it appears that ticket-splitting (voting different parties for President and Congress) is relatively low. Since the Republicans retained control of Congress, their electoral votes under this system would be similar to their congressional majority of 239 to 194 (there are a few results outstanding), and would only get worse if we retain the two Senatorial votes per state. Indeed, with this system, Romney would have won in 2012.

So the “problem” really does go back to the very different geographical distribution of Democrats and Republicans. Almost any system which segregates electoral votes by location (especially if subjected to gerrymandering) will favour the more widely dispersed party. So perhaps the solution is to just to use nationwide popular voting for Presidential elections. This would also eliminate the importance of a small number of swing states and therefore require more national campaigning. (It could be enacted by a Constitutional amendment, or a scheme like the National Popular Vote Interstate Compact.) Alas, it ain’t gonna happen.


[1] I have assumed Trump wins Michigan, and I have allocated all of Maine to Clinton and all of Nebraska to Trump; see below.

by Andrew at January 14, 2017 08:54 PM
