Particle Physics Planet


November 17, 2017

ZapperZ - Physics and Physicists

Can A Simple Physics Error Cast Doubt On A da Vinci Painting?
It seems that the recently auctioned Leonardo da Vinci painting (sold for $450 million, no less) has what everyone seems to be calling a physics flaw. It involves the crystal orb that is being held in the painting.

A major flaw in the painting — which is the only one of da Vinci's that remains in private hands — makes some historians think it's a fake. The crystal orb in the image doesn't distort light in the way that natural physics does, which would be an unusual error for da Vinci.

My reaction when I first read this was that it is not as if da Vinci painted this live, with the actual Jesus Christ holding the orb in front of him. So either he made a mistake, or he knew what he was doing and didn't think it would matter. I don't think this observation is enough to call the painting a fake.

Still, it may make a good class example in Intro Physics optics.
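If you want to turn it into that class example, here is a minimal back-of-the-envelope sketch (my own numbers, not from the article): a solid transparent sphere of diameter \(D\) and refractive index \(n\) acts as a thick ball lens, with effective focal length (measured from its centre) and back focal distance

\[
f=\frac{nD}{4(n-1)}, \qquad b=f-\frac{D}{2},
\]

so for glass or rock crystal with \(n\approx 1.5\) one gets \(f\approx 0.75\,D\) and \(b\approx 0.25\,D\). Anything held more than a few centimetres behind a hand-sized orb should therefore appear inverted and strongly distorted when viewed through it, which is exactly the distortion the painting does not show.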

Zz.

by ZapperZ (noreply@blogger.com) at November 17, 2017 06:34 PM

Peter Coles - In the Dark

And then there were five….

…black hole mergers detected via gravitational waves, that is. Here are the key measurements for Number 5, codename GW170608. More information can be found here.

Here is the abstract of the discovery paper:

On June 8, 2017 at 02:01:16.49 UTC, a gravitational-wave signal from the merger of two stellar-mass black holes was observed by the two Advanced LIGO detectors with a network signal-to-noise ratio of 13. This system is the lightest black hole binary so far observed, with component masses \(12^{+7}_{-2}\,M_\odot\) and \(7^{+2}_{-2}\,M_\odot\) (90% credible intervals). These lie in the range of measured black hole masses in low-mass X-ray binaries, thus allowing us to compare black holes detected through gravitational waves with electromagnetic observations. The source’s luminosity distance is \(340^{+140}_{-140}\) Mpc, corresponding to redshift \(0.07^{+0.03}_{-0.03}\). We verify that the signal waveform is consistent with the predictions of general relativity.
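As a quick back-of-the-envelope aside (my own arithmetic, not from the paper), the combination best constrained by the waveform is the chirp mass, which for the quoted central values is

\[
\mathcal{M}=\frac{(m_1 m_2)^{3/5}}{(m_1+m_2)^{1/5}}\approx\frac{(12\times 7)^{3/5}}{19^{1/5}}\,M_\odot\approx 7.9\,M_\odot .
\]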

This merger seems to have been accompanied by a lower flux of press releases than previous examples…


by telescoper at November 17, 2017 04:38 PM

Peter Coles - In the Dark

Where The North Begins

I see that, once again, questions are being raised in the English media about where The North begins. I see it as symptomatic of the decline of educational standards that this issue is still being discussed, when it was settled definitively several years ago.  Let me once again put an end to the argument about what is The North and what isn’t.

For reference please consult the following map:

 

I think this map in itself proves beyond all reasonable doubt that `The North’ actually means Northumberland: the clue is in the name, really. It is abundantly clear that Manchester, Leeds, Liverpool, etc, are all much further South than The North. North Yorkshire isn’t in the North either, as any objective reading proves. All these places are actually in The Midlands.

If you’re looking for a straightforward definition of where The North begins, I would suggest the most sensible choice is the River Tyne, which formed the old Southern boundary of Northumberland. The nameless County shown on the map between Northumberland and Durham is `Tyne  & Wear’, a relatively recent invention which confuses the issue slightly, as including all of it in The North would be absurd.  Everyone knows that Sunderland is in the Midlands.

If this cartographical evidence fails to convince you, then I refer to a different line of argument. Should you take a journey by car up the A1 or M1 or, indeed, the A1(M) you will find signs like this at regular intervals:

This particular one demonstrates beyond any doubt that Leeds is not in The North. If you keep driving in a northerly direction you will continue to see signs of this type until you cross the River Tyne at Gateshead, at which point `The North’ disappears and is replaced by `Scotland’. It therefore stands to reason that The North begins at the River Tyne, and that the most northerly point of the Midlands is at Gateshead.

I rest my case.


by telescoper at November 17, 2017 01:52 PM

November 16, 2017

Christian P. Robert - xi'an's og

long journey to reproducible results [or not]

A rather fascinating article in Nature from last August [hidden under a pile of newspapers at home!], by Gordon J. Lithgow, Monica Driscoll and Patrick Phillips, about their endeavours to explain the divergent outcomes in the replications [or lack thereof] of an earlier experiment on anti-aging drugs tested on roundworms. Rather than dismissing the failures or blaming the other teams, the above researchers engaged for four years (!) in the titanic and grubby task of understanding the reason(s) for such discrepancies.

They found that, once most causes of discrepancy (like gentle versus rough lab technicians!) were eliminated, there remained two “types” of worms, short-lived and long-lived, for reasons yet unclear. “We need to repeat more experiments than we realized” is a welcome conclusion to this dedicated endeavour, worth repeating in different circles. And one apparently missing from the NYT coverage by Susan Dominus of the story of Amy Cuddy, a psychologist at the origin of the “power pose” theory that was later disputed for lack of reproducibility. An article whose main ideological theme is that Cuddy was singled out in the replication crisis because she is a woman and because her “power pose” theory aims at empowering women and minorities, rather than because she keeps delivering the same message, mostly outside academia, despite the lack of evidence and statistical backup. (Dominus’ criticisms of psychologists with “an unusual interest in statistics” and of Andrew’s repeated comments on the methodological flaws of the 2010 paper that started it all are thus particularly unfair. A Slate article published after the NYT coverage presents an alternative analysis of this affair. Andrew also posted on Dominus’ piece, with a subsequent humongous trail of comments!)


Filed under: Books, pictures, Statistics, University life Tagged: ageing, Amy Cuddy, Nature, NYT, power pose, psychology, replication crisis, roundworms, Slate, The New York Times

by xi'an at November 16, 2017 11:17 PM

Peter Coles - In the Dark

Convergence

Jackson Pollock, Convergence, 1952 oil on canvas; 93.5 inches by 155 inches. Albright-Knox Art Gallery, Buffalo, NY, US.


by telescoper at November 16, 2017 09:15 PM

Clifford V. Johnson - Asymptotia

Henry Jenkins Interview!

Just after waking up today I read Henry Jenkins' introduction to an interview that he did with me, posted on his fascinating blog (Confessions of an ACA-Fan: about culture, media, communication, and more). I was overcome with emotion for a moment there - he is very generous with his remarks about the book! What a great start to the day!

I recommend reading the interview in full. Part one is up now. It is a very in-depth [...] Click to continue reading this post

The post Henry Jenkins Interview! appeared first on Asymptotia.

by Clifford at November 16, 2017 05:24 PM

November 15, 2017

Peter Coles - In the Dark

Hic Sunt Leones

Just time for a very quick post, as today I travelled to Brighton to attend an inaugural lecture by Professor Antonella De Santo at the University of Sussex.

Antonella was the first female Professor of Physics at the University of Sussex and I’m glad to say she was promoted to a Chair during my watch as Head of the School of Mathematical and Physical Sciences at Sussex. That was about four years ago, so it has taken a while to arrange her inaugural lecture, but it was worth the wait to be able to celebrate Antonella’s many achievements.

The lecture was about the search for physics beyond the Standard Model using the ATLAS experiment at the Large Hadron Collider, with a focus on supersymmetry and possible candidates for dark matter. It was a very nice lecture that told a complex story through pictures, avoiding any difficult mathematics, and it was followed by a drinks reception during which I got to have a gossip with some former colleagues.

The title, by the way, stems from the practice among mediaeval cartographers of marking terra incognita with `Here be lions’ or `Here be dragons‘. I hasten to add that no lions were harmed during the talk.

Anyway, it was nice to have an excuse to visit Brighton again. The last time I was here was over a year ago. It was nice to see some familiar faces, especially the inestimable Miss Lemon, with whom I enjoyed a very nice curry after the talk!

Now for a sleep and the long journey back to Cardiff tomorrow morning!


by telescoper at November 15, 2017 11:37 PM

Christian P. Robert - xi'an's og

Le Monde puzzle [#1028]

Back to standard Le Monde mathematical puzzles (no further competition!), with this arithmetic one:

While n! cannot be a squared integer for n>1, does there exist 1<n<28 such that 28!(n!) is a square integer? Does there exist 1<n,m<28 such that 28!(n!)(m!) is a square integer? And what is the largest group of distinct integers between 2 and 27 such that the product of 28! by their factorials is a square?

The fact that n! cannot be a square follows from the occurrence of primes with exponent one in its prime decomposition. When considering 28!, several primes like 17, 19, and 23 appear exactly once, which means n must be at least 23; but then the prime 7, whose exponent is even (four) in 28!, has an odd exponent (three) in any n! with 23≤n<28 (the next multiple of 7 after 21 being 28 itself), so the product 28!(n!) cannot be a square for n<28. However, to keep up with the R resolution tradition, I started by representing all integers between 2 and 28 in terms of their prime decomposition:

primz=c(2,3,5,7,11,13,17,19,23)  #primes up to 28
dcmpz=matrix(0,28,9)             #dcmpz[i,j]: exponent of primz[j] in i
for (i in 2:28){
 for (j in 1:9){
    k=i
    while (k%%primz[j]==0){
      k=k%/%primz[j];dcmpz[i,j]=dcmpz[i,j]+1}}
}

since the prime factorisation of the factorials n! follows by cumulative sums (down the rows) of dcmpz, after which checking for one-term products

fctorz=apply(dcmpz,2,cumsum)  #fctorz[i,j]: exponent of primz[j] in i!
for (i in 23:28)
  if (max((fctorz[28,]+fctorz[i,])%%2)==0) print(i)

and two term products

for (i in 2:27)
 for (j in i:27)
  if (max((fctorz[28,]+fctorz[i,]+fctorz[j,])%%2)==0)
   print(c(i,j))

is easy and produces i=28 [no solution!] in the first case and (i,j)=(10,27) in the second case. For the final question, adding up to twelve terms together still produced solutions, so I opted for the opposite end by removing one term at a time and

for (a in 2:28)
  if (max(apply(fctorz[-a,],2,sum)%%2)==0) print(a)

exhibited a solution for a=14. Meaning that

2! 3! … 13! 15! … 28!

is a square.
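As a sanity check of the two reported solutions (assuming the fctorz matrix above is still in memory), one can verify that every prime exponent in the corresponding products is even; both calls should return TRUE:

all((fctorz[28,]+fctorz[10,]+fctorz[27,])%%2==0)  #28!(10!)(27!) is a square
all(colSums(fctorz[-14,])%%2==0)                  #2!...13!15!...28! is a square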


Filed under: Books, Kids Tagged: factorial, Le Monde, mathematical puzzle, prime numbers

by xi'an at November 15, 2017 11:17 PM

November 14, 2017

Christian P. Robert - xi'an's og

Darmois, Koopman, and Pitman

When [X’ed] seeking a simple proof of the Pitman-Koopman-Darmois lemma [that exponential families are the only families of distributions with constant support allowing for a sufficient statistic of fixed dimension], I came across a 1962 Stanford technical report by Don Fraser containing a short proof of the result. A proof that I do not fully understand, as it relies on the notion that the likelihood function itself is a minimal sufficient statistic.
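As a reminder of the standard textbook statement (not taken from Fraser's report), an exponential family has densities of the form

\[
f(x\mid\theta)=h(x)\,\exp\!\big\{\eta(\theta)^{\mathsf T}T(x)-A(\theta)\big\},
\]

so that an i.i.d. sample \(x_1,\dots,x_n\) admits the sufficient statistic \(\sum_{i=1}^n T(x_i)\), whose dimension does not grow with \(n\); the lemma states that, under regularity conditions and for a support free of \(\theta\), these are the only families enjoying this property.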


Filed under: Books, Statistics Tagged: cross validated, Don Fraser, exponential families, George Darmois, mathematical statistics, Pitman-Koopman theorem, proof, Stanford University, sufficient statistics

by xi'an at November 14, 2017 11:17 PM

Peter Coles - In the Dark

Merging Galaxies in the Early Universe

I just saw this little movie circulated by the European Space Agency.

The source displayed in the video was first identified by the European Space Agency’s now-defunct Herschel Space Observatory, and later imaged with much higher resolution using the ground-based Atacama Large Millimeter/submillimeter Array (ALMA) in Chile. It’s a significant discovery because it shows two large galaxies at quite high redshift (z=5.655) undergoing a major merger. According to the standard cosmological model this event occurred about a billion years after the Big Bang. The first galaxies are thought to have formed after a few hundred million years, but these objects are expected to have been much smaller than present-day galaxies like the Milky Way. Major mergers of the type apparently seen here are needed if structures are to grow sufficiently rapidly, through hierarchical clustering, to produce what we see around us now, about 13.7 Gyr after the Big Bang.

The ESA press release can be found here and, for more expert readers, the refereed paper (by Riechers et al.) can be found here if you have a subscription to the Astrophysical Journal, or for free on the arXiv here.

The abstract (which contains a lot of technical detail about the infra-red/millimetre/submillimetre observations involved in the study) reads:

We report the detection of ADFS-27, a dusty, starbursting major merger at a redshift of z=5.655, using the Atacama Large Millimeter/submillimeter Array (ALMA). ADFS-27 was selected from Herschel/SPIRE and APEX/LABOCA data as an extremely red “870 micron riser” (i.e., S_250<S_350<S_500<S_870), demonstrating the utility of this technique to identify some of the highest-redshift dusty galaxies. A scan of the 3mm atmospheric window with ALMA yields detections of CO(5-4) and CO(6-5) emission, and a tentative detection of H2O(211-202) emission, which provides an unambiguous redshift measurement. The strength of the CO lines implies a large molecular gas reservoir with a mass of M_gas=2.5×10^11(alpha_CO/0.8)(0.39/r_51) Msun, sufficient to maintain its ~2400 Msun/yr starburst for at least ~100 Myr. The 870 micron dust continuum emission is resolved into two components, 1.8 and 2.1 kpc in diameter, separated by 9.0 kpc, with comparable dust luminosities, suggesting an ongoing major merger. The infrared luminosity of L_IR~=2.4×10^13Lsun implies that this system represents a binary hyper-luminous infrared galaxy, the most distant of its kind presently known. This also implies star formation rate surface densities of Sigma_SFR=730 and 750Msun/yr/kpc2, consistent with a binary “maximum starburst”. The discovery of this rare system is consistent with a significantly higher space density than previously thought for the most luminous dusty starbursts within the first billion years of cosmic time, easing tensions regarding the space densities of z~6 quasars and massive quiescent galaxies at z>~3.

The word `riser’ refers to the fact that the measured flux increases with wavelength, from the range of wavelengths measured by Herschel/SPIRE (250 to 500 microns) up to 870 microns. The follow-up observations with higher spectral resolution are based on identifications of carbon monoxide (CO) and water (H2O) lines in the spectra, which imply the existence of large quantities of gas capable of fuelling an extended period of star formation.

Clearly a lot was going on in this system, a long time ago and a long way away!

 


by telescoper at November 14, 2017 05:52 PM

November 13, 2017

Christian P. Robert - xi'an's og

normal variates in Metropolis step

A definitely puzzled participant on X validated, confusing the Normal variate or variable used in the random-walk Metropolis-Hastings step with its Normal density… It took repeated efforts to point out the distinction. Especially as the originator of the question had a rather strong a priori about his or her background:

“I take issue with your assumption that advice on the Metropolis Algorithm is useless to me because of my ignorance of variates. I am currently taking an experimental course on Bayesian data inference and I’m enjoying it very much, i believe i have a relatively good understanding of the algorithm, but i was unclear about this specific.”

despite pondering the meaning of the call to rnorm(1)… I will keep this question in store to use in class when I teach Metropolis-Hastings in a couple of weeks.
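For the record, here is a minimal R sketch of the distinction (a toy example, not the code from the X validated thread): rnorm(1) produces the Normal variate that moves the chain, while the Normal density only enters through the target, the symmetric proposal density cancelling from the Metropolis-Hastings acceptance ratio.

target=function(x) dnorm(x)  #Normal density, used only in the target here
x=rep(0,1e4)
for (t in 2:1e4){
  prop=x[t-1]+rnorm(1,sd=.5) #rnorm(1): the Normal variate driving the random walk
  if (runif(1)<target(prop)/target(x[t-1])) x[t]=prop else x[t]=x[t-1]}
hist(x,prob=TRUE)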


Filed under: Books, Kids, R, Statistics, University life Tagged: cross validated, Gaussian random walk, Markov chain Monte Carlo algorithm, MCMC, Metropolis-Hastings algorithm, Monte Carlo Statistical Methods, normal distribution, normal generator, random variates

by xi'an at November 13, 2017 11:17 PM

The n-Category Cafe

HoTT at JMM

At the 2018 U.S. Joint Mathematics Meetings in San Diego, there will be an AMS special session about homotopy type theory. It’s a continuation of the HoTT MRC that took place this summer, organized by some of the participants to especially showcase the work done during and after the MRC workshop. Following is the announcement from the organizers.

We are pleased to announce the AMS Special Session on Homotopy Type Theory, to be held on January 11, 2018 in San Diego, California, as part of the Joint Mathematics Meetings (to be held January 10 - 13).

Homotopy Type Theory (HoTT) is a new field of study that relates constructive type theory to abstract homotopy theory. Types are regarded as synthetic spaces of arbitrary dimension and type equality as homotopy equivalence. Experience has shown that HoTT is able to represent many mathematical objects of independent interest in a direct and natural way. Its foundations in constructive type theory permit the statement and proof of theorems about these objects within HoTT itself, enabling formalization in proof assistants and providing a constructive foundation for other branches of mathematics.

This Special Session is affiliated with the AMS Mathematics Research Communities (MRC) workshop for early-career researchers in Homotopy Type Theory organized by Dan Christensen, Chris Kapulkin, Dan Licata, Emily Riehl and Mike Shulman, which took place last June.

The Special Session will include talks by MRC participants, as well as by senior researchers in the field, on various aspects of higher-dimensional type theory including categorical semantics, computation, and the formalization of mathematical theories. There will also be a panel discussion featuring distinguished experts from the field.

Further information about the Special Session, including a schedule and abstracts, can be found at: http://jointmathematicsmeetings.org/meetings/national/jmm2018/2197_program_ss14.html. Please note that the early registration deadline is December 20, 2017.

If you have any questions about the Special Session, please feel free to contact one of the organizers. We look forward to seeing you in San Diego.

Simon Cho (University of Michigan)

Liron Cohen (Cornell University)

Ed Morehouse (Wesleyan University)

by shulman (viritrilbia@gmail.com) at November 13, 2017 10:16 PM

Lubos Motl - string vacua and pheno

Weinstein's view on quanta, geometry is upside down
Big Think has posted the following 10-minute monologue by Eric Weinstein, a trader working for Peter Thiel; the guy who is not the author of Wolfram MathWorld (just a similar name, thanks psi_bar), a guy who promised us a theory of everything but from whom all we have got so far is some incoherent babbling, and the brother of a far-left ex-professor who has nevertheless become a target of some of his approximate comrades, namely fanatical reverse racists in academia.



Weinstein says that in the last 40 years we've made big progress in the "mathematics of field theory", which was good for quantum field theory and general relativity. OK, one could perhaps summarize the progress in this way, although I wouldn't. But in the following sentence, he complains that
we ended up geometrizing the quantum rather than quantizing gravity which we had wanted
and that's supposed to be "disappointing" because physicists only got a "golden age of mathematics of theoretical physics" rather than "golden age of theoretical physics". Wait a minute, this is quite a statement that deserves some commentary.




First of all, the opinion that "what we want is to quantize geometry" is a naive view of a mediocre undergraduate student or a crackpot who was told by his instructor: Sit down, be quiet, be obedient, and simply mechanically place a hat above every classical degree of freedom. If you place the hat really nicely and symmetrically, you will arrive at the quantum theory of gravity which is obtained from the classical theory by this straightforward procedure of "quantization".

"If you place the hat really nicely," the instructor continued, "you may not only earn an A but also become a professional theoretical physicist and maybe a new Einstein."




As Weinstein has correctly observed, this is not what happened. The people who have this idea about quantum gravity – who believe that this should have happened and it would have been better – are unimaginative, misguided, unimpressive individuals who have misunderstood basically all the important lessons about the relationship between gravity and the quantum that we have learned in recent decades, indeed. They belong to the broader community of loop quantum gravity, spin foam, causal dynamical triangulation, and Garrett Lisi cranks who simply aren't good physicists – they just love to star as them in front of the totally misinformed laymen reading the low-quality contemporary mainstream media.

Some quantum theories may be obtained by the process of quantization. For example, the quantum harmonic oscillator, hydrogen atom, other atoms, and most perturbative quantum field theories are more or less "uniquely" obtained from the combination of their classical limit and the universal principles of quantum mechanics. In some sense, the special information needed to define the particular quantum theory is already obtained in the classical theory. The quantum theory is an appendix, a derivative of its classical counterpart.
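To make the hat-placing procedure concrete with the most standard textbook example (an illustration added here, not Weinstein's): for the harmonic oscillator one promotes \(x,p\) to operators obeying \([\hat x,\hat p]=i\hbar\), and

\[
\hat H=\frac{\hat p^2}{2m}+\frac{1}{2}m\omega^2\hat x^2
\quad\Longrightarrow\quad
E_n=\hbar\omega\left(n+\tfrac{1}{2}\right),\qquad n=0,1,2,\dots
\]

Here the quantum theory really is fixed by its classical limit plus the universal rules of quantum mechanics.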

But that's only good for the situations from undergraduate textbooks.

Quantum theory isn't just some decoration of a classical theory. A quantum mechanical theory is "the main" player, the standalone and self-sufficient entity, and the right way to describe the relationship between the corresponding classical and quantum mechanical theory is that
the classical theory is a classical, \(\hbar\to 0\) limit, of the quantum theory.
The quantum mechanical theory is the main theory and, by the way, you may sometimes find the \(\hbar\to 0\) classical limit in the parameter space of the quantum mechanical theory – and/or the situations that it describes. I said sometimes because it's not even true that every quantum mechanical theory has a classical limit. Also, a quantum mechanical theory may have several classical limits. QED may be said to have the classical limit in the form of classical electrodynamics; but it also has a classical limit of mechanics of high-energy photons (that no longer exhibit the interference phenomena in practice because their wavelength is too tiny).

The (2,0) theory in six dimensions is an example of a theory that can't be obtained as a quantization of a classical theory, at least not in a known way that would know about all the properties of the (2,0) theory that have been established. In the realm of quantum gravity, 11-dimensional M-theory is the most far-reaching example of a theory that isn't any "quantization of a classical theory". Also, the Seiberg-Witten treatment of \(\mathcal{N}=2\) supersymmetric gauge theories in four dimensions involves monodromies on the moduli space that don't exist in the classical theory. Because weakly coupled regimes are exchanged with others under the monodromy, one may say that it's impossible to take any universal \(g\to 0\) classical limit of the quantum theory.

And this is just the beginning. The actual insights about the role of quantum mechanics in many theories that have been uncovered in recent 40 years are much deeper, more diverse, and more numerous. The idea that a quantum mechanical theory is just an appendix of a classical theory, and the task for a theoretical physicist is just to "place hats" i.e. "quantize" a classical theory in a nice way, has been proven to be utterly childish. I insist that everyone who still imagines the relationship between classical and quantum theories in this way hasn't gotten beyond basic undergraduate courses that depend on quantum mechanics and he has misunderstood the profound lessons of theoretical physics of recent 40 years completely and entirely.

Now, Weinstein has complained that we learned something I described above – instead of learning what he had wanted to learn, namely that it's enough to place hats on some heads and you may consider yourself a genius. It's "disappointing", we hear from Weinstein. Whom is it disappointing for? It's surely disappointing for mediocre physics students who just believed that their narrow-minded undergraduate ideas "what is enough to pass a course" are also enough to learn about the deepest secrets of the Universe. Sorry, Eric, but those aren't enough.

More generally, the very idea that "one should have learned exactly what he had wanted to learn from the beginning, otherwise he is disappointed" shows Weinstein's fundamentally anti-scientific approach to the truth. It's the whole point of research, especially scientific research, that the researchers don't know the answers to start with. If they had already known the answers to all the deep questions, the research would be a meaningless and worthless waste of time. Researchers often learn something other than what they expected and figure out that they were way too naive or had overlooked something; true physicists are really happy whenever that happens. Weinstein has neither the brain nor the heart of a physicist because he feels disappointed in these situations when something profoundly new and game-changing is actually learned.

What we have learned about the roles that quantum mechanics plays within particular quantum mechanical theories is amazing. And every person who has both talent and heart for theoretical physics is absolutely stunned by these advances and realizes that they have changed our view of the Universe forever and irreversibly. It's too bad that Eric Weinstein doesn't belong among these natural talents but there's no easy fix here.

To focus on another sentence by Weinstein, he says that it's "disappointing" that the progress occurred "only in mathematics" and "not in physics". I am sorry but by such propositions, and Weinstein has repeated them many times, he shows that he's very close to pseudointellectual garbage like Sabine Hossenfelder's "lost in math". In fact, the main point – of Ms Hossenfelder, Mr Horgan, Mr Weinstein, and even Mr Smolin, and probably others – seems to be exactly the same. Mathematics cannot be deep and it cannot matter for the truly important questions. Those must be decided and settled by some philosophical powers that transcend mathematics. So it's right to sling mud on everything that requires quasi-rigorous, careful, mathematical thinking. Hossenfelder, Horgan, Weinstein, Smolin, and others don't like it. Mathematics cannot become a king. It's at most a slave whose duty is to design hats of the nice shape for the real bosses who don't even have to know the relevant mathematics well.

I am sorry but this is complete hogwash. Mathematics is absolutely essential in theoretical physics. It's been essential since the very beginning – when Isaac Newton established physics as we know it and had to invent some branches of mathematics, especially calculus, to make it possible. And the importance of mathematics has only increased since the times of Isaac Newton. Of course mathematics plays an even more decisive role in theoretical physics than it did during Newton's life. It is the real king of the occupation – mathematics is both the creative and bold genius that looks for new ideas and fresh ways to organize them and connect them; as well as the final impartial arbiter that decides which ideas have to be eliminated.

If you want to look for important ideas in theoretical physics without mathematics, you're guaranteed to fail, Eric. If you want to separate the right ideas from the wrong ones without mathematics, you're bound to be a failed, unfair judge whose verdicts are uncorrelated to the truth – or worse. Equally importantly, the "progress in theoretical physics" and the "progress in the understanding what mathematics means for theoretical physics" are really exactly the same thing. If you don't get it, you're simply trying to do some theoretical physics that has nothing to do with mathematics – i.e. nothing to do with the careful thinking and the sophisticated delicate structures that have dominated theoretical physics for a very long time. You simply cannot get anywhere with this layman's attitude.

Mathematics is ultimately a vital language in theoretical physics that only expresses some principles, ideas, or laws in a rigorous and reliable way. But the principles, ideas, or laws have to be there in physics – so they are physical in character. Mathematics is needed to say what those exactly are; and it's needed to decide whether they're right, too. But as long as we talk about principles, ideas, and laws of physics, they are physical even if we need an arbitrary hardcore mathematical language for them to be described, commented upon, or predicted.

A simple example. The \(\mathcal{N}=4\) gauge theory in four dimensions has the \(SL(2,\mathbb{Z})\) S-duality group. I chose this example because it uses some mathematical objects and notation such as the symbol \(SL(2,\mathbb{Z})\) for a discrete group. So this duality group – found almost exactly 40 years ago – is an example of the progress in "mathematics of field theory" that Weinstein has acknowledged at the very beginning. But at the same moment, this group does immensely physical things with physical objects. It exchanges electrically charged particles with magnetic monopoles. Or it adds a multiple of the electric charge to the magnetic charge in order to redefine the latter. It maps limiting situations – simplified ways to discuss real situations – on each other, and so on.
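Concretely (a standard statement of the duality, spelled out here for illustration), the group acts on the complexified coupling

\[
\tau=\frac{\theta}{2\pi}+\frac{4\pi i}{g^2},\qquad
\tau\mapsto\frac{a\tau+b}{c\tau+d},\qquad
\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in SL(2,\mathbb{Z}),
\]

while the electric and magnetic charges transform as a doublet, so that at \(\theta=0\) the generator \(S:\tau\mapsto -1/\tau\) exchanges \(g\leftrightarrow 4\pi/g\) and electric charges with monopoles.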

The whole idea that there is a gap between mathematics and physics within theoretical physics and if you're on the mathematical side of this gap, you should be disappointed, is utterly idiotic. There is absolutely no gap at all. There cannot be any such gap in physics. A discipline with such a gap separating mathematics from the rest couldn't be physics and shouldn't be called physics. Mathematics is penetrating theoretical physics – it's absolutely everywhere. Mathematics is what the veins of theoretical physics are made of, and the blood that circulates through these veins of theoretical physics is made of mathematics, too. What other veins and bloods inside theoretical physics would you prefer, Mr Weinstein? Some P.R. blood? It surely looks so. Weinstein apparently prefers the role of mathematics that it plays in junk sciences (like the climate hysteria): authors of junk science paper love to include some mathematical symbols and jargon to make their paper look more scientific in the eyes of those who have no chance to understand anything, but this mathematical masturbation and related jargon has actually no impact on any of the key conclusions. This is not science.

Everything else that Weinstein says in the portion I have watched is wrong, too. At 2:10, we learn that the infinities in quantum field theory of the 1940s etc. were just a technical problem and led to no revolution. Sorry but this is also a delusion. The infinities were a proof that physicists really didn't understand what the theory was – and Eric Weinstein still misunderstands it, as I mentioned. In the 1970s, it was realized that the infinite terms are artifacts of some short-distance physics that may be undetermined but whose long-distance limit may be universal or determined by several parameters. This understanding of the so-called "Renormalization Group" was a revolution. It was a revolution showing the urgent need to think deeply about the mathematical objects and operations we do in physics, a revolution showing the inseparability of mathematics and physics. It was a revolution both in "mathematics of theoretical physics" and in "theoretical physics" itself – I have already mentioned that those are really the same – and Ken Wilson got a well-deserved 1982 Nobel prize for the Renormalization Group, which has deeply affected both statistical physics (and thermodynamics) and quantum field theory, and showed their kinship, too. Many deep physicists quote the Renormalization Group as the most revolutionary insight of physics of the last 50 years. Weinstein's claim that "this was just a technicality and nothing revolutionary was there to learn" is both immensely ignorant and unbelievably arrogant. It's wrong and profoundly unethical for people who are this clueless about modern physics to pretend that they know something about modern physics.

The bulk of the actual value of theoretical physics as we know it today is composed of such profound principles and he seems to dismiss as well as misunderstand every single one of them.

Around 3:00, he suggests that string theory (and related insights) is "more sociological than physics". Holy cow, your brain is just a pile of šit. You're exactly on par of Ms Hossenfelder now. She got her physics PhD for her vagina and one is tempted to believe that Peter Thiel has teamed up with you for an analogous reason (believe me, this theory passes the first consistency check because two minuses give you a plus). Sorry but by saying something like that, you are falling to the deepest cesspools out there. If you believe that string theorists have been a secret cabal that actually doesn't do any solid physics and instead is playing P.R. games, you get it upside down. It's probably more comfortable for the likes of Mr Weinstein to believe such brutal conspiracy theories instead of accepting that he wouldn't have a chance to make it among the top theoretical physicists today – because not only he can't do research on string theory; he doesn't have a sufficient IQ to figure out whether the theory exists or is just a fairy-tale. I don't really believe you believe that conspiracy theory, Mr Weinstein, because you can't be that dumb. You know that it's complete nonsense – and on the contrary, it is šitty physics haters like you who are just doing cheap P.R. and impressing lots of extremely stupid people. This abuse of Big Think is an example of that.

Šit about contacts with experimental reality. String theory has understood everything we have experimentally observed – the GR and gauge theory limits of any viable stringy vacuum are enough for that. To go deeper, one simply needs to discuss things that cannot be reasonably expected to be observed in a foreseeable future. This fact is absolutely obvious – at least I had no doubts about it already when I was 15 or so – and if you have a psychological problem with the understanding of this trivial fact, you are the same kind of a moron as all those Horgans and others. I just hate when morons like that pretend to be smart.

I won't finish watching the last 70% of his rant because the first 30% has been more than enough for my adrenaline level. According to the headline in the HTML page at Big Think, I guess he must be saying that he's a new Einstein who finds a theory of everything by emitting these toxic, hostile, and idiotic lies for the moronic viewers, and I just don't have any duty to deal with this trash in its entirety. You suck, Eric.

We live in the era of post-truth. I may know, and every competent theoretical physicist probably knows, that Eric Weinstein is just a P.R. figure who is completely clueless about everything he comments upon – the summary of the last 40 years in physics and the conceptual foundations of modern theoretical physics. But I can't change the fact that whole powerful corporations keep on brainwashing millions of people with anti-scientific vitriol by fake physicists such as Mr Weinstein. And I think that because the theoretical physics big shots have been so decoupled from these wars, they have become almost completely incapable of influencing the discourse, too – I guess that their influence is even well below that of my blog.

by Luboš Motl (noreply@blogger.com) at November 13, 2017 10:08 PM

John Baez - Azimuth

Applied Category Theory at UCR (Part 3)

We had a special session on applied category theory here at UCR:

Applied category theory, Fall Western Sectional Meeting of the AMS, 4-5 November 2017, U.C. Riverside.

A bunch of people stayed for a few days afterwards, and we had a lot of great discussions. I wish I could explain everything that happened, but I’m too busy right now. Luckily, even if you couldn’t come here, you can now see slides of almost all the talks… and videos of many!

Click on talk titles to see abstracts. For multi-author talks, the person whose name is in boldface is the one who gave the talk. For videos, go here: I haven’t yet created links to all the videos.

Saturday November 4, 2017

9:00 a.m.
A higher-order temporal logic for dynamical systems (talk slides).
David I. Spivak, MIT.

10:00 a.m.
Algebras of open dynamical systems on the operad of wiring diagrams (talk slides).
Dmitry Vagner, Duke University
David I. Spivak, MIT
Eugene Lerman, University of Illinois at Urbana-Champaign

10:30 a.m.
Abstract dynamical systems (talk slides).
Christina Vasilakopoulou, UCR
David Spivak, MIT
Patrick Schultz, MIT

3:00 p.m.
Decorated cospans (talk slides).
Brendan Fong, MIT

4:00 p.m.
Compositional modelling of open reaction networks (talk slides).
Blake S. Pollard, UCR
John C. Baez, UCR

4:30 p.m.
A bicategory of coarse-grained Markov processes (talk slides).
Kenny Courser, UCR

5:00 p.m.
A bicategorical syntax for pure state qubit quantum mechanics (talk slides).
Daniel M. Cicala, UCR

5:30 p.m.
Open systems in classical mechanics (talk slides).
Adam Yassine, UCR

Sunday November 5, 2017

9:00 a.m.
Controllability and observability: diagrams and duality (talk slides).
Jason Erbele, Victor Valley College

9:30 a.m.
Frobenius monoids, weak bimonoids, and corelations (talk slides).
Brandon Coya, UCR

10:00 a.m.
Compositional design and tasking of networks.
John D. Foley, Metron, Inc.
John C. Baez, UCR
Joseph Moeller, UCR
Blake S. Pollard, UCR

10:30 a.m.
Operads for modeling networks (talk slides).
Joseph Moeller, UCR
John Foley, Metron Inc.
John C. Baez, UCR
Blake S. Pollard, UCR

2:00 p.m.
Reeb graph smoothing via cosheaves (talk slides).
Vin de Silva, Department of Mathematics, Pomona College

3:00 p.m.
Knowledge representation in bicategories of relations (talk slides).
Evan Patterson, Stanford University, Statistics Department

3:30 p.m.
The multiresolution analysis of flow graphs (talk slides).
Steve Huntsman, BAE Systems

4:00 p.m.
Data modeling and integration using the open source tool Algebraic Query Language (AQL) (talk slides).
Peter Y. Gates, Categorical Informatics
Ryan Wisnesky, Categorical Informatics


by John Baez at November 13, 2017 01:00 AM

November 12, 2017

Christian P. Robert - xi'an's og

10 great ideas about chance [book preview]

[As I happened to be a reviewer of this book by Persi Diaconis and Brian Skyrms, I had the opportunity (and privilege!) to go through its earlier version. Here are the [edited] comments I sent back to PUP and the authors about this earlier version. All in  all, a terrific book!!!]

The historical introduction (“measurement”) of this book is most interesting, especially its analogy of chance with length. I would have appreciated a connection earlier than Cardano, like some of the Greek philosophers, even though I gladly discovered there that Cardano was responsible for more than the closed-form solutions to the third-degree equation. I would also have liked to see more comments on the vexing issue of equiprobability: we all spend (if not waste) hours in the classroom explaining to (or arguing with) students why their solution is not correct. And they sometimes never get it! [And we sometimes get it wrong as well..!] Why is such a simple concept so hard to make explicit? In short, but this is nothing but a personal choice, I would have made the chapter more conceptual and less chronologically historical.

“Coherence is again a question of consistent evaluations of a betting arrangement that can be implemented in alternative ways.” (p.46)

The second chapter, about Frank Ramsey, is interesting, if only because it puts this “man of genius” back under the spotlight when he has all but been forgotten. (At least in my circles.) And for joining probability and utility together. And for postulating that probability can be derived from expectations rather than the opposite. Even though betting or gambling has a (negative) stigma in many cultures. At least gambling for money, since most of our actions involve some degree of betting. But not in a rational or reasoned manner. (Of course, this is not a mathematical but rather a psychological objection.) Further, the justification through betting is somewhat tautological in that it assumes probabilities are true probabilities from the start. For instance, the Dutch book example on p.39 produces a gain of .2 only if the probabilities are correct.

> gain=rep(0,1e4)
> for (t in 1:1e4){
+ p=rexp(3);p=p/sum(p)
+ gain[t]=(p[1]*(1-.6)+p[2]*(1-.2)+p[3]*(.9-1))/sum(p)}
> hist(gain)

As I made it clear at the BFF4 conference last Spring, I now realise I have never really adhered to the Dutch book argument. This may be why I find the chapter somewhat unbalanced with not enough written on utilities and too much on Dutch books.

“The force of accumulating evidence made it less and less plausible to hold that subjective probability is, in general, approximate psychology.” (p.55)

A chapter on “psychology” may come as a surprise, but I feel a posteriori that it is appropriate. Most of it is about the Allais paradox. Plus entries on Ellsberg’s distinction between risk and uncertainty, with only the former being quantifiable by “objective” probabilities. And on Tversky’s and Kahneman’s heuristics, and the framing effect, i.e., how the way propositions are expressed impacts the choices of decision makers. However, it leaves me unclear about the conclusion that the fact that people behave irrationally should not prevent a reliance on utility theory. Unclear because, when taking actions involving other actors, their potentially irrational choices should also be taken into account. (This is mostly nitpicking.)

“This is Bernoulli’s swindle. Try to make it precise and it falls apart. The conditional probabilities go in different directions, the desired intervals are of different quantities, and the desired probabilities are different probabilities.” (p.66)

The next chapter (“frequency”) is about Bernoulli’s Law of Large numbers and the stabilisation of frequencies, with von Mises making it the basis of his approach to probability. And Birkhoff’s extension which is capital for the development of stochastic processes. And later for MCMC. I like the notions of “disreputable twin” (p.63) and “Bernoulli’s swindle” about the idea that “chance is frequency”. The authors call the identification of probabilities as limits of frequencies Bernoulli‘s swindle, because it cannot handle zero probability events. With a nice link with the testing fallacy of equating rejection of the null with acceptance of the alternative. And an interesting description as to how Venn perceived the fallacy but could not overcome it: “If Venn’s theory appears to be full of holes, it is to his credit that he saw them himself.” The description of von Mises’ Kollectiven [and the welcome intervention of Abraham Wald] clarifies my previous and partial understanding of the notion, although I am unsure it is that clear for all potential readers. I also appreciate the connection with the very notion of randomness which has not yet found I fear a satisfactory definition. This chapter asks more (interesting) questions than it brings answers (to those or others). But enough, this is a brilliant chapter!

“…a random variable, the notion that Kac found mysterious in early expositions of probability theory.” (p.87)

Chapter 5 (“mathematics”) is very important [from my perspective] in that it justifies the necessity to associate measure theory with probability if one wishes to evolve further than urns and dice. To entitle Kolmogorov to posit his axioms of probability. And to define properly conditional probabilities as random variables (as my third-year students fail to realise). I enjoyed very much reading this chapter, but it may prove difficult for readers with little or no background in measure theory (although some advanced mathematical details have vanished from the published version). Still, this chapter constitutes a strong argument for preserving measure theory courses in graduate programs. As an aside, I find it amazing that mathematicians (even Kac!) had not at first realised the connection between measure theory and probability (p.84), but maybe not so amazing given the difficulty many still have with the notion of conditional probability. (I would also have liked to see some description of Borel’s paradox when it is mentioned on p.89.)

“Nothing hangs on a flat prior (…) Nothing hangs on a unique quantification of ignorance.” (p.115)

The following chapter (“inverse inference”) is about Thomas Bayes and his posthumous theorem, with an introduction setting the theorem at the centre of the Hume-Price-Bayes triangle. (It is nice that the authors include a picture of the original version of the essay, as the initial title is much more explicit than the published version!) A short coverage, in tune with the fact that Bayes only contributed a twenty-plus paper to the field. And to be logically followed by a second part [formerly another chapter] on Pierre-Simon Laplace, both parts focussing on the selection of prior distributions on the probability of a Binomial (coin tossing) distribution. Emerging into a discussion of the position of statistics within or even outside mathematics. (And the assertion that Fisher was the Einstein of Statistics on p.120 may be disputed by many readers!)

“So it is perfectly legitimate to use Bayes’ mathematics even if we believe that chance does not exist.” (p.124)

The seventh chapter is about Bruno de Finetti with his astounding representation of exchangeable sequences as being mixtures of iid sequences. Defining an implicit prior on the side. While the description sticks to binary events, it gets quickly more advanced with the notion of partial and Markov exchangeability. With the most interesting connection between those exchangeabilities and sufficiency. (I would however disagree with the statement that “Bayes was the father of parametric Bayesian analysis” [p.133] as this is extrapolating too much from the Essay.) My next remark may be non-sensical, but I would have welcomed an entry at the end of the chapter on cases where the exchangeability representation fails, for instance those cases when there is no sufficiency structure to exploit in the model. A bonus to the chapter is a description of Birkhoff’s ergodic theorem “as a generalisation of de Finetti” (p..134-136), plus half a dozen pages of appendices on more technical aspects of de Finetti’s theorem.

“We want random sequences to pass all tests of randomness, with tests being computationally implemented”. (p.151)

The eighth chapter (“algorithmic randomness”) comes (again!) as a surprise as it centres on the character of Per Martin-Löf, who is little known in statistics circles. (The chapter starts with a picture of him with the iconic Oberwolfach sculpture in the background.) Martin-Löf’s work concentrates on the notion of randomness, in a mathematical rather than probabilistic sense, and on the algorithmic consequences. I like very much the section on random generators. Including a mention of our old friend RANDU, the 16 planes random generator! This chapter connects with Chapter 4 since von Mises also attempted to define a random sequence. To the point that it feels slightly repetitive (for instance Jean Ville is mentioned in rather similar terms in both chapters). Martin-Löf’s central notion is computability, which forces us to visit Turing’s machine. And its role in the undecidability of some logical statements. And Church’s recursive functions. (With a link not exploited here to the notion of probabilistic programming, where one language is actually named Church, after Alonzo Church.) Back to Martin-Löf: I do not see how his test for randomness can be implemented on a real machine, as the whole test requires going through the entire sequence; since this notion connects with von Mises’ Kollektivs, I am missing the point! And then Kolmogorov is brought back with his own notion of complexity (which is also Chaitin’s and Solomonoff’s). Overall this is a pretty hard chapter both because of the notions it introduces and because I do not feel it is completely conclusive about the notion(s) of randomness. A side remark about casino hustlers and their “exploitation” of weak random generators: I believe Jeff Rosenthal has a similar if maybe simpler story in his book about Canadian lotteries.

“Does quantum mechanics need a different notion of probability? We think not.” (p.180)

The penultimate chapter is about Boltzmann and the notion of “physical chance”. Or statistical physics. A story that involves Zermelo and Poincaré, and Gibbs, Maxwell and the Ehrenfests. The discussion focuses on the definition of probability in a thermodynamic setting, opposing time frequencies to space frequencies. Which requires ergodicity and hence Birkhoff [no surprise, this is about ergodicity!] as well as von Neumann. This reaches a point where conjectures in the theory are still open. What I always (if presumably naïvely) find fascinating in this topic is the fact that ergodicity operates without requiring randomness: dynamical systems can enjoy the ergodic theorem while being completely deterministic. This chapter also discusses quantum mechanics, whose main tenet requires probability. Which needs to be defined, from a frequency or a subjective perspective. And the Bernoulli shift that brings us back to random generators. The authors briefly mention the Einstein-Podolsky-Rosen paradox, which sounds more metaphysical than mathematical in my opinion, although they go into great detail to explain Bell’s conclusion that quantum theory leads to a mathematical impossibility (but they lost me along the way). Except that we “are left with quantum probabilities” (p.183). And the chapter leaves me still uncertain as to why statistical mechanics carries the label statistical, as it does not seem to involve inference at all.

“If you don’t like calling these ignorance priors on the ground that they may be sharply peaked, call them nondogmatic priors or skeptical priors, because these priors are quite in the spirit of ancient skepticism.” (p.199)

And then the last chapter (“induction”) brings us back to Hume and the 18th Century, where somehow “everything” [including statistics] started! Except that Hume’s strong scepticism (or skepticism) makes induction seemingly impossible. (A perspective with which I agree to some extent, if not to Keynes’ extreme version, when considering for instance financial time series as stationary. And a reason why I do not see the criticisms contained in the Black Swan as pertinent because they savage normality while accepting stationarity.) The chapter rediscusses Bayes’ and Laplace’s contributions to inference as well, challenging Hume’s conclusion of the impossibility to infer. Even though the representation of ignorance is not unique (p.199). And the authors call again for de Finetti’s representation theorem as bypassing the issue of whether or not there is such a thing as chance. And escaping inductive scepticism. (The section about Goodman’s grue hypothesis is somewhat distracting, maybe because I have always found it quite artificial and based on a linguistic pun rather than a logical contradiction.) The part about (Richard) Jeffrey is quite new to me but ends quite abruptly! Similarly about Popper and his exclusion of induction. From this chapter, I appreciated very much the section on skeptical priors and its analysis from a meta-probabilist perspective.

There is no conclusion to the book, but to end up with a chapter on induction seems quite appropriate. (But there is an appendix as a probability tutorial, mentioning Monte Carlo resolutions. Plus notes on all chapters. And a commented bibliography.) Definitely recommended!

[Disclaimer about potential self-plagiarism: this post or an edited version will eventually appear in my Books Review section in CHANCE. As appropriate for a book about Chance!]


Filed under: Books, pictures, Statistics, University life Tagged: Abraham Wald, Alan Turing, Allais' paradox, Alonzo Church, Andrei Kolmogorov, BFF4, book review, Borel-Kolmogorov paradox, Brian Skyrms, Bruno de Finetti, Cardano's formula, CHANCE, David Hume, Dutch book argument, equiprobability, exchangeability, Frank Ramsey, gambling, Gerolamo Cardano, Henri Poincaré, heuristics, Jakob Bernoulli, John Maynard Keynes, John von Neumann, Karl Popper, Martin-Löf, measure theory, p-values, Persi Diaconis, Pierre Simon Laplace, PUP, Radon-Nikodym Theorem, randomness, Richard von Mises, sufficiency, Thomas Bayes, Venn diagram

by xi'an at November 12, 2017 11:17 PM

November 11, 2017

ZapperZ - Physics and Physicists

Lorentz Gamma Factor
Don Lincoln has another video related to Relativity. This time, he's diving into more details on the Lorentz Gamma factor. At the beginning of the video, he's referring to another video he made on the misleading concept of relativistic mass, which I've linked to.
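For reference (a standard definition, not a quote from the video), the gamma factor is

\[
\gamma=\frac{1}{\sqrt{1-v^2/c^2}},
\]

which is essentially 1 at everyday speeds and grows without bound as \(v\to c\); it is this factor, rather than a velocity-dependent "relativistic mass", that shows up in time dilation, length contraction, and the relativistic momentum \(p=\gamma m v\).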



Zz.

by ZapperZ (noreply@blogger.com) at November 11, 2017 05:09 PM

Tommaso Dorigo - Scientificblogging

Anomaly Reviewed On Physics Today
Another quite positive review of my book "Anomaly! Collider Physics and the Quest for New Phenomena at Fermilab" (which these days is 40% off at the World Scientific site I am linking to) has appeared in Physics Today this month.


by Tommaso Dorigo at November 11, 2017 01:09 PM

The n-Category Cafe

Topology Puzzles

Let’s say the closed unit interval \([0,1]\) maps onto a metric space \(X\) if there is a continuous map from \([0,1]\) onto \(X\). Similarly for the Cantor set.

Puzzle 0. Does the Cantor set map onto the closed unit interval, and/or vice versa?

Puzzle 1. Which metric spaces does the closed unit interval map onto?

Puzzle 2. Which metric spaces does the Cantor set map onto?

The first one is easy; the second two are well-known… but still, perhaps, not well-known enough!

The answers to Puzzles 1 and 2 can be seen as ‘versal’ properties of the closed unit interval and Cantor set — like universal properties, but without the uniqueness clause.

by john (baez@math.ucr.edu) at November 11, 2017 07:52 AM

The n-Category Cafe

Applied Category Theory Papers

In preparation for the Applied Category Theory special session at U.C. Riverside this weekend, my crew dropped three papers on the arXiv.

My student Adam Yassine has been working on Hamiltonian and Lagrangian mechanics from an ‘open systems’ point of view:

  • Adam Yassine, Open systems in classical mechanics.

    Abstract. Using the framework of category theory, we formalize the heuristic principles that physicists employ in constructing the Hamiltonians for open classical systems as sums of Hamiltonians of subsystems. First we construct a category where the objects are symplectic manifolds and the morphisms are spans whose legs are surjective Poisson maps. Using a slight variant of Fong’s theory of decorated cospans, we then decorate the apices of our spans with Hamiltonians. This gives a category where morphisms are open classical systems, and composition allows us to build these systems from smaller pieces.

He also gets a functor from a category of Lagrangian open systems to this category of Hamiltonian systems.

Kenny Courser and I have been continuing my work with Blake Pollard and Brendan Fong on open Markov processes, bringing 2-morphisms into the game. It seems easiest to use a double category:

Abstract. Coarse-graining is a standard method of extracting a simple Markov process from a more complicated one by identifying states. Here we extend coarse-graining to open Markov processes. An ‘open’ Markov process is one where probability can flow in or out of certain states called ‘inputs’ and ‘outputs’. One can build up an ordinary Markov process from smaller open pieces in two basic ways: composition, where we identify the outputs of one open Markov process with the inputs of another, and tensoring, where we set two open Markov processes side by side. In previous work, Fong, Pollard and the first author showed that these constructions make open Markov processes into the morphisms of a symmetric monoidal category. Here we go further by constructing a symmetric monoidal double category where the 2-morphisms are ways of coarse-graining open Markov processes. We also extend the already known ‘black-boxing’ functor from the category of open Markov processes to our double category. Black-boxing sends any open Markov process to the linear relation between input and output data that holds in steady states, including nonequilibrium steady states where there is a nonzero flow of probability through the process. To extend black-boxing to a functor between double categories, we need to prove that black-boxing is compatible with coarse-graining.

Finally, the Complex Adaptive Systems Composition and Design Environment project with John Foley of Metron Scientific Solutions and my students Joseph Moeller and Blake Pollard has finally given birth to a paper! I hope this is just the first; it starts laying down the theoretical groundwork for designing networked systems. John is here now and we’re coming up with a bunch of new ideas:

  • John Baez, John Foley, Joseph Moeller and Blake Pollard, Network models.

Abstract. Networks can be combined in many ways, such as overlaying one on top of another or setting two side by side. We introduce network models to encode these ways of combining networks. Different network models describe different kinds of networks. We show that each network model gives rise to an operad, whose operations are ways of assembling a network of the given kind from smaller parts. Such operads, and their algebras, can serve as tools for designing networks. Technically, a network model is a lax symmetric monoidal functor from the free symmetric monoidal category on some set to Cat, and the construction of the corresponding operad proceeds via a symmetric monoidal version of the Grothendieck construction.
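
The two ways of combining networks mentioned at the start of this abstract can be illustrated with a tiny Python toy; this is my own encoding (a network as a vertex set plus a set of undirected edges), not the formalism of the paper.

    # Toy networks: (vertices, edges), with each edge a frozenset {u, v}.
    def overlay(net1, net2):
        """Overlay two networks on the same vertex set by taking the union of their edges."""
        (v1, e1), (v2, e2) = net1, net2
        assert v1 == v2, "overlay assumes a shared vertex set"
        return v1, e1 | e2

    def juxtapose(net1, net2):
        """Set two networks side by side: disjoint union, tagging each vertex with 0 or 1."""
        (v1, e1), (v2, e2) = net1, net2
        tag_v = lambda vs, t: {(t, v) for v in vs}
        tag_e = lambda es, t: {frozenset((t, u) for u in e) for e in es}
        return tag_v(v1, 0) | tag_v(v2, 1), tag_e(e1, 0) | tag_e(e2, 1)

    g = ({1, 2, 3}, {frozenset({1, 2})})
    h = ({1, 2, 3}, {frozenset({2, 3})})
    print(overlay(g, h))      # one network carrying both edges
    print(juxtapose(g, h))    # six vertices and two disjoint edges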

I blogged about this last one here:

by john (baez@math.ucr.edu) at November 11, 2017 07:49 AM

The n-Category Cafe

The Polycategory of Multivariable Adjunctions

Adjunctions are well-known and fundamental in category theory. Somewhat less well-known are two-variable adjunctions, consisting of functors $f:A\times B\to C$, $g:A^{op}\times C\to B$, and $h:B^{op}\times C\to A$ and natural isomorphisms

$$C(f(a,b),c) \cong B(b,g(a,c)) \cong A(a,h(b,c)).$$

These are also ubiquitous in mathematics, for instance in the notion of closed monoidal category, or in the hom-power-copower situation of an enriched category. But it seems that only fairly recently has there been a wider appreciation that it is worth defining and studying them in their own right (rather than simply as a pair of parametrized adjunctions $f(a,-)\dashv g(a,-)$ and $f(-,b) \dashv h(b,-)$).
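
For instance (a standard example, spelled out here for concreteness rather than taken from the post), in a symmetric monoidal closed category $\mathcal{V}$ the tensor product and the internal hom form a two-variable adjunction, with $f = \otimes$, $g(a,c) = [a,c]$ and $h(b,c) = [b,c]$ in the notation above:

$$\mathcal{V}(a\otimes b, c) \cong \mathcal{V}(b, [a,c]) \cong \mathcal{V}(a, [b,c]).$$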

Now, ordinary adjunctions are the morphisms of a 2-category $Adj$ (with an arbitrary choice of direction, say pointing in the direction of the left adjoint), whose 2-cells are compatible pairs of natural transformations (a fundamental result being that either uniquely determines the other). It’s obvious to guess that two-variable adjunctions should be the binary morphisms in a multicategory of “$n$-ary adjunctions”, and this is indeed the case. In fact, Eugenia, Nick, and Emily showed that multivariable adjunctions form a cyclic multicategory, and indeed even a cyclic double multicategory.

In this post, however, I want to argue that it’s even better to regard multivariable adjunctions as forming a slightly different structure called a polycategory.

What is a polycategory? The first thing to say about it is that it’s like a multicategory, but it allows the codomain of a morphism to contain multiple objects, as well as the domain. Thus we have morphisms like $f: (A,B) \to (C,D)$. However, this description is incomplete, even informally, because it doesn’t tell us how we are allowed to compose such morphisms. Indeed, there are many different structures that admit this same description, but differ in the ways that morphisms can be composed.

One such structure is a prop, which John and his students have been writing a lot about recently. In a prop, we compose by simply matching domains and codomains as lists — given $f: (A,B) \to (C,D)$ and $g:(C,D) \to (E,F)$ we get $g\circ f : (A,B) \to (E,F)$ — and we can also place morphisms side by side — given $f:(A,B) \to (C,D)$ and $f':(A',B') \to (C',D')$ we get $(f,f') : (A,B,A',B') \to (C,D,C',D')$.

A polycategory is different: in a polycategory we can only “compose along single objects”, with the “leftover” objects in the codomain of $f$ and the domain of $g$ surviving into the codomain and domain of $g\circ f$. For instance, given $f: (A,B) \to (C,D)$ and $g:(E,C) \to (F,G)$ we get $g\circ_C f : (E,A,B) \to (F,G,D)$. This may seem a little weird at first, and the usual examples (semantics for two-sided sequents in linear logic) are rather removed from the experience of most mathematicians. But in fact it’s exactly what we need for multivariable adjunctions!

I claim there is a polycategory $MVar$ whose objects are categories and whose “poly-arrows” are multivariable adjunctions. What is a multivariable adjunction $(A,B) \to (C,D)$? There’s really only one possible answer, once you think to ask the question: it consists of four functors

$$f:C^{op}\times A\times B \to D \quad g:A \times B \times D^{op} \to C \quad h : A^{op}\times C\times D\to B \quad k : C\times D \times B^{op}\to A$$

and natural isomorphisms

$$D(f(c,a,b),d) \cong C(g(a,b,d),c) \cong B(b,h(a,c,d)) \cong A(a,k(c,d,b)).$$

I find this definition quite illuminating already. One of the odd things about a two-variable adjunction, as usually defined, is the asymmetric placement of opposites. (Indeed, I suspect this oddness may have been a not insignificant inhibitor to their formal study.) The polycategorical perspective reveals that this arises simply from the asymmetry of having a 2-ary domain but a 1-ary codomain: a “$(2,2)$-variable adjunction” as above looks much more symmetrical.

At this point it’s an exercise for the reader to write down the general notion of $(n,m)$-variable adjunction. Of course, a $(1,1)$-variable adjunction is an ordinary adjunction, and a $(2,1)$-variable adjunction is a two-variable adjunction in the usual sense. It’s also a nice exercise to convince yourself that polycategory-style composition “along one object” is also exactly right for multivariable adjunctions. For instance, suppose in addition to $(f,g,h,k) : (A,B) \to (C,D)$ as above, we have a two-variable adjunction $(\ell,m,n) : (D,E)\to Z$ with $Z(\ell(d,e),z) \cong D(d,m(e,z)) \cong E(e,n(d,z))$. Then we have a composite multivariable adjunction $(A,B,E) \to (C,Z)$ defined by $$C(g(a,b,m(e,z)),c) \cong Z(\ell(f(c,a,b),e),z) \cong A(a,k(c,m(e,z),b)) \cong \cdots$$
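
(For the record, my own working-out of how the displayed chain continues: substituting $d = m(e,z)$ into the $(2,2)$-variable isomorphisms and $d = f(c,a,b)$ into the $(2,1)$-variable ones gives the two remaining representables, so that the full chain of five reads

$$C(g(a,b,m(e,z)),c) \cong Z(\ell(f(c,a,b),e),z) \cong A(a,k(c,m(e,z),b)) \cong B(b,h(a,c,m(e,z))) \cong E(e,n(f(c,a,b),z)),$$

one representable for each of $C$, $Z$, $A$, $B$, $E$; treat this as a sketch of the exercise rather than the author’s formula.)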

It’s also interesting to consider what happens when the domain or codomain is empty. For instance, a $(0,2)$-variable adjunction $() \to (A,B)$ consists of functors $f:A^{op}\to B$ and $g:B^{op}\to A$ and a natural isomorphism $B(b,f(a)) \cong A(a,g(b))$. This is sometimes called a mutual right adjunction or dual adjunction, and such things do arise in plenty of examples. Many Galois connections are mutual right adjunctions between posets, and also for instance the contravariant powerset functor is mutually right adjoint to itself. Similarly, a $(2,0)$-variable adjunction $(A,B) \to ()$ is a mutual left adjunction $B(f(a),b) \cong A(g(b),a)$. Of course a mutual right or left adjunction can also be described as an ordinary adjunction between $A^{op}$ and $B$, or between $A$ and $B^{op}$, but the choice of which category to oppositize is arbitrary; the polycategory $MVar$ respects mutual right and left adjunctions as independent objects rather than forcing them into the mold of ordinary adjunctions.
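
(To spell out the powerset example, a standard fact added here for concreteness: the contravariant powerset functor $P : Set^{op} \to Set$ is mutually right adjoint to itself because a function $A \to P(B)$ and a function $B \to P(A)$ each amount to a relation $R \subseteq A \times B$, giving natural isomorphisms $Set(A, P(B)) \cong \{\,R \mid R \subseteq A\times B\,\} \cong Set(B, P(A))$.)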

More generally, a $(0,n)$-variable adjunction $() \to (A_1,\dots,A_n)$ is a “mutual right multivariable adjunction” between $n$ contravariant functors $f_i : A_{i+1}\times \cdots \times A_n \times A_1 \times \cdots \times A_{i-1}\to A_i^{op}$. Just as a $(0,2)$-variable adjunction can be forced into the mold of a $(1,1)$-variable adjunction by oppositizing one category, an $(n,1)$-variable adjunction can be forced into the mold of a $(0,n)$-variable adjunction by oppositizing all but one of the categories — Eugenia, Nick, and Emily found this helpful in describing the cyclic action. But the polycategory $MVar$ again treats them as independent objects.

What role, then, do opposite categories play in the polycategory $MVar$? Or put differently, what happened to the cyclic action on the multicategory? The answer is once again quite beautiful: opposite categories are duals. The usual notion of dual pair $(A,B)$ in a monoidal category consists of a unit and counit $\eta : I \to A\otimes B$ and $\varepsilon : B \otimes A \to I$ satisfying the triangle identities. This cannot be phrased in a mere multicategory, because $\eta$ involves two objects in its codomain (and $\varepsilon$ involves zero), whereas in a multicategory every morphism has exactly one object in its codomain. But in a polycategory, with this restriction lifted, we can write $\eta : () \to (A, B)$ and $\varepsilon : (B,A)\to ()$, and it turns out that the composition rule of a polycategory is exactly what we need for the triangle identities to make sense: $\varepsilon \circ_A \eta = 1_{B}$ and $\varepsilon \circ_{B} \eta = 1_A$.

What is a dual pair in $MVar$? As we saw above, $\eta$ is a mutual right adjunction $B(b,\eta_1(a)) \cong A(a,\eta_2(b))$, and $\varepsilon$ is a mutual left adjunction $B(\varepsilon_1(a),b) \cong A(\varepsilon_2(b),a)$. The triangle identities (suitably weakened up to isomorphism) say that $\varepsilon_2 \circ \eta_1 \cong 1_A$ and $\eta_2 \circ \varepsilon_1 \cong 1_A$ and $\varepsilon_1 \circ \eta_2 \cong 1_B$ and $\eta_1 \circ \varepsilon_2 \cong 1_B$; thus these two adjunctions are actually both the same dual equivalence $B\simeq A^{op}$. In particular, there is a canonical dual pair $(A,A^{op})$, and any other dual pair is equivalent to this one.
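
(Spelling out the canonical case, as I understand it: for the pair $(A, A^{op})$ one can take both functors underlying $\eta$ to be identities, so that the defining isomorphism of the mutual right adjunction is the tautological $A^{op}(b,a) \cong A(a,b)$, and dually for $\varepsilon$; the triangle identities then hold up to the evident canonical isomorphisms.)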

Let me say that again: in the polycategory $MVar$, opposite categories are duals. I find this really exciting: opposite categories are one of the more mysterious parts of category theory to me, largely because they don’t have a universal property in $Cat$; but in $MVar$, they do! To be sure, they also have universal properties in other places. In 1606.05058 I noted that you can give them a universal property as a representing object for contravariant functors; but this is fairly tautological. And it’s also well-known that they are duals in the usual monoidal sense (not our generalized polycategory sense) in the monoidal bicategory Prof; but this characterizes them only up to Morita equivalence, whereas the duality in $MVar$ characterizes them up to ordinary equivalence of categories. Of course, we did already use opposite categories in defining the notion of multivariable adjunction, so it’s not as if this produces them out of thin air; but I do feel that it does give an important insight into what they are.

In particular, the dual pair $(A,A^{op})$ allows us to implement the “cyclic action” on multivariable adjunctions by simple composition. Given a $(2,1)$-variable adjunction $(A,B) \to C$, we can compose it polycategorically with $\eta : () \to (A,A^{op})$ to obtain a $(1,2)$-variable adjunction $B \to (A^{op},C)$. Then we can compose that with $\varepsilon : (C^{op},C)\to ()$ to obtain another $(2,1)$-variable adjunction $(B,C^{op})\to A^{op}$. This is exactly the action of the cyclic structure described by Eugenia, Nick, and Emily on our original multivariable adjunction. (In fact, there’s a precise sense in which a cyclic multicategory is “almost” equivalent to a polycategory with duals; for now I’ll leave that as an exercise for the reader.)

Note the similarity to how dual pairs in a monoidal category shift back and forth: $Hom(A\otimes B, C) \cong Hom(B, A^\ast \otimes C) \cong Hom(B\otimes C^\ast, A^\ast)$. In string diagram notation, the latter is represented by “turning strings around”, regarding the unit and counit of the dual pair $(A,A^\ast)$ as a “cup” and “cap”. Pleasingly, there is also a string diagram notation for polycategories, in which dual pairs behave exactly the same way; we simply restrict the ways that strings are allowed to be connected together — for instance, no two vertices can be joined by more than one string. (More generally, the condition is that the string diagram should be “simply connected”.)

In future posts I’ll explore some other neat things related to the polycategory $MVar$. For now, let me leave you with some negative thinking puzzles:

  • What is a $(0,1)$-variable adjunction?
  • How about a $(1,0)$-variable adjunction?
  • How about a $(0,0)$-variable adjunction?

by shulman (viritrilbia@gmail.com) at November 11, 2017 04:36 AM

The n-Category Cafe

The 2-Chu Construction

Last time I told you that multivariable adjunctions (“polyvariable adjunctions”?) form a polycategory $MVar$, a structure like a multicategory but in which codomains as well as domains can involve multiple objects. This time I want to convince you that $MVar$ is actually (a subcategory of) an instance of an exceedingly general notion, called the Chu construction.

As I remarked last time, in defining multivariable adjunctions we used opposite categories. However, we didn’t need to know very much about the opposite of a category $A$; essentially all we needed is the existence of a hom-functor $\hom_A : A^{op}\times A \to Set$. This enabled us to define the representable functors corresponding to multivariable morphisms, so that we could then ask them to be isomorphic to obtain a multivariable adjunction. We didn’t need any special properties of the category $Set$ or the hom-functor $\hom_A$, only that each $A$ comes equipped with a map $\hom_A : A^{op}\times A \to Set$. (Note that this is sort of “half” of a counit for the hoped-for dual pair $(A,A^{op})$, or it would be if $Set$ were the unit object; the other half doesn’t exist in $Cat$, but it does once we pass to $MVar$.)

Furthermore, we didn’t need any cartesian properties of the product $\times$; it could just as well have been any monoidal structure, or even any multicategory structure! Finally, if we’re willing to end up with a somewhat larger category, we can give up the idea that each $A$ should be equipped with $A^{op}$ and $\hom_A$, and instead allow each object of our “generalized $MVar$” to make a free choice of its “opposite” and “hom-functor”.

This leads to the following construction. Let $M$ be a 2-multicategory, equipped with a chosen object called $\bot$. (We don’t need to assume anything about $\bot$.) We define a 2-polycategory $Chu(M,\bot)$ as follows. Its objects are pairs $(A^+,A^-)$ of objects of $M$ equipped with a map $\hom_A : (A^+,A^-) \to \bot$. I won’t write out the general form of a poly-arrow, but here are a couple special cases to get the idea:

  • A (1,1)-ary morphism $A\to B$ consists of morphisms $f:A^+ \to B^+$ and $g:B^-\to A^-$ in $M$, together with an isomorphism $\hom_B \circ_{B^+} f \cong \hom_A\circ_{A^-} g$ of morphisms $(A^+,B^-) \to \bot$.
  • A (2,1)-ary morphism $(A,B) \to C$ consists of morphisms $f:(A^+,B^+)\to C^+$, $g:(A^+,C^-) \to B^-$, and $h:(C^-,B^+) \to A^-$ along with isomorphisms $\hom_C \circ_{C^+} f \cong \hom_B \circ_{B^-} g \cong \hom_A \circ_{A^-} h$ of morphisms $(A^+,B^+,C^-)\to \bot$.

In other words, we take the notion of $(n,m)$-variable adjunction, apply $(-)^{op}$ to all the left adjoints, then replace each occurrence of an opposite category $A^{op}$ with $A^+$ and each occurrence of a non-opposite category $A$ with $A^-$; then write out the representable functors like $B(f(a),b)$ and $A(a,g(b))$ as composites of multi-arrows in $Cat$ with target $Set$ and finally interpret them in $M$ but with $Set$ replaced by $\bot$. Composition is defined just like in $MVar$. This is the Chu construction $Chu(M,\bot)$ (or perhaps the “2-Chu construction”, since we’re doing it 2-categorically with isomorphisms instead of equalities).
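
(For comparison, here is the classical, decategorified case, which is standard and not specific to this post: an object of $Chu(Set,K)$ is a pair of sets with a pairing $e_A : A^+\times A^- \to K$, and a $(1,1)$-ary morphism $A \to B$ is a pair of functions $f: A^+\to B^+$ and $g: B^-\to A^-$ with $e_B(f(a), b) = e_A(a, g(b))$ for all $a$ and $b$; this is exactly the strict, set-level shadow of the first bullet above.)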

It should now be clear, by construction, that $MVar$ embeds into $Chu(Cat,Set)$ by sending each category $A$ to its pair $(A^{op},A)$ with its “actual” opposite and its “actual” hom-functor $\hom_A : A^{op}\times A \to Set$. (Actually, we have to be a little careful with the directions of the 2-cells; the naive definition with this approach would lead to them pointing in the opposite direction from the usual. One option is to just redefine them by hand; another is to use $Chu(Cat,Set^{op})$ instead; a third is to define multivariable adjunctions to point in the direction of their right adjoints rather than their left ones.) This embedding is 2-polycategorically fully-faithful, i.e. induces an equivalence on categories of $(n,m)$-ary morphisms for all $n,m$.

Why is this interesting? Lots of reasons.

Firstly, it means that the notion of multivariable adjunction is not something we just wrote down because we saw it arising in examples. Not that there’s anything wrong with writing down a definition because we have examples of it, but it’s always reassuring if the definition also has a universal property. The Chu construction may seem ad hoc at first, but actually it is the “cofree” way to make a pointed symmetric multicategory into a polycategory with duals and counit (in the representable case, this is due to Dusko Pavlovic). And $MVar$ is (part of) what we get by applying this to $Cat$ pointed with $Set$; one might say that $Chu(Cat,Set)$ is what’s left of the construction of $MVar$ when we take away the mysterious fact that every category has an assigned opposite.

Secondly, if, like me, you never really understood the Chu construction (or maybe never even heard of it), but you are familiar with multivariable adjunctions, then now you have a new way to understand the Chu construction: it’s just an abstract generalization of $MVar$.

Thirdly, the objects of $Chu(Cat,Set)$ that aren’t in $MVar$ are not uninteresting: they are a kind of “polarized category”, with a “specified opposite” that may differ from their honest opposite. Related structures have been studied by Cockett-Seely and Mellies, among others, with semantics of “polarized linear logic” in mind. I haven’t seen anyone define before the exact sort of “polarized adjunction” that arises as the morphisms in $Chu(Cat,Set)$; and not really understanding polarized logic myself, I don’t know whether they are directly relevant to it. But they do seem quite suggestive to me. (This connection also explains my perhaps-odd-looking choice of notation $A^+ = A^{op}$ and $A^- = A$ rather than the other way around; the objects in $A^+$ are the “positive types” and those in $A^-$ are the “negative types”.)

Fourthly, Chu constructions are often more than just polycategories. If $M$ is not just a multicategory but a closed monoidal category, which moreover has pullbacks, then $Chu(M,\bot)$ is a representable polycategory — the analogue for polycategories of a multicategory that is representable by a tensor product (hence is a monoidal category). Surprisingly (at least, surprisingly if you’re used to thinking about PROPs instead of polycategories), a “representable polycategory” has two tensor products: poly-arrows $(A_1,\dots,A_n) \to (B_1,\dots,B_m)$ correspond to ordinary arrows $A_1\otimes \cdots\otimes A_n \to B_1 \parr \cdots \parr B_m$, where $\otimes$ and $\parr$ are two different monoidal structures. The two monoidal structures do have to be related, but the relationship takes the form of some somewhat odd-looking transformations $A\otimes (B\parr C) \to (A\otimes B)\parr C$ and $(A\parr B) \otimes C \to A \parr (B\otimes C)$; this is called a linearly distributive category.

A linearly distributive category that also has a dual for every object, in the polycategorical sense I mentioned last time, is called a star-autonomous category. Since $Chu(M,\bot)$ always has duals (the dual of $(A^+,A^-)$ is $(A^-,A^+)$), if it is linearly distributive then it is star-autonomous. In particular, this applies to $Chu(Cat,Set)$.

We can figure out what the tensor products in $Chu(M,\bot)$ look like by inspecting the universal property they would have to have. Recall that a (2,1)-ary morphism $(A,B) \to C$ in $Chu(M,\bot)$ consists of morphisms $f:(A^+,B^+)\to C^+$, $g:(A^+,C^-) \to B^-$, and $h:(C^-,B^+) \to A^-$ in $M$ along with isomorphisms between the three induced morphisms $(A^+,B^+,C^-)\to \bot$. A putative factorization $A\otimes B \to C$, on the other hand, would consist of morphisms $k:(A\otimes B)^+ \to C^+$ and $\ell : C^-\to (A\otimes B)^-$ and isomorphisms between two induced morphisms $((A\otimes B)^+,C^-) \to \bot$. It is easy to guess we should take $(A\otimes B)^+ = A^+ \otimes B^+$, so that $k$ is uniquely determined by $f$. For the others, note that when $M$ is closed, $g$ and $h$ correspond to maps $C^- \to [A^+,B^-]$ and $C^- \to [B^+,A^-]$, so $(A\otimes B)^-$ should involve $[A^+,B^-]$ and $[B^+,A^-]$ somehow. In fact it should be their pseudopullback over $[A^+\otimes B^+,\bot]$; the isomorphism involved in this pseudopullback encodes one of the two-variable adjunction isomorphisms, with what’s left being the induced ordinary adjunction isomorphism.
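
(Writing out that guess as a formula, in my own notation for the pseudopullback:

$$(A\otimes B)^+ = A^+ \otimes B^+, \qquad (A\otimes B)^- = [A^+,B^-] \times_{[A^+\otimes B^+,\,\bot]} [B^+,A^-],$$

where the two maps into $[A^+\otimes B^+,\bot]$ are given by composing with $\hom_B$ and $\hom_A$ respectively, and the pullback is taken in the up-to-isomorphism sense.)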

A similar, but simpler, argument shows that the unit object of $Chu(M,\bot)$ is $\mathbf{1}_\bot = (\mathbf{1},\bot,id_\bot)$, where $\mathbf{1}$ is the unit object of $M$ (if it exists). This has the universal property that it can be inserted anywhere in the domain list without changing the available morphisms, and in particular $(0,m)$-ary morphisms $() \to \Delta$ correspond to $(1,m)$-ary morphisms $\mathbf{1}_\bot \to \Delta$. Its dual $\mathbf{1}_\bot^\bullet = (\bot,\mathbf{1},id_\bot)$ has the dual property, and so in particular (0,0)-ary morphisms $()\to ()$ correspond to morphisms $\mathbf{1}_\bot \to \mathbf{1}_\bot^\bullet$ in $Chu(M,\bot)$, which simplify to just morphisms $\mathbf{1}\to \bot$ in $M$. This gives another solution to the puzzle at the end of my last post: since the embedding $MVar \to Chu(Cat,Set)$ is fully faithful on $(n,m)$-ary morphisms for $n+m > 0$, we should expect it to be fully faithful on $(0,0)$-ary morphisms too; but the $(0,0)$-ary morphisms in $Chu(Cat,Set)$ are morphisms $1\to Set$ in $Cat$.

Personally, I find this explanation of the tensor products in the Chu construction more transparent than any other I’ve heard, since it determines them by a universal property, rather than forcing us to guess what they should be. And it explains the odd condition in the definition of the star-autonomous $Chu(M,\bot)$ that $M$ should have pullbacks as well as being closed monoidal; it’s just to ensure that the polycategorical structure is representable. And it makes it much easier to construct the 2-categorical version of the Chu construction: the pseudopullback means that the tensor product in general will be only bicategorical, but by characterizing it by a polycategorical universal property (up to equivalence, rather than isomorphism) we can be guaranteed that it will be sufficiently coherent without having to delve into the definition of monoidal bicategory.

Finally, Chu constructions are interesting for other purposes. In particular, they often provide a “unified home for concrete dualities”. There is a nice explanation of this here for the case of $Chu(Set,2)$. You should read the whole thing, but the short version is that an “ambimorphic set” like $2$, which admits the structure of two different concrete categories $C$ and $D$, induces embeddings of $C$ and $D$ into $Chu(Set,2)$, and often the self-duality of $Chu(Set,2)$ restricts to a contravariant equivalence between $C$ and $D$. For instance, Stone duality can be represented in this way, while Pontryagin duality similarly arises from $Chu(Top,S^1)$.

The 2-Chu construction $Chu(Cat,Set)$ exhibits similar behavior, for instance involving Gabriel-Ulmer duality. Now the category $Set$ is an ambimorphic object of $Cat$: for instance, it carries the structure of both a category with finite limits and a locally finitely presentable category. This induces two embeddings of the 2-categories $Lex$ of finitely complete categories and $LFP$ of locally finitely presentable categories into $Chu(Cat,Set)$, and the self-duality of $Chu(Cat,Set)$ restricts to a contravariant equivalence $Lex^{op}\simeq LFP$.

I’m surprised that I’ve never seen the 2-Chu construction $Chu(Cat,Set)$ written down anywhere; it seems like a very natural categorification of $Chu(Set,2)$, and the latter has been studied a lot by many people. The only reference I’ve been able to find is a discussion on the categories mailing list from 2006 involving Vaughan Pratt, Michael Barr, and our own John Baez; you can comb through the archives to follow the thread here and here. Was this ever pursued further by anyone?

by shulman (viritrilbia@gmail.com) at November 11, 2017 04:35 AM

November 10, 2017

Emily Lakdawalla - The Planetary Society Blog

Reminder: The Giant Magellan Telescope is going to be awesome
The GMT will characterize Earth-size exoplanets' atmospheres, looking for compounds that indicate the presence of life.

November 10, 2017 12:00 PM

Lubos Motl - string vacua and pheno

Japanese planned ILC collider shrinks to half
In 2013, I discussed the Japanese competition choosing the host of the International Linear Collider

The folks in the Sefuri mountains who created this catchy music video lost, and Tohoku won instead – those had more credible, respected, and boring physicists behind them, not to mention a five-times-longer video with 20 times fewer views. ;-)

Yesterday, Nature announced that physicists shrink plans for next major collider. How much did it shrink? A lot. By 50%.

In March 2013, the electron-positron collider was supposed to collide 250+250=500 GeV beams, and later even 500+500=1000 GeV beams, in a 33.5-kilometer long tunnel.

Well, the length of the tunnel has now been shrunk to some 20 km, and the colliding particles should only have 125+125=250 GeV. The price would shrink by 40% or so, to some $7 billion in total.

The total center-of-mass energy is a special number because this energy is enough for Higgs boson pair production – the Higgs boson mass happens to be 125 GeV, too. I am not sure whether they pay any attention to this special phenomenon occurring at this energy, or whether they will try to amplify this particular pair-creation event or allow it. The collider should be sufficiently energetic to probe the energies slightly below as well as slightly above this threshold, I think.

Update: Tristan is telling me that the pair production "hh" is only visible at higher energies and this one relies on "Zh" only.

Note that the total energy is small compared to the LHC's 6,500+6,500=13,000 GeV but the electron-positron collisions are much cleaner and new physics should be more visible.

Well, this downward trend is disappointing. The Chinese are great but, in the grand scheme of things, because of the political systems etc., I would probably still prefer such big projects to be run by the Japanese.

This collider could probe the properties of the Higgs boson precisely and in detail, but I am not sure whether that task by itself would make me excited. It sounds like physics of the sixth decimal place to me.

by Luboš Motl (noreply@blogger.com) at November 10, 2017 09:04 AM

November 09, 2017

John Baez - Azimuth

Biology as Information Dynamics (Part 3)

On Monday I’m giving this talk at Caltech:

Biology as information dynamics, November 13, 2017, 4:00–5:00 pm, General Biology Seminar, Kerckhoff 119, Caltech.

If you’re around, please check it out! I’ll be around all day talking to people, including Erik Winfree, my graduate student host Fangzhou Xiao, and other grad students.

If you can’t make it, you can watch this video! It’s a neat subject, and I want to do more on it:

Abstract. If biology is the study of self-replicating entities, and we want to understand the role of information, it makes sense to see how information theory is connected to the ‘replicator equation’ — a simple model of population dynamics for self-replicating entities. The relevant concept of information turns out to be the information of one probability distribution relative to another, also known as the Kullback–Leibler divergence. Using this we can get a new outlook on free energy, see evolution as a learning process, and give a clearer, more general formulation of Fisher’s fundamental theorem of natural selection.
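
To make the connection between the replicator equation and relative information concrete, here is a small Python sketch; it is my own toy (with made-up fitness values), not code from the talk. It integrates the replicator equation $\dot p_i = p_i(f_i - \bar f)$ for constant fitnesses and prints the relative information $D(q\|p) = \sum_i q_i \ln(q_i/p_i)$ of the reference distribution $q$ (concentrated on the fittest type) relative to the population $p$, which you can watch decrease along the trajectory.

    import numpy as np

    f = np.array([1.0, 2.0, 3.0])   # fitness of each replicator type (made-up numbers)
    p = np.array([0.5, 0.3, 0.2])   # initial population distribution
    q = np.array([0.0, 0.0, 1.0])   # reference distribution: the fittest type

    def relative_information(q, p, eps=1e-12):
        """Kullback-Leibler divergence D(q || p), summing only where q > 0."""
        mask = q > 0
        return float(np.sum(q[mask] * np.log(q[mask] / np.maximum(p[mask], eps))))

    dt = 0.01
    for step in range(2001):
        if step % 500 == 0:
            print(f"t = {step*dt:5.2f}  p = {np.round(p, 4)}  D(q||p) = {relative_information(q, p):.4f}")
        mean_fitness = float(f @ p)
        p = p + dt * p * (f - mean_fitness)   # Euler step of the replicator equation
        p = np.maximum(p, 0.0)
        p /= p.sum()                          # renormalize to keep p a distribution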


by John Baez at November 09, 2017 04:14 PM

ZapperZ - Physics and Physicists

SLAC's LCLS Upgrade and What It Might Mean To You
Just in case you don't know what's going on at SLAC's LCLS and the upcoming upgrade to bring it to LCLS-II, here's a CNET article meant for the general public telling what they have been up to and what they hope to accomplish with the upgrade.

Keep in mind that LCLS is a "light source", albeit a unique, highly intense x-ray light source. SLAC is also part of the DOE's US National Laboratories, which include Brookhaven, Fermilab, Berkeley, Argonne, Los Alamos, .... etc.

Zz.

by ZapperZ (noreply@blogger.com) at November 09, 2017 03:24 PM

Robert Helling - atdotde

Why is there a supercontinent cycle?
One of the most influential books of my early childhood was my "Kinderatlas".
There were many things to learn about the world (maps actually made up only the last third of the book), and for example I blame my fascination with scuba diving on this book. Also, last year, when we visited the Mont-Doré in Auvergne and I had to explain to my kids how volcanoes are formed, to make them forget how many stairs still lay ahead of them to the summit, I did that while mentally picturing the pages in that book about plate tectonics.


But there is one thing about tectonics that has been bothering me for a long time, and I still haven't found a good explanation for it (or at least an acknowledgement that there is something to explain): since the days of Alfred Wegener we know that the jigsaw-puzzle pieces of the continents fit together in such a way that geologists believe that some hundred million years ago they were all connected as a supercontinent, Pangea.
[Image: Pangea animation 03.gif (original upload by en:User:Tbower, USGS animation A08, Public Domain)]

In fact, that was only the last in a series of supercontinents that keep forming and breaking up in the "supercontinent cycle".
[Image: Platetechsimple.png (by SimplisticReps, own work, CC BY-SA 4.0)]

So here is the question: I am happy with the idea of several (say $N$) plates, roughly containing a continent each, that are floating around on the magma, driven by all kinds of convection processes in the liquid part of the earth. They are moving around in a pattern that looks to me pretty chaotic (in the non-technical sense), and of course for random motion you would expect that from time to time two of them collide and then maybe stick together for a while.

Then it would be possible that a third plate also collides with those two, but that would be a coincidence (like random lines: two random lines typically intersect, but three lines typically intersect only in pairs, not in a triple intersection). But to form a supercontinent, you need all $N$ plates to miraculously collide at the same time. This order-$N$ process seems highly unlikely if the motion is random, let alone the fact that it seems to repeat. So this motion cannot be random (yes, Sabine, this is a naturalness argument). This needs an explanation.
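
To put a toy number on that intuition, here is a deliberately crude Python simulation; it is entirely my own construction, with arbitrary parameters. $N$ plate centres perform independent random walks on a circle, and we record how often all of them happen to lie within the same half of the circle, a crude stand-in for "all plates meeting on one side of the earth". The fraction drops rapidly with $N$, which is the point of the order-$N$ coincidence.

    import math, random

    def all_in_half_circle(angles):
        """True if all the angles fit inside some semicircle (largest gap exceeds pi)."""
        a = sorted(angles)
        gaps = [(a[(i + 1) % len(a)] - a[i]) % (2 * math.pi) for i in range(len(a))]
        return max(gaps) > math.pi

    def clustered_fraction(n_plates, steps=100_000, step_size=0.01, seed=1):
        random.seed(seed)
        angles = [random.uniform(0, 2 * math.pi) for _ in range(n_plates)]
        hits = 0
        for _ in range(steps):
            angles = [(a + random.gauss(0, step_size)) % (2 * math.pi) for a in angles]
            hits += all_in_half_circle(angles)
        return hits / steps

    for n in (2, 4, 6, 8):
        print(n, round(clustered_fraction(n), 4))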

So, why, every few hundred million years, do all the land masses of the earth assemble on one side of the earth?

One explanation could, for example, be that during those times the center of mass of the earth is not at the symmetry center, so the water of the oceans flows to one side of the earth and reveals the seabed on the opposite side. Then you would have essentially one big island. But this seems not to be the case, as the continents (those parts that are above sea level) appear to be stable on much longer time scales. It is not that the seabed comes up on one side while the land on the other goes under water; the land masses actually move around to meet on one side.

I have already asked this question whenever I ran into people with a geosciences education, but it is still open (and I have to admit that in a non-zero number of cases I failed even to make it clear that an $N$-body collision is what needs an explanation). But I am sure you, my readers, know the answer or, even better, can come up with one.

by Robert Helling (noreply@blogger.com) at November 09, 2017 09:35 AM

November 08, 2017

Clifford V. Johnson - Asymptotia

A Sighting!

I went a bit crazy on social media earlier today. I posted this picture and: There’s been a first sighting!! Aaaaaaaaaarrrrrrrrgh! It EXISTS! It actually exists! In a bookstore (Cellar Door Books in Riverside)! (But believe it or not a copy has not got to me yet. Long story.) http://thedialoguesbook.com … Click to continue reading this post

The post A Sighting! appeared first on Asymptotia.

by Clifford at November 08, 2017 06:05 AM

November 07, 2017

Emily Lakdawalla - The Planetary Society Blog

Planetary Society asteroid hunter snags picture of interstellar visitor ʻOumuamua
Asteroid hunters named the first-known interstellar asteroid ʻOumuamua as a nod to its scout-like traits.

November 07, 2017 07:29 PM

CERN Bulletin

Staff Association membership is free of charge for the rest of 2017

Starting from September 1st, membership of the Staff Association is free for all new members for the period up to the end of 2017.

This is to allow you to participate in the Staff Council elections, by voting and electing your representatives.

Do not hesitate any longer; join now!

November 07, 2017 10:11 AM

CERN Bulletin

2017 Elections to Staff Council

Make your voice heard, support your candidates!

After verification by the Electoral Commission, all candidates for the elections to the Staff Council have been registered.

It is now up to you, members of the Staff Association, to vote for the candidate(s) of your choice.

We hope that many of you will vote and elect the new Staff Council! By doing so, you can support and encourage the women and men who will represent you over the next two years.

We are using an electronic voting system; all you need to do is click the link below and follow the instructions on the screen.

https://ap-vote.web.cern.ch/elections-2017

The deadline for voting is Monday, 13 November at midday (12 pm).

Elections Timetable

  • Monday 13 November, at noon: closing date for voting
  • Tuesday 21 November and Tuesday 5 December: publication of the results in Echo
  • Monday 27 and Tuesday 28 November: Staff Association Assizes
  • Tuesday 5 December (afternoon): first meeting of the new Staff Council and election of the new Executive Committee

The voting procedure will be monitored by the Election Committee, which is also in charge of announcing the results in Echo on 21 November and 5 December.

Candidates for the 2017 Elections

November 07, 2017 10:11 AM

CERN Bulletin

Conference

The natural riches of the Vuache
from knowledge to management for sustainable conservation

Thursday 9 November at 6.30 p.m.
CERN Meyrin, Council Chamber (503-1-001)
Talk followed by a drinks reception

Jacques Bordon
Vice-president of the Syndicat du Vuache

The Vuache is a modest mountain of the Savoyard foothills, at the gates of the large French-Vaud-Geneva conurbation. Explored by naturalists for two centuries, it turns out to be remarkably rich in flora and fauna. Long used by foresters and farmers, and coveted by tourism promoters, it has faced many changes in human practices. Its natural habitats have evolved a great deal, with the forest retreating and then advancing again.

It is a place of discovery and recharging for the neighbouring city dwellers. For nearly thirty years now, local associations and authorities have become aware of the richness, but also of the fragility, of these ecosystems, and numerous protection and conservation-management measures have been put in place, in particular by the Syndicat du Vuache. This talk will take stock of the changes observed (new and vanished species) and of some results of the actions undertaken with a view to long-term conservation and biodiversity.

For more information and requests for access to the CERN site, please contact the CERN Staff Association: staff.association@cern.ch / +41 22 766 37 38

November 07, 2017 10:11 AM

CERN Bulletin

And after the Elections!

What happens to the newly elected, and the re-elected delegates after the election of the new Staff Council?

The outgoing Staff Council is responsible for preparing the new staff representatives to take on their new roles. To do this, information days are organized in the form of assizes. This year they will take place on the morning of November 27, as well as on November 28, bringing together the new Staff Council.

These days mainly aim to inform delegates about the role of the Staff Association (SA) at CERN: the bodies, committees, forums, etc. with which the SA interacts; how the work of the SA is organized; and the issues on which it works. They serve as a kind of "induction".

Informing, but not only that!

The assizes also aim to integrate the newcomers: inviting them to discover the various internal committees of the SA, explaining to them the challenges ahead, and defining the action plan for 2018. New delegates who wish can also be given a godparent (a kind of mentor). In short, the assizes help them find their place.

Alongside all this information and the presentations of the internal committees, training possibilities are also mentioned; for example, the internal training "How to live one’s mission as delegate", whose main objective is to help prepare new delegates to perform their mission and to assume their new status.

Finally, the question of forming the new Executive Committee (EC) will be dealt with, in view of the election of the new EC in early December. But this will be the topic of an upcoming article in which the Executive Committee will be presented together with its action plan for 2018, as well as the challenges that the Staff Association will face in the coming year.

What is the Staff Council?

The Staff Council is the supreme representative body of the CERN staff and pensioners. It is composed of staff delegates, who represent ordinary and associated members, and retired delegates, who represent retired members. These representatives are elected by the members of the Staff Association for a two-year mandate.

The Council is competent, among other things, to:

  • Determine the broad lines of the Association’s policy;
  • Supervise its implementation by the Executive Committee;
  • Elect the Executive Committee;
  • Put to a referendum a decision by the General Assembly or any matter of general interest to the staff;
  • Appoint staff representatives to bodies in which such representation is foreseen;
  • Appoint Association representatives in each Department;
  • Set up commissions and working groups;

More information on: http://staff-association.web.cern.ch/bodies/staffcouncil

November 07, 2017 10:11 AM

November 06, 2017

CERN Bulletin

Orienteering Club

ORIENTEERING RACE
Autumn Cup Final

The CERN orienteering club (COC Genève) organized its last open race of the season on Saturday 4 November at the site known as Les Terrasses de Genève (74). This 9th event, run in the form of a one-man relay, closed the Geneva Autumn Cup, whose winners are:

Long technical course: 1. Julien Vuitton (COC Genève), 2. Berni Wehrle (COC Genève), 3. Christophe Vuitton (COC Genève).

Medium technical course: 1. Vladimir Kuznetsov (Lausanne-Jorat), 2. J.-Bernard Zosso (COC Genève), 3. Laurent Merat (O’Jura).

Short technical course: 1. Thibault Rouiller (COC Genève), 2. (tied) Lennart Jirden (COC Genève) and Katya Kuznetsova (Lausanne-Jorat).

Medium easy course: 1. Tituan Barge (COC Genève), 2. Tatiana Kuznetsova (Lausanne-Jorat), 3. Claire Rousselot (Lausanne-Jorat).

Short easy course: 1. Loic Barge (COC Genève), 2. Lucia Bobin (O’Jura), 3. Hypolite Bobin (O’Jura).

The club president, Lennart Jirden, congratulated the winners and all participants at the prize-giving, and thanked the organizers of the various events. The youngest runners were encouraged to build on the good results they achieved during this cup. The president also saluted the excellent performances achieved this year by Julien Vuitton: European youth relay champion and 4th over the long distance (EYOC, Slovakia).

A friendly potluck buffet rounded off this final race.

http://club-orienteering.web.cern.ch/fr

November 06, 2017 04:11 PM

Clifford V. Johnson - Asymptotia

Almost Time…

In another universe, this post has me holding the physical book, finally, after 18 years. In this universe however, there have been delays, and I'm holding this card showing the cover instead. But in 11 days let's see! Pre-orders are enormously helpful. If you've already got a copy, thanks. But it's gift-giving season coming up, so... Or just please share this post to others who might be interested in science and/or graphic books! Thanks. Ordering info, a trailer, and ten sample pages are here: http://thedialoguesbook.com

-cvj Click to continue reading this post

The post Almost Time… appeared first on Asymptotia.

by Clifford at November 06, 2017 03:33 PM

Tommaso Dorigo - Scientificblogging

Things That Can Decay To Boson Pairs
Writing a serious review of research in particle physics is a refreshing job - all the things that you already knew on that specific topic once sat on a fuzzy cloud somewhere in your brain, and now find their place in a tidily organized space, with clear interdependence among them. That's what I am experiencing as I progress with a 60-pageish thing on hadron collider searches for diboson resonances, which will appear sometime next year in a very high impact factor journal.

read more

by Tommaso Dorigo at November 06, 2017 12:59 PM

Emily Lakdawalla - The Planetary Society Blog

Sharing Space in Australia
The Planetary Society’s 2017 journey to Australia expanded our perspective, advocacy and global community. It was rich with reminders close to Carl Sagan’s heart: We are all connected through time, humankind, and our origins in the stars.

November 06, 2017 12:00 PM

November 04, 2017

Lubos Motl - string vacua and pheno

Allanach, You apply for a $50 billion collider to find \(Z'\) or leptoquarks
Assertive implications of an LHCb beauty-muon deficit

How many articles about flavor physics have you published in the Grauniad? Well, it turns out that Dr Allanach and You have written the essay
Anomalous bottoms at CERN and the case for a new collider
in which they derive an appealing interpretation from an anomaly seen by the LHCb Collaboration. As I discussed in March and April, the LHCb detector insists on a deficit of \(B\) mesons decaying to \(K^* \mu^+\mu^-\). My previous texts are somewhat technical, Allanach and You are a bit less technical, and Futurism.com is arguably even more popular.



As Allanach and You put it, if you build 16,000 LHC colliders, you not only pay $160 trillion but you also get approximately one collider in which the agreement with the Standard Model in this single quantity is as bad as what this actual single LHC collider of ours shows (or worse). Since I mentioned the money, I can't resist mentioning that the money that will evaporate when the Bitcoin bubble bursts is enough for a dozen LHC colliders – and even more if there is additional growth before it bursts. ;-)

OK, there's some 4-sigma deficit.
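If you want to check how the 1-in-16,000 figure above translates into standard deviations, here is a minimal sketch of my own (not from the Guardian piece), assuming a Gaussian tail and that scipy is available:

import scipy.stats as st

p = 1.0 / 16000.0                       # the chance, quoted above, that a single collider fluctuates this badly
sigma_two_sided = st.norm.isf(p / 2.0)  # two-sided Gaussian significance
sigma_one_sided = st.norm.isf(p)        # one-sided Gaussian significance
print(round(sigma_two_sided, 2), round(sigma_one_sided, 2))  # roughly 4.0 and 3.8

So the "some 4-sigma" statement and the 1-in-16,000 statement are just two ways of quoting the same deviation.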




Well, why did they write this text three days ago and not back in March and April, as this blog did? It's because they are also promoting their two-week-old hep-ph preprint (with Gripaios)
The Case for Future Hadron Colliders From \(B \to K^{(*)} \mu^+ \mu^-\) Decays
What do they conclude?




They study the two most obvious explanations for the anomaly they can think of – assuming that the deviation is more than a random fluke or bad luck: new \(Z'\)-bosons, similar to the \(Z\)-boson but heavier; and leptoquarks, new elementary particles whose quantum numbers coincide with those of a bound state of a lepton and a quark.

In their paper, some of the parameter space is covered by the luminosity-upgraded LHC. For some other pieces of the parameter space, they want you to upgrade the LHC to \(33\TeV\), a more than doubled energy. But they really want you to pay for a \(100\TeV\) collider such as the FCC-\(hh\), which would be nice. You really need to produce the \(Z'\)-bosons or leptoquarks directly to know what is responsible for the deviation.



How do the LHC girls understand the bottom quark? It's the fourth event of its kind that they experience with a boy. I hope no one misinterprets this comment as sexist; it's supposed to be sexual. ;-)

In the Guardian, Allanach and You say that the situation could be analogous to the year 2011, when some hints began to emerge that the quasi-light Higgs boson existed, but before it was discovered. Well, maybe. A big difference is that the Higgs boson had to exist – because of solid arguments needed for our theories (explaining well-established important particle physics phenomena) to be consistent. So far, fishy LHCb deviations are the only reasons to believe that \(Z'\)-bosons or leptoquarks should exist.

But if you have a spare $50 billion or a few million Bitcoins (hello, Satoshi) for a new collider, you can use the donation buttons in the right column. If I had to pay for it purely because of this LHCb anomaly, I would at least wait to see whether the deviation, counted in standard deviations, grows when the data set is doubled after 2017. But maybe they already know that it will grow – the papers aren't out yet, I think.

by Luboš Motl (noreply@blogger.com) at November 04, 2017 05:34 PM

November 03, 2017

ZapperZ - Physics and Physicists

Muons, The Little Particles That Could
These muons are becoming the fashionable particles of the moment.

At the beginning of this year (2017), I mentioned the use of muon tomography to image the damaged core at Fukushima. Now, muons are making headlines in two separate applications.

The first is the use of cosmic-muon imaging, which discovered hidden chambers inside Khufu's Pyramid at Giza. The second is the further use of muons to safely probe the status of nuclear waste.

The comment I wrote in the first link still stands. We needed to know the fundamental properties of muons FIRST before we could actually use them in all these applications. And that fundamental knowledge came from high-energy/elementary particle physics.

So chalk this up to another application of such an esoteric field of study.

Zz.

by ZapperZ (noreply@blogger.com) at November 03, 2017 02:22 PM

Lubos Motl - string vacua and pheno

HEP: what was written, cited in 2017
TV: Don't forget that aside from S11E06 episode of The Big Bang Theory, the S01E02 episode of Young Sheldon finally aired yesterday – it's full of cool boy genius stuff – Sheldon was using Carnegie science to find friends.
If you search INSPIRE, a particle physics database, for "find topcite 50+ and date 2017", you will get 102 hits – papers timestamped as 2017 that have already earned at least 50 citations. An unusually high percentage are experimental papers.

Various papers were published by the LHC collaborations – ATLAS, CMS, LHCb (various properties of mesons) – as well as LIGO and direct searches for dark matter such as XENON1T. LIGO has found the gravitational waves – from black holes and kilonovae – but otherwise the results of all these experiments have confirmed the null hypotheses.

The number of papers submitted to hep-th (pure particle physics theory) in this list is just 15. They include some papers about the microscopic information of black holes, soft hair, matrices in them, as well as the SYK model – a microrevolution of recent years – and Erik Verlinde's irritating abolition of dark matter. Except for SYK, these or similar papers have been covered in various TRF blog posts.




There are 26 papers in the list labeled as hep-ph. Some of those are still about the \(750\GeV\) diphoton excess that disappeared in the previous year. Most of the others focus on fermions' mass matrices and mixing – neutrino mass matrices, lepton universality and flavor physics in general, anomalies seen in mesons. Some of them are about radions, others propose new experiments – search for dark matter and colliders.




I think that the apparent suppression of the brilliant and creative theoretical work is disappointing and worrisome. There is nothing wrong about experiments. We've been looking forward to many of these experiments and most of us expected some new physics, which really hasn't emerged. But the confirmations of the status quo were likely enough, and it's clear that people must refocus much of their attention on theory, where the expenses for "more sophisticated models" don't grow as rapidly as the costs of increasingly penetrating experiments.

If you search for "find topcite 500+ and date 2007", i.e. a 10-year-older search with a 10-times-higher citation requirement, you find 85 papers. Among them 16 papers – a higher percentage than in the 2017 search at the top – are denoted as hep-th. Those are papers on AdS/CFT hydrodynamics, the membrane minirevolution, the Higgs as inflaton, transcendentality in amplitudes, flux compactifications, and also some MOND. Although the black hole hair papers of recent years are deep, I feel that the 10-year-old papers are more intellectually diverse and generally used more sophisticated and beautiful mathematics. The 24 hep-ph papers in that list are about jets and conventional topics, but some of those were about the MSSM, a topic that isn't seen in the 2017 search.

At many moments I remember from the past, there were some beacons – papers famous enough that e.g. young people could elaborate upon them and get interesting enough work, with some chance of surpassing the original paper they started with. The beacons were uncountable e.g. during the superstring revolutions but even afterwards, one went through the AdS/CFT wave and subwaves, BMN and \(pp\)-waves, the cosmological constant in string theory, old matrix models, the twistor minirevolution with various subwaves and amplitude-industry ramifications, and others. I am not sure it's still the case. The "cutting edge" with its fashionable topics almost universally agreed to be "cool" has almost disappeared. I was trying to imagine how a brilliant graduate student actually feels these days.

She does some physics and may keep doing so. But imagine she wants to show that she is really on par with the likes of Witten, or perhaps just a level or two beneath Witten. Witten has accumulated over 130,000 citations. How much should you expect before your PhD defense? 50? 100? Isn't it too much to ask? It seems extremely difficult these days for a new person to write a paper that gets over 10 citations in a year. Is one supposed to write 10,000 papers then?

I would bet that citation counts like Witten's 130,000 won't be beaten by anybody for a very, very long time if ever. It would be great to be proven wrong – and soon.

If one accepts the observation that things have soured, what is the reason? Is it completely natural? Have people run out of ideas and/or excitement naturally? I don't really believe it. A reason why I don't believe it is that there have been lots of these minirevolutions that have still left some interesting enough projects that should be continued. I do think that the fading of the cutting-edge waiting for the brilliant graduate student is a result of the ideological atmosphere in the environment that surrounds the HEP research community.

To simplify things just a little bit, I do think that brilliance has de facto become politically incorrect and it is not being rewarded, at least not among the generation of people who are graduate students today. This blog has recently – and often – discussed some insanities that are far from the HEP research community. Like the claims that the Pythagorean theorem and \(\pi\) are examples of white supremacy. Similar lunacies are extreme and they're signs of what's wrong in the broader society, not the HEP research community per se.

But there are lots of similar, much less extreme, yet still brutally harmful changes to the atmosphere going on within universities, even their physics departments, and sometimes even fundamental-particle physics groups. Many of those have lost their self-confidence and their natural happiness about the insights and about their own and their colleagues' natural brilliance. Some top people have been so silent about – so tolerant of – crap like Sabine Hossenfelder's "physics is lost in math and gone astray" that they have really surrendered. I think that pseudointellectual junk such as Sabine Hossenfelder – I don't mean her specifically, she's just a very particular example that's been discussed here – is really in charge of important things such as "what you may be loudly excited about". And that's the main reason why the creative work rooted in remarkable math has weakened. People who want to do such things are discouraged by their nearly heretical status – or at least by the shortage of rewards for this kind of work.

The result is that much of the work is dull and doesn't differ much from what could have been done in the 1980s if not the 1960s. The cutting edge has been obfuscated and has apparently disappeared. For a hypothetical #1 ingenious teenager, it's hard to show that he is #1. Even if he discovered something amazing, no one seems to be waiting for it, and that may be why he doesn't even try. Some billionaires or other influential people will have to create some artificial environment where "things are alright again", where brilliant people don't have to be ashamed of their brilliance.

by Luboš Motl (noreply@blogger.com) at November 03, 2017 09:18 AM

November 01, 2017

ZapperZ - Physics and Physicists

Are University Admissions Biased?
This is a rather interesting Minute Physics video. It tackles what is known as Simpson's Paradox. What is interesting is that it applies the paradox to an example where, at first glance, there appears to be no form of statistical bias, but when viewed another way, it seems that there is.
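To see how such a reversal can arise without anyone acting in a biased way, here is a minimal numerical sketch of my own (the numbers are invented for illustration and are not taken from the video or from any real admissions data):

# Hypothetical admission counts for two departments, chosen to exhibit Simpson's paradox.
# Department X is easy to get into, department Y is hard; women mostly apply to Y.
# Each entry is (applicants, admitted).
men   = {"X": (80, 60), "Y": (20, 5)}
women = {"X": (20, 16), "Y": (80, 24)}

def rate(group, dept):
    applied, admitted = group[dept]
    return admitted / applied

for dept in ("X", "Y"):
    print(dept, rate(men, dept), rate(women, dept))   # women do better in each department

def overall(group):
    applied = sum(a for a, _ in group.values())
    admitted = sum(x for _, x in group.values())
    return admitted / applied

print(overall(men), overall(women))   # ...yet men do better overall (0.65 vs 0.40)

Women are admitted at a higher rate in each department, yet at a lower rate overall, simply because they mostly apply to the department that is harder to get into.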



What is interesting here is that, several years ago, I mentioned an AIP study examining universities in the US that have a very small number of physics faculty, and how many of those departments do not have a single female faculty member. The study found that, statistically, this is what is expected based on the number of female physics PhDs, meaning that we can't simply accuse these schools (or the hiring of female physicists in general) of bias against female physicists. This Minute Physics video appears to provide an illustration of what is expected statistically without imposing any bias on the sample.

Again, I'm not saying that female physicists and faculty members do not face unfair or additional challenges in their careers compared to male physicists. But illustrations such as these should also be considered so that we tackle problems that are real and meaningful and do not chase something that is not the source of the problem.

Zz.

by ZapperZ (noreply@blogger.com) at November 01, 2017 02:18 AM

October 31, 2017

John Baez - Azimuth

Complex Adaptive Systems (Part 6)

I’ve been slacking off on writing this series of posts… but for a good reason: I’ve been busy writing a paper on the same topic! In the process I caught a couple of mistakes in what I’ve said so far. But more importantly, there’s a version out now, that you can read:

• John Baez, John Foley, Blake Pollard and Joseph Moeller, Network models.

There will be two talks about this at the AMS special session on Applied Category Theory this weekend at U. C. Riverside: one by John Foley of Metron Inc., and one by my grad student Joseph Moeller. I’ll try to get their talk slides someday. But for now, here’s the basic idea.

Our goal is to build operads suited for designing networks. These could be networks where the vertices represent fixed or moving agents and the edges represent communication channels. More generally, they could be networks where the vertices represent entities of various types, and the edges represent relationships between these entities—for example, that one agent is committed to take some action involving the other. This paper arose from an example where the vertices represent planes, boats and drones involved in a search and rescue mission in the Caribbean. However, even for this one example, we wanted a flexible formalism that can handle networks of many kinds, described at a level of detail that the user is free to adjust.

To achieve this flexibility, we introduced a general concept of ‘network model’. Simply put, a network model is a kind of network. Any network model gives an operad whose operations are ways to build larger networks of this kind by gluing smaller ones. This operad has a ‘canonical’ algebra where the operations act to assemble networks of the given kind. But it also has other algebras, where it acts to assemble networks of this kind equipped with extra structure and properties. This flexibility is important in applications.

What exactly is a ‘kind of network’? That’s the question we had to answer. We started with some examples. At the crudest level, we can model networks as simple graphs. If the vertices are agents of some sort and the edges represent communication channels, this means we allow at most one channel between any pair of agents.

However, simple graphs are too restrictive for many applications. If we allow multiple communication channels between a pair of agents, we should replace simple graphs with ‘multigraphs’. Alternatively, we may wish to allow directed channels, where the sender and receiver have different capabilities: for example, signals may only be able to flow in one direction. This requires replacing simple graphs with ‘directed graphs’. To combine these features we could use ‘directed multigraphs’.

But none of these are sufficiently general. It’s also important to consider graphs with colored vertices, to specify different types of agents, and colored edges, to specify different types of channels. This leads us to ‘colored directed multigraphs’.

All these are examples of what we mean by a ‘kind of network’, but none is sufficiently general. More complicated kinds, such as hypergraphs or Petri nets, are likely to become important as we proceed.

Thus, instead of separately studying all these kinds of networks, we introduced a unified notion that subsumes all these variants: a ‘network model’. Namely, given a set C of ‘vertex colors’, a network model is a lax symmetric monoidal functor

F: \mathbf{S}(C) \to \mathbf{Cat}

where \mathbf{S}(C) is the free strict symmetric monoidal category on C and \mathbf{Cat} is the category of small categories.

Unpacking this somewhat terrifying definition takes a little work. It simplifies in the special case where F takes values in \mathbf{Mon}, the category of monoids. It simplifies further when C is a singleton, since then \mathbf{S}(C) is the groupoid \mathbf{S}, where objects are natural numbers and morphisms from m to n are bijections

\sigma: \{1,\dots,m\} \to \{1,\dots,n\}

If we impose both these simplifying assumptions, we have what we call a one-colored network model: a lax symmetric monoidal functor

F : \mathbf{S} \to \mathbf{Mon}

As we shall see, the network model of simple graphs is a one-colored network model, and so are many other motivating examples. If you like André Joyal’s theory of ‘species’, then one-colored network models should be pretty fun, since they’re species with some extra bells and whistles.

But if you don’t, there’s still no reason to panic. In relatively down-to-earth terms, a one-colored network model amounts to roughly this. If we call elements of F(n) ‘networks with n vertices’, then:

• Since F(n) is a monoid, we can overlay two networks with the same number of vertices and get a new one. We call this operation

\cup \colon F(n) \times F(n) \to F(n)

• Since F is a functor, the symmetric group S_n acts on the monoid F(n). Thus, for each \sigma \in S_n, we have a monoid automorphism that we call simply

\sigma \colon F(n) \to F(n)

• Since F is lax monoidal, we also have an operation

\sqcup \colon F(m) \times F(n) \to F(m+n)

We call this operation the disjoint union of networks. In examples like simple graphs, it looks just like what it sounds like.

Unpacking the abstract definition further, we see that these operations obey some equations, which we list in Theorem 11 of our paper. They’re all obvious if you draw pictures of examples… and don’t worry, our paper has a few pictures. (We plan to add more.) For example, the ‘interchange law’

(g \cup g') \sqcup (h \cup h') = (g \sqcup h) \cup (g' \sqcup h')

holds whenever g,g' \in F(m) and h, h' \in F(n). This is a nice relationship between overlaying networks and taking their disjoint union.
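To make this concrete, here is a small sketch in Python of the simplest example, where F(n) consists of simple graphs on n vertices (this is my own illustration, not code from the paper; names like overlay and disjoint_union are just labels for the operations above):

# Minimal sketch of a one-colored network model: simple graphs on {0, ..., n-1}.
# A "network" is a pair (n, edges) with edges a frozenset of 2-element frozensets.

def overlay(g, h):
    """The monoid operation on F(n): union of edge sets (same vertex count)."""
    n, e = g
    m, f = h
    assert n == m
    return (n, e | f)

def disjoint_union(g, h):
    """The lax monoidal structure F(m) x F(n) -> F(m+n): shift h's vertices by m."""
    m, e = g
    n, f = h
    shifted = frozenset(frozenset(v + m for v in edge) for edge in f)
    return (m + n, e | shifted)

def permute(sigma, g):
    """The S_n action on F(n); sigma is a dict sending old vertex labels to new ones."""
    n, e = g
    return (n, frozenset(frozenset(sigma[v] for v in edge) for edge in e))

# Check the interchange law on a tiny example: (g ∪ g') ⊔ (h ∪ h') == (g ⊔ h) ∪ (g' ⊔ h')
E = lambda *pairs: frozenset(frozenset(p) for p in pairs)
g, gp = (2, E((0, 1))), (2, E())
h, hp = (3, E((0, 2))), (3, E((1, 2)))
lhs = disjoint_union(overlay(g, gp), overlay(h, hp))
rhs = overlay(disjoint_union(g, h), disjoint_union(gp, hp))
assert lhs == rhs

The final assert is just the interchange law checked on one small pair of graphs.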

In Section 2 of our paper we study one-colored network models, and give lots of examples. In Section 3 we describe a systematic procedure for getting one-colored network models from monoids. In Section 4 we study general network models and give examples of these. In Section 5 we describe a category \mathbf{NetMod} of network models, and show that the procedure for getting network models from monoids is functorial. We also make \mathbf{NetMod} into a symmetric monoidal category, and give examples of how to build new network models by tensoring old ones.

Our main result is that any network model gives a typed operad, also known as a ‘colored operad’. This operad has operations that describe how to stick networks of the given kind together to form larger networks of this kind. This operad has a ‘canonical algebra’, where it acts on networks of the given kind—but the real point is that it has lots of other algebras, where it acts on networks of the given kind equipped with extra structure and properties.

The technical heart of our paper is Section 6, mainly written by Joseph Moeller. This provides the machinery to construct operads from network models in a functorial way. Category theorists should find this section interesting, because it describes enhancements of the well-known ‘Grothendieck construction’ of the category of elements \int F of a functor

F: \mathbf{C} \to \mathbf{Cat}

where \mathbf{C} is any small category. For example, if \mathbf{C} is symmetric monoidal and F : \mathbf{C} \to (\mathbf{Cat}, \times) is lax symmetric monoidal, then we show \int F is symmetric monoidal. Moreover, we show that the construction sending the lax symmetric monoidal functor F to the symmetric monoidal category \int F is functorial.

In Section 7 we apply this machinery to build operads from network models. In Section 8 we describe some algebras of these operads, including an algebra whose elements are networks of range-limited communication channels. In future work we plan to give many more detailed examples, and to explain how these algebras, and the homomorphisms between them, can be used to design and optimize networks.

I want to explain all this in more detail—this is a pretty hasty summary, since I’m busy this week. But for now you can read the paper!


by John Baez at October 31, 2017 07:11 PM

October 30, 2017

Tommaso Dorigo - Scientificblogging

The Future Of The LHC, And The Human Factor
Today at CERN a workshop started on the physics of the High-Luminosity and High-Energy phases of Large Hadron Collider operations. This is a three-day event meant to prepare the ground for the decision on which of the several possible scenarios pictured for the future of particle physics in Europe will be the one in which the European Community will invest over the next few decades. The so-called "European Strategy for particle physics" will be decided in a couple of years, but getting the hard data on which to base that crucial decision is today's job.

Some context

read more

by Tommaso Dorigo at October 30, 2017 10:09 PM

October 26, 2017

Clifford V. Johnson - Asymptotia

Here and There

[Image – caption: Kent Devereaux @NHIAPres took this at PopTech]

I've been a bit pulled hither and thither these last ten days or so. I was preparing and then giving a couple of talks. One was at (En)Lightning Talks LA, and the other was at PopTech (in Camden, Maine). I was therefore a bit absent from here, the blog, but very present on social media at various points (especially at PopTech), so do check out the various social media options in the sidebar.

In both cases, the talks were about my work on my familiar (to many of you) theme: Working to put science back into the general culture where it belongs. The longer talk (at PopTech in Camden Maine) was 15 minutes long or so, and I gave some introduction and motivation to this mission, and then used two examples. The first was my work on science advising for movies and TV, and I gave examples of what I consider good practice in terms of how [...] Click to continue reading this post

The post Here and There appeared first on Asymptotia.

by Clifford at October 26, 2017 10:04 PM

Tommaso Dorigo - Scientificblogging

A Simple Two-Mover
My activity as a chessplayer has seen a steady decline in the past three years, due to overwhelming work obligations. To play in chess tournaments at a decent level, you not only need to be physically fit and well trained for the occasion, but also have your mind free from other thoughts. Alas, I have been failing miserably in the second and third of the above requirements. So I have essentially retired from competitive chess, and my only connection to the chess world is through the occasional 5-minute blitz game over the internet.

read more

by Tommaso Dorigo at October 26, 2017 08:02 PM

Lubos Motl - string vacua and pheno

Cross sections: visualizations are possible but reality is their generalization
In particle physics and similar disciplines, cross sections are quantities that determine the probability that a collision of two objects leads to a "particular desired outcome". The Symmetry Magazine wrote an article promoting the concept and saying how wonderful it is that it's used in several disciplines of science.



I want to use the example of the "cross section" concept to show in what sense quantum mechanics "builds upon" the pictures we could draw in classical physics; but it isn't quite one of the classical pictures. So yes, this is a blog post in the "foundations of quantum mechanics" category.

"Cross sections" play the very same role as general "probabilities" of some evolution, transition, or process except that they're optimized for a special class of initial states – initial states that look like two particles or objects heading for a collision.




Let's start with the simple classical picture. Imagine that you're shooting bullets at something or somebody. To have a particular example in mind, shoot bullets from a CZ 75 at a Slovak communist agent. Note that I am not mentioning any specific target by name – we're just fighting pure abstract evil. A bullet that at least touches the agent is considered a "success".




OK, you stand at some distance from him, aim at him, and shoot many times. How many times will you be successful?

It depends on the "size" of the communist agent in some way, because the "larger" he is, the easier it is to hit him; and on the precision of your aiming, because the more precisely you aim, the higher your chances are of neutralizing him. What are the equations? If you shoot \(W\) bullets per second, we can call it a rate. What's more directly relevant is the "flux" \(\Phi\), i.e. the number of bullets per square meter per second that are flying "somewhere near" the agent. Once again, the units of \(\Phi\) are "one [projectile] per square meter per second". A projectile is dimensionless – we just count them. In some rough counting, \(\Phi = W / A\), where \(W=N/t\) is the number of bullets per second and \(A\) is the effective area, at the target's distance, over which the flux of the projectiles is spread.

The more precisely you're able to aim, the smaller the area \(A\) in the denominator is, and the higher the flux \(\Phi = W / A\). Now, to count the number of "successful" bullets per second, you multiply the flux by a characteristic constant, the cross section \(\sigma\),\[

W_{\rm success} = \Phi \cdot \sigma,

\] which has the units of area. This \(W_{\rm success}\) is the number of projectiles that hit the agent each second. You see that it's proportional to all the natural factors – to the flux \(\Phi\) and to some "size" \(\sigma\) of the agent. This way to express the success rate is natural because it separates all the quantities that matter into two groups: the flux \(\Phi\) takes care of all the parameters describing your gun, its speed, your aiming, the distance etc., while \(\sigma\) includes all the information about the agent himself, as seen from the direction of the projectiles, and his local interactions with the bullet.

It's not hard to see that in the purely classical, geometric case, \(\sigma\) is just the area of the agent's projection onto a two-dimensional plane orthogonal to the direction of the projectiles, projecting along rays parallel to that direction. The cross section is literally the cross-sectional area of the agent, i.e. of his projection.
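As a sanity check on the units and the formula, here is a minimal numerical sketch (all numbers are invented for illustration):

W = 10.0        # bullets fired per second
A = 2.0         # area in m^2 over which those bullets are spread at the target's distance
sigma = 0.5     # cross-sectional area of the target in m^2

Phi = W / A                # flux: bullets per m^2 per second
W_success = Phi * sigma    # successful hits per second
print(W_success)           # 2.5 hits per second, i.e. one quarter of the bullets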



Note that one can also "refine" the concept of the cross section according to the final state. For example, you may want to know the rate of successful bullets that hit a square centimeter of his brain at some location; or events in which the bullets hit him and recoil to hit another region in space afterwards. One may therefore divide the final states into a whole spectrum and describe the "success rate" as an integral over some variables, such as angles or a coordinate describing a location in his brain. The integrand – usually in an integral over angles describing the direction of motion of some final objects – is called the "differential cross section".
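Written explicitly, with \(\theta\) and \(\phi\) the angles describing the direction of the outgoing object (this display is added here only to fix the standard notation),\[

\sigma_{\rm tot} = \int \frac{d\sigma}{d\Omega}\,d\Omega = \int_0^{2\pi} d\phi \int_0^{\pi} d\theta\,\sin\theta\,\frac{d\sigma}{d\Omega}.

\]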

Great. What I presented above is a pedagogic model of the cross section. You can learn how to think about the cross section, flux, and various rates if you have a particular projectile and a particular target in mind. However, the point I want to make is that the same mathematics applies more generally. The analogous quantities and equations may be helpful even if you shoot at a different target, e.g. a Japanese immigrant who fights against immigrants too assertively. Or a pirate. Or another de facto winner of the elections in a Central European country. ;-) The equations also work when you replace the CZ 75 with a Vz. 58, the superior Czechoslovak answer to the Kalashnikov.

Some generalizations are obvious. The shape of the gun, projectile, and target may be anything. But when you're talking about the proton-proton collisions or other collisions in particle physics, you need to generalize all the ideas above a little bit more dramatically. First, when two protons collide, they don't have a sharp shape. One of the "slightly harder" but still easy generalizations allows you to make the boundary of the target (or projectile) fuzzy.

Imagine that you're colliding protons according to the laws of classical physics. They repel by the Coulomb force. What is the probability that the proton-projectile's direction of motion will change at least a tiny little bit? Well, it's actually 100%. You would need an infinitely unlikely conspiracy for all the effects of the other proton to cancel exactly. For example, you would need to shoot one proton precisely at the center of the other, and it's impossible to aim this precisely.

In fact, in classical physics, the direction of the projectile changes at least a little bit, regardless of the impact parameter – so the cross section is infinite – for any potential that is nonzero at arbitrary distances. Even if the repulsive potential decreases as quickly as \(\exp(-Kr)/r\), the Yukawa potential, the probability is zero that the direction of the particle is precisely unchanged. The force is simply nonzero, and an arbitrarily small nonzero force changes the direction of the motion of the projectiles.

As you may have guessed, my aim – figuratively, in this case – is to discuss the quantum mechanical generalization. What is the actual change or generalization that quantum mechanics requires? And what are the new consequences?

In quantum mechanics, you may still talk about "similar" initial states and "similar" final states. Also, you may calculate\[

W_{\rm success} = \Phi \cdot \sigma,

\] according to the same equation that we understood classically. What quantum mechanics changes is the character of the intermediate states. In classical physics, you could mentally trace the projectile at every moment of its trajectory and objectively say what was happening with the bullet, how the size of a hole in the communist agent's skull was increasing in a particular millisecond, and so on. Maybe you didn't rent a high-speed camera to record what was happening with the agent. But classical physics allows you to imagine that someone could have monitored it and this monitoring wouldn't change anything substantial about the assassination. Some pictures of the intermediate state "objectively exist" even if no one knows what they are, classical physics allows you to assume.

That's not the case in quantum mechanics. The intermediate states – when they're not observed – don't have any objective properties. And if you wanted to observe what some quantities or pictures are during the collision itself, you would generically change the experiment and its outcome. In quantum mechanics, something happens in the intermediate state – some black box is operational in the middle – but you may talk about the transformation of initial states (analogous to those in classical physics) into final states (also analogous to classical physics), and quantum mechanics directly gives you a novel prescription to calculate the intrinsic probability of the process, e.g. the cross section of a collision. The quantum mechanical prescription is built on Born's rule (the squared absolute value of some complex probability amplitude), involving complex amplitudes extracted as matrix elements and products of operators acting on a complex Hilbert space.

Let's ask: Does the need to abandon the intermediate pictures violate some well-known facts? Is it consistent? Should it have been expected?

Well, quantum mechanics is at least as consistent as classical physics. It talks about "analogous observable" initial and final states. It just replaces the hard work in between with a "black box". For consistency, the quantum mechanical probabilities need to obey laws such as\[

P(A\text{ or }B) = P(A)+P(B) - P(A\text{ and } B)

\] and so on. And one may prove that all these rules are obeyed by the Born-rule-based quantum mechanical formulae for the probabilities and the cross sections. You need the orthogonality of the post-measurement states; or the consistency condition for the "consistent histories" in the interpretation of quantum mechanics based on consistent histories. These conditions are morally equivalent and when you do things right, they just work. So all the rules for probabilities that guarantee the consistency of mathematical logic within classical physics are still obeyed although you need to play with operators and vectors in a complex Hilbert space to prove them in quantum mechanics. But once again, what matters is that these formulae work – the theory is consistent.

Is there some "need" for the intermediate pictures? Not really. By definition, the intermediate states of a transition e.g. collision are not observed. So if they're not observed, you can't have any empirical evidence about them – and about their existence. The empirical evidence is one that matters in science. Because you don't have the evidence for the existence of any particular properties of these hypothetical intermediate states (analogous to the growing hole in the skull of the agent etc.), you shouldn't claim that they exist or that they "should" exist. Quantum mechanics doesn't really allow them to exist, they don't exist, and there is absolutely nothing "wrong" about this novel statement. Instead, it's a deep discovery of the 20th century science.

I started with the cross sections but the same comments about the intermediate state – and the absence of particular pictures – holds for any process in quantum mechanics.

What are the differences in the predicted cross sections?

Because the character of the "intermediate history" is conceptually different in quantum mechanics, quantum mechanics tells us to calculate the cross sections and probabilities using very different formulae. They're not just a version of the classical "shooting at agents". You need operators on complex linear spaces. Are the resulting cross sections the same as in a classical theory?

Not quite. Some of them may be very similar, and there exist classical limits of most quantum mechanical theories – extreme parts of some parameter spaces – where the quantum mechanical answers mimic the classical ones. Sometimes, e.g. exactly in Coulomb scattering, the cross sections calculated quantum mechanically have exactly the same form as in classical physics – these situations are rare and special, however.

In general, the quantum mechanical answers may differ so deeply that one may prove that no classical scattering – with particular pictures in between – could produce exactly the same answers. But there are some easy-to-understand and rather general differences in the quantum mechanical cross sections. One of them is that finite cross sections are omnipresent in quantum mechanics. What do I mean?

Do you remember that I argued that the "total cross section" is infinite for any potential in classical physics because the force is nonzero and always causes "at least a slight" change of the direction of the projectile? Well, so this conclusion is basically wrong for "one-half" of the potentials in quantum mechanics. The potentials with a finite cross section are known as resulting from the "short-range forces" while the potentials with the infinite cross section, like in classical physics, follow from "long-range forces".

First, examples. The Coulomb \(1/r\) potential is a long-range force. Classically, the force always changes the direction of the charged particle at least a little bit. That's why you don't need to aim at all if your only goal is to change the direction at least a little bit: you always succeed. So it's like hitting an infinitely fat agent. You can't miss him. I also mentioned that the formula for the Coulomb cross section happens to be the same in quantum mechanics. So the total cross section is infinite in quantum mechanics, too.

What about the Yukawa potential, \(\exp(-Kr)/r\)? In classical physics, it's still true that the force is nonzero at any distance, so it's virtually guaranteed that the direction of the projectile changes even if you don't aim precisely at all. It's different in quantum mechanics. Why? In quantum mechanics,
arbitrarily weak but guaranteed effects are replaced with finite (not so tiny) changes that have a small probability.
And for this reason, the cross section of the scattering governed by the Yukawa potential is finite! The point is that you may calculate the probability that the direction changes at all, given some "impact parameter" (the distance between the axis of the initial motion and the center of the target object). And the probability is strictly between 0 and 100 percent. It's because the tiny force acts differently than it did in classical physics. In classical physics, a tiny force created a tiny change of the velocity of the projectile.

In quantum mechanics, the tiny force changes the probabilities that the direction of motion is one vector or another. But the probability that the velocity remains precisely the same as it was in the beginning remains nonzero. One suppresses the probability amplitude for that default velocity a little bit, but it doesn't instantly drop to zero. It drops by a finite amount – and the probabilities for different directions rise from zero to a finite amount.
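To quantify the finiteness, one may quote the standard Born-approximation result – not derived in this post, just added as an illustration, with \(g\) my label for the overall strength of the potential \(V(r)=g\,e^{-Kr}/r\), \(m\) the projectile's mass, \(k\) the incoming wave number, and \(q=2k\sin(\theta/2)\) the momentum transfer:\[

\frac{d\sigma}{d\Omega} = \left(\frac{2mg}{\hbar^2}\right)^2\frac{1}{(q^2+K^2)^2}, \qquad \sigma_{\rm tot} = \left(\frac{2mg}{\hbar^2}\right)^2\frac{4\pi}{K^2(K^2+4k^2)},

\] which is finite for every \(K>0\) and blows up in the Coulomb limit \(K\to 0\), exactly the kind of finite answer described in the previous paragraph.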

This is a characteristic quantum mechanical behavior. You may see it in related situations and descriptions. You know, when a particle changes its velocity, you may understand the change as the emission of some electromagnetic waves. But in quantum mechanics, one emits photons and at a given frequency – and the frequency may be "approximately given" in the case of scattering – one can't have arbitrarily weak electromagnetic waves. One photon is the minimum amount of energy that electromagnetic waves of that frequency can carry!

So instead of emitting \(N\) photons where \(N\) is an arbitrary positive real number, you may only emit \(N\in\ZZ\) photons, an integer, and the small number resulting from the tiny Yukawa potential manifests itself as the tiny probability that the number of emitted photons changes from \(N=0\) to \(N=1\) at all! When the force is really weak, the probability of \(N=2,3,\dots\) is negligible.

The most general lesson I want to convey is that the particular classical theories and "pictures" may be good to visualize the basic concepts such as probabilities and cross sections – they may be useful for pedagogic reasons. But physics – and quantum mechanics – may recycle many of the formulae and rules while completely denying the pictures that were used to "visualize" the cross section and other things. These pictures aren't really needed and, if you study things carefully, you find out that they don't exist according to the laws of Nature as we've known them for almost a century.

So quantum mechanics generalizes classical physics with its picture in a similar way as one axiomatic system in mathematics generalizes another – when some of the axioms are simply dropped. The main axiom of classical physics that is dropped is that "there exists an objective, particular picture of reality at every moment of the intermediate evolution". This axiom is not only dropped; one can indeed prove that this classical assumption is false within the framework of quantum mechanics. It's an example of a revolution in physics but there's nothing wrong about the new, quantum framework because the new framework is as internally consistent as the classical framework was; and it doesn't contradict any empirically proven facts, either. It's just new and may be philosophically challenging for many people – but that's not a rational reason to refuse a physical theory.

And that's the memo.

by Luboš Motl (noreply@blogger.com) at October 26, 2017 11:38 AM

October 24, 2017

Andrew Jaffe - Leaves on the Line

The Chandrasekhar Mass and the Hubble Constant

The first direct detection of gravitational waves was announced in February of 2016 by the LIGO team, after decades of planning, building and refining their beautiful experiment. Since that time, the US-based LIGO has been joined by the European Virgo gravitational wave telescope (and more are planned around the globe).

The first four events that the teams announced were from the spiralling in and eventual mergers of pairs of black holes, with masses ranging from about seven to about forty times the mass of the sun. These masses are perhaps a bit higher than we expect to be typical, which might raise intriguing questions about how such black holes were formed and evolved, although even comparing the results to the predictions is a hard problem depending on the details of the statistical properties of the detectors and the astrophysical models for the evolution of black holes and the stars from which (we think) they formed.

Last week, the teams announced the detection of a very different kind of event, the collision of two neutron stars, each about 1.4 times the mass of the sun. Neutron stars are one possible end state of the evolution of a star, when its atoms are no longer able to withstand the pressure of the gravity trying to force them together. This was first understood by S Chandrasekhar in the early years of the 20th Century, who realised that there was a limit to the mass of a star held up simply by the quantum-mechanical repulsion of the electrons at the outskirts of the atoms making up the star. When you surpass this mass, known, appropriately enough, as the Chandrasekhar mass, the star will collapse in upon itself, combining the electrons and protons into neutrons and likely releasing a vast amount of energy in the form of a supernova explosion. After the explosion, the remnant is likely to be a dense ball of neutrons, whose properties are actually determined fairly precisely by similar physics to that of the Chandrasekhar limit (discussed for this case by Oppenheimer, Volkoff and Tolman), giving us the magic 1.4 solar mass number.
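For scale (this formula is my addition, quoted only up to a numerical prefactor of order one): the limiting mass can be written in terms of fundamental constants as roughly M_Ch ∼ (ħc/G)^(3/2) / (μ_e m_H)², where m_H is the mass of a hydrogen atom and μ_e the number of nucleons per electron; with the proper prefactor and μ_e ≈ 2, this works out to about 1.4 times the mass of the sun.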

(Last week also coincidentally would have seen Chandrasekhar’s 107th birthday, and Google chose to illustrate their home page with an animation in his honour for the occasion. I was a graduate student at the University of Chicago, where Chandra, as he was known, spent most of his career. Most of us students were far too intimidated to interact with him, although it was always seen as an auspicious occasion when you spotted him around the halls of the Astronomy and Astrophysics Center.)

This process can therefore make a single 1.4 solar-mass neutron star, and we can imagine that in some rare cases we can end up with two neutron stars orbiting one another. Indeed, the fact that LIGO saw one, but only one, such event during its year-and-a-half run allows the teams to constrain how often that happens, albeit with very large error bars, between 320 and 4740 events per cubic gigaparsec per year; a cubic gigaparsec is about 3 billion light-years on each side, so these are rare events indeed. These results and many other scientific inferences from this single amazing observation are reported in the teams’ overview paper.

A series of other papers discuss those results in more detail, covering everything from the physics of neutron stars to limits on departures from Einstein’s theory of gravity (for more on some of these other topics, see this blog, or this story from the NY Times). As a cosmologist, the most exciting of the results was the use of the event as a “standard siren”, an object whose gravitational wave properties are well-enough understood that we can deduce the distance to the object from the LIGO results alone. Although the idea came from Bernard Schutz in 1986, the term “standard siren” was coined somewhat later (by Sean Carroll) in analogy to the (heretofore?) more common cosmological standard candles and standard rulers: objects whose intrinsic brightness or size is known, and so whose distances can be measured by observations of their apparent brightness or size, just as you can roughly deduce how far away a light bulb is by how bright it appears, or how far away a familiar object or person is by how big it looks.

Gravitational wave events are standard sirens because our understanding of relativity is good enough that an observation of the shape of the gravitational wave pattern as a function of time can tell us the properties of its source. Knowing that, we also then know the amplitude of that pattern when it was released. Over the time since then, as the gravitational waves have travelled across the Universe toward us, the amplitude has gone down (more distant objects look dimmer and sound quieter); the expansion of the Universe also causes the frequency of the waves to decrease — this is the cosmological redshift that we observe in the spectra of distant objects’ light.

Unlike LIGO’s previous detections of binary-black-hole mergers, this new observation of a binary-neutron-star merger was also seen in photons: first as a gamma-ray burst, and then as a “nova”: a new dot of light in the sky. Indeed, the observation of the afterglow of the merger by teams of literally thousands of astronomers in gamma and x-rays, optical and infrared light, and in the radio, is one of the more amazing pieces of academic teamwork I have seen.

And these observations allowed the teams to identify the host galaxy of the original neutron stars, and to measure the redshift of its light (the lengthening of the light’s wavelength due to the movement of the galaxy away from us). It is most likely a previously unexceptional galaxy called NGC 4993, with a redshift z=0.009, putting it about 40 megaparsecs away, relatively close on cosmological scales.

But this means that we can measure all of the factors in one of the most celebrated equations in cosmology, Hubble's law: cz=H₀d, where c is the speed of light, z is the redshift just mentioned, and d is the distance measured from the gravitational wave burst itself. This just leaves H₀, the famous Hubble Constant, giving the current rate of expansion of the Universe, usually measured in kilometres per second per megaparsec. The old-fashioned way to measure this quantity is via the so-called cosmic distance ladder, bootstrapping up from nearby objects of known distance to more distant ones whose properties can only be calibrated by comparison with those more nearby. But errors accumulate in this process and we can be susceptible to the weakest rung on the chain (see recent work by some of my colleagues trying to formalise this process). Alternately, we can use data from cosmic microwave background (CMB) experiments like the Planck Satellite (see here for lots of discussion on this blog); the typical size of the CMB pattern on the sky is something very like a standard ruler. Unfortunately, it, too, needs to be calibrated, implicitly by other aspects of the CMB pattern itself, and so ends up being a somewhat indirect measurement. Currently, the best cosmic-distance-ladder measurement gives something like 73.24 ± 1.74 km/sec/Mpc whereas Planck gives 67.81 ± 0.92 km/sec/Mpc; these numbers disagree by "a few sigma", enough that it is hard to explain as simply a statistical fluctuation.
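Just to see the orders of magnitude, here is the naive version of that arithmetic, using only the numbers quoted in this post and ignoring the inclination and peculiar-velocity complications discussed below (so this is a rough sketch of my own, not the teams' analysis):

c = 299792.458      # speed of light in km/s
z = 0.009           # redshift of NGC 4993, quoted above
d = 40.0            # distance in Mpc inferred from the gravitational waves
H0 = c * z / d      # Hubble's law: cz = H0 * d
print(round(H0, 1)) # roughly 67 km/s/Mpc, in the same ballpark as the published value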

Unfortunately, the new LIGO results do not solve the problem. Because we cannot observe the inclination of the neutron-star binary (i.e., the orientation of its orbit), this blows up the error on the distance to the object, due to the Bayesian marginalisation over this unknown parameter (just as the Planck measurement requires marginalization over all of the other cosmological parameters to fully calibrate the results). Because the host galaxy is relatively nearby, the teams must also account for the fact that the redshift includes the effect not only of the cosmological expansion but also the movement of galaxies with respect to one another due to the pull of gravity on relatively large scales; this so-called peculiar velocity has to be modelled which adds further to the errors.

This procedure gives a final measurement of 70.0 +12.0/−8.0 km/sec/Mpc, with the full shape of the probability curve shown in the Figure, taken directly from the paper. Both the Planck and distance-ladder results are consistent with these rather large error bars. But this is calculated from a single object; as more of these events are seen, these error bars will go down, typically by something like the square root of the number of events, so it might not be too long before this is the best way to measure the Hubble Constant.

[Figure: the probability curve for H₀ from the gravitational-wave measurement, taken from the paper]

[Apologies: too long, too technical, and written late at night while trying to get my wonderful not-quite-three-week-old daughter to sleep through the night.]

by Andrew at October 24, 2017 10:44 AM

October 23, 2017

Clifford V. Johnson - Asymptotia

Podcast Appreciation, 1

This is the first in a short series of posts about some favourite podcasts I've been listening to over the last year and a half or so.

This episode I'll mention Comics Alternative, Saturday Review and Desi Geek Girls.

But first, why am I doing this? The final six months of work on the book were a very intense period of effort. That's actually an understatement. There has been no comparable period of work in my life in terms of the necessary discipline, delicious intensity, steep learning curve, and so much more that was needed to produce the roughly 200 pages of remaining final art to complete the (248-page) book. (While still doing my professoring gig and being a new dad.) I absolutely loved it - such challenges are just a delight to me.

I listened to music a lot, and rediscovered a lot of old parts of my music listening habits, which was fun (I'd have days where I'd listen to (and sing along to) all of Kate Bush's albums in order, then maybe the same for Sting, or Lee Morgan.... or scream along to Jeff Wayne's awesome "War of the Worlds" Rock musical.) But then I got to a certain point in my workflow where I wanted voices, and I reached for radio, and podcasts.

Since I was a child, listening to spoken word radio has been a core part of[...]

The post Podcast Appreciation, 1 appeared first on Asymptotia.

by Clifford at October 23, 2017 03:22 AM

October 22, 2017

John Baez - Azimuth

Applied Category Theory 2018 — Adjoint School

The deadline for applying to this ‘school’ on applied category theory is Wednesday November 1st.

Applied Category Theory: Adjoint School: online sessions starting in January 2018, followed by a meeting 23–27 April 2018 at the Lorentz Center in Leiden, the Netherlands. Organized by Bob Coecke (Oxford), Brendan Fong (MIT), Aleks Kissinger (Nijmegen), Martha Lewis (Amsterdam), and Joshua Tan (Oxford).

The name ‘adjoint school’ is a bad pun, but the school should be great. Here’s how it works:

Overview

The Workshop on Applied Category Theory 2018 takes place in May 2018. A principal goal of this workshop is to bring early career researchers into the applied category theory community. Towards this goal, we are organising the Adjoint School.

The Adjoint School will run from January to April 2018. By the end of the school, each participant will:

  • be familiar with the language, goals, and methods of four prominent, current research directions in applied category theory;
  • have worked intensively on one of these research directions, mentored by an expert in the field; and
  • know other early career researchers interested in applied category theory.

They will then attend the main workshop, well equipped to take part in discussions across the diversity of applied category theory.

Structure

The Adjoint School comprises (1) an Online Reading Seminar from January to April 2018, and (2) a four day Research Week held at the Lorentz Center, Leiden, The Netherlands, from Monday April 23rd to Thursday April 26th.

In the Online Reading Seminar we will read papers on current research directions in applied category theory. The seminar will consist of eight iterations of a two week block. Each block will have one paper as assigned reading, two participants as co-leaders, and three phases:

  • A presentation (over WebEx) on the assigned reading delivered by the two block co-leaders.
  • Reading responses and discussion on a private forum, facilitated by Brendan Fong and Nina Otter.
  • Publication of a blog post on the n-Category Café written by the co-leaders.

Each participant will be expected to co-lead one block.

The Adjoint School is taught by mentors John Baez, Aleks Kissinger, Martha Lewis, and Pawel Sobocinski. Each mentor will mentor a working group comprising four participants. During the second half of the Online Reading Seminar, these working groups will begin to meet with their mentor (again over video conference) to learn about open research problems related to their reading.

In late April, the participants and the mentors will convene for a four day Research Week at the Lorentz Center. After opening lectures by the mentors, the Research Week will be devoted to collaboration within the working groups. Morning and evening reporting sessions will keep the whole school informed of the research developments of each group.

The following week, participants will attend Applied Category Theory 2018, a discussion-based 60-attendee workshop at the Lorentz Center. Here they will have the chance to meet senior members across the applied category theory community and learn about ongoing research, as well as industry applications.

Following the school, successful working groups will be invited to contribute to a new, soon-to-be-launched CUP book series.

Reading list

Meetings will be on Mondays; we will determine a time depending on the locations of the chosen participants.

Research projects

John Baez: Semantics for open Petri nets and reaction networks
Petri nets and reaction networks are widely used to describe systems of interacting entities in computer science, chemistry and other fields, but the study of open Petri nets and reaction networks is new, and raises many new questions connected to Lawvere’s “functorial semantics”.
Reading: Fong; Baez and Pollard.

Aleks Kissinger: Unification of the logic of causality
Employ the framework of (pre-)causal categories to unite notions of causality and techniques for causal reasoning which occur in classical statistics, quantum foundations, and beyond.
Reading: Kissinger and Uijlen; Henson, Lal, and Pusey.

Martha Lewis: Compositional approaches to linguistics and cognition
Use compact closed categories to integrate compositional models of meaning with distributed, geometric, and other meaning representations in linguistics and cognitive science.
Reading: Coecke, Sadrzadeh, and Clark; Bolt, Coecke, Genovese, Lewis, Marsden, and Piedeleu.

Pawel Sobocinski: Modelling of open and interconnected systems
Use Carboni and Walters’ bicategories of relations as a multidisciplinary algebra of open and interconnected systems.
Reading: Carboni and Walters; Willems.

Applications

We hope that each working group will comprise both participants who specialise in category theory and in the relevant application field. As a prerequisite, those participants specialising in category theory should feel comfortable with the material found in Categories for the Working Mathematician or its equivalent; those specialising in applications should have a similar graduate-level introduction.

To apply, please fill out the form here. You will be asked to upload a single PDF file containing the following information:

  • Your contact information and educational history.
  • A brief paragraph explaining your interest in this course.
  • A paragraph or two describing one of your favorite topics in category theory, or your application field.
  • A ranked list of the papers you would most like to present, together with an explanation of your preferences. Note that the paper you present determines which working group you will join.

You may add your CV if you wish.

Anyone is welcome to apply, although preference may be given to current graduate students and postdocs. Women and members of other underrepresented groups within applied category theory are particularly encouraged to apply.

Some support will be available to help with the costs (flights, accommodation, food, childcare) of attending the Research Week and the Workshop on Applied Category Theory; please indicate in your application if you would like to be considered for such support.

If you have any questions, please feel free to contact Brendan Fong (bfo at mit dot edu) or Nina Otter (otter at maths dot ox dot ac dot uk).

Application deadline: November 1st, 2017.


by John Baez at October 22, 2017 09:09 PM

October 21, 2017

Tommaso Dorigo - Scientificblogging

What Is Statistical Significance?
Yesterday, October 20, was the international day of Statistics. I took inspiration from it to select a clip from chapter 7 of my book "Anomaly! Collider physics and the quest for new phenomena at Fermilab" which attempts to explain how physicists use the concept of statistical significance to give a quantitative meaning to their measurements of new effects. I hope you will enjoy it....
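As a rough illustration of the idea (a sketch, not a clip from the book): for a simple counting experiment with a known expected background, an observed excess can be converted into a p-value and then into the equivalent number of Gaussian sigma. The numbers below are invented for the example.

# Toy significance calculation for a counting experiment (hypothetical numbers).
from scipy import stats

expected_background = 100.0   # events expected from known processes alone
observed = 135                # events actually counted

# p-value: chance of seeing at least this many events from background fluctuations
p_value = stats.poisson.sf(observed - 1, expected_background)
# express the same p-value as an equivalent one-sided Gaussian significance
n_sigma = stats.norm.isf(p_value)
print(f"p = {p_value:.2e}, about {n_sigma:.1f} sigma")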


by Tommaso Dorigo at October 21, 2017 10:36 AM

October 17, 2017

Matt Strassler - Of Particular Significance

The Significance of Yesterday’s Gravitational Wave Announcement: an FAQ

Yesterday’s post on the results from the LIGO/VIRGO network of gravitational wave detectors was aimed at getting information out, rather than providing the pedagogical backdrop.  Today I’m following up with a post that attempts to answer some of the questions that my readers and my personal friends asked me.  Some wanted to understand better how to visualize what had happened, while others wanted more clarity on why the discovery was so important.  So I’ve put together a post which (1) explains what neutron stars and black holes are and what their mergers are like, (2) clarifies why yesterday’s announcement was important — and there were many reasons, which is why it’s hard to reduce it all to a single soundbite — and (3) answers some miscellaneous questions at the end.

First, a disclaimer: I am *not* an expert in the very complex subject of neutron star mergers and the resulting explosions, called kilonovas.  These are much more complicated than black hole mergers.  I am still learning some of the details.  Hopefully I’ve avoided errors, but you’ll notice a few places where I don’t know the answers … yet.  Perhaps my more expert colleagues will help me fill in the gaps over time.

Please, if you spot any errors, don’t hesitate to comment!!  And feel free to ask additional questions whose answers I can add to the list.

BASIC QUESTIONS ABOUT NEUTRON STARS, BLACK HOLES, AND MERGERS

What are neutron stars and black holes, and how are they related?

Every atom is made from a tiny atomic nucleus, made of neutrons and protons (which are very similar), and loosely surrounded by electrons. Most of an atom is empty space, so it can, under extreme circumstances, be crushed — but only if every electron and proton convert to a neutron (which remains behind) and a neutrino (which heads off into outer space.) When a giant star runs out of fuel, the pressure from its furnace turns off, and it collapses inward under its own weight, creating just those extraordinary conditions in which the matter can be crushed. Thus: a star’s interior, with a mass one to several times the Sun’s mass, is all turned into a several-mile(kilometer)-wide ball of neutrons — the number of neutrons approaching a 1 with 57 zeroes after it.
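That “1 with 57 zeroes” is easy to check with round numbers, assuming a 1.5-solar-mass star made essentially entirely of neutrons:

# Back-of-the-envelope count of the neutrons in a neutron star.
solar_mass_kg = 1.989e30
neutron_mass_kg = 1.675e-27
n_neutrons = 1.5 * solar_mass_kg / neutron_mass_kg   # assume 1.5 solar masses
print(f"{n_neutrons:.1e}")   # ~1.8e57: a 1 followed by 57 zeroes, give or take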

If the star is big but not too big, the neutron ball stiffens and holds its shape, and the star explodes outward, blowing itself to pieces in what is called a core-collapse supernova. The ball of neutrons remains behind; this is what we call a neutron star. It’s a ball of the densest material that we know can exist in the universe — a pure atomic nucleus many miles(kilometers) across. It has a very hard surface; if you tried to go inside a neutron star, your experience would be a lot worse than running into a closed door at a hundred miles per hour.

If the star is very big indeed, the neutron ball that forms may immediately (or soon) collapse under its own weight, forming a black hole. A supernova may or may not result in this case; the star might just disappear. A black hole is very, very different from a neutron star. Black holes are what’s left when matter collapses irretrievably upon itself under the pull of gravity, shrinking down endlessly. While a neutron star has a surface that you could smash your head on, a black hole has no surface — it has an edge that is simply a point of no return, called a horizon. In Einstein’s theory, you can just go right through, as if passing through an open door. You won’t even notice the moment you go in. [Note: this is true in Einstein’s theory. But there is a big controversy as to whether the combination of Einstein’s theory with quantum physics changes the horizon into something novel and dangerous to those who enter; this is known as the firewall controversy, and would take us too far afield into speculation.]  But once you pass through that door, you can never return.

Black holes can form in other ways too, but not those that we’re observing with the LIGO/VIRGO detectors.

Why are their mergers the best sources for gravitational waves?

One of the easiest and most obvious ways to make gravitational waves is to have two objects orbiting each other.  If you put your two fists in a pool of water and move them around each other, you’ll get a pattern of water waves spiraling outward; this is in rough (very rough!) analogy to what happens with two orbiting objects, although, since the objects are moving in space, the waves aren’t in a material like water.  They are waves in space itself.

To get powerful gravitational waves, you want objects each with a very big mass that are orbiting around each other at very high speed. To get the fast motion, you need the force of gravity between the two objects to be strong; and to get gravity to be as strong as possible, you need the two objects to be as close as possible (since, as Isaac Newton already knew, gravity between two objects grows stronger when the distance between them shrinks.) But if the objects are large, they can’t get too close; they will bump into each other and merge long before their orbit can become fast enough. So to get a really fast orbit, you need two relatively small objects, each with a relatively big mass — what scientists refer to as compact objects. Neutron stars and black holes are the most compact objects we know about. Fortunately, they do indeed often travel in orbiting pairs, and do sometimes, for a very brief period before they merge, orbit rapidly enough to produce gravitational waves that LIGO and VIRGO can observe.

Why do we find these objects in pairs in the first place?

Stars very often travel in pairs… they are called binary stars. They can start their lives in pairs, forming together in large gas clouds, or even if they begin solitary, they can end up pairing up if they live in large densely packed communities of stars where it is common for multiple stars to pass nearby. Perhaps surprisingly, their pairing can survive the collapse and explosion of either star, leaving two black holes, two neutron stars, or one of each in orbit around one another.

What happens when these objects merge?

Not surprisingly, there are three classes of mergers which can be detected: two black holes merging, two neutron stars merging, and a neutron star merging with a black hole. The first class was observed in 2015 (and announced in 2016), the second was announced yesterday, and it’s a matter of time before the third class is observed. The two objects may orbit each other for billions of years, very slowly radiating gravitational waves (an effect observed in the 70’s, leading to a Nobel Prize) and gradually coming closer and closer together. Only in the last day of their lives do their orbits really start to speed up. And just before these objects merge, they begin to orbit each other once per second, then ten times per second, then a hundred times per second. Visualize that if you can: objects a few dozen miles (kilometers) across, a few miles (kilometers) apart, each with the mass of the Sun or greater, orbiting each other 100 times each second. It’s truly mind-boggling — a spinning dumbbell beyond the imagination of even the greatest minds of the 19th century. I don’t know any scientist who isn’t awed by this vision. It all sounds like science fiction. But it’s not.
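To see that “100 times each second” is the right ballpark, here is a purely Newtonian (Keplerian) estimate for two 1.4-solar-mass neutron stars at a few assumed separations; it is only a sketch, since full general relativity takes over close to merger:

# Keplerian orbital frequency for two 1.4-solar-mass objects (Newtonian estimate).
import math

G = 6.674e-11                  # m^3 kg^-1 s^-2
M_total = 2 * 1.4 * 1.989e30   # kg, two typical neutron stars
for sep_km in (300, 100, 30):
    a = sep_km * 1e3           # centre-to-centre separation in metres
    f_orb = math.sqrt(G * M_total / a**3) / (2 * math.pi)   # Kepler's third law
    print(f"{sep_km} km apart: about {f_orb:.0f} orbits per second")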

How do we know this isn’t science fiction?

We know, if we believe Einstein’s theory of gravity (and I’ll give you a very good reason to believe in it in just a moment.) Einstein’s theory predicts that such a rapidly spinning, large-mass dumbbell formed by two orbiting compact objects will produce a telltale pattern of ripples in space itself — gravitational waves. That pattern is both complicated and precisely predicted. In the case of black holes, the predictions go right up to and past the moment of merger, to the ringing of the larger black hole that forms in the merger. In the case of neutron stars, the instants just before, during and after the merger are more complex and we can’t yet be confident we understand them, but during tens of seconds before the merger Einstein’s theory is very precise about what to expect. The theory further predicts how those ripples will cross the vast distances from where they were created to the location of the Earth, and how they will appear in the LIGO/VIRGO network of three gravitational wave detectors. The prediction of what to expect at LIGO/VIRGO thus involves not just one prediction but many: the theory is used to predict the existence and properties of black holes and of neutron stars, the detailed features of their mergers, the precise patterns of the resulting gravitational waves, and how those gravitational waves cross space. That LIGO/VIRGO have detected the telltale patterns of these gravitational waves, and that these wave patterns agree with Einstein’s theory in every detail, is the strongest evidence ever obtained that there is nothing wrong with Einstein’s theory when used in these combined contexts.  That then in turn gives us confidence that our interpretation of the LIGO/VIRGO results is correct, confirming that black holes and neutron stars really exist and really merge. (Notice the reasoning is slightly circular… but that’s how scientific knowledge proceeds, as a set of detailed consistency checks that gradually and eventually become so tightly interconnected as to be almost impossible to unwind.  Scientific reasoning is not deductive; it is inductive.  We do it not because it is logically ironclad but because it works so incredibly well — as witnessed by the computer, and its screen, that I’m using to write this, and the wired and wireless internet and computer disk that will be used to transmit and store it.)

THE SIGNIFICANCE(S) OF YESTERDAY’S ANNOUNCEMENT OF A NEUTRON STAR MERGER

What makes it difficult to explain the significance of yesterday’s announcement is that it consists of many important results piled up together, rather than a simple takeaway that can be reduced to a single soundbite. (That was also true of the black hole mergers announcement back in 2016, which is why I wrote a long post about it.)

So here is a list of important things we learned.  No one of them, by itself, is earth-shattering, but each one is profound, and taken together they form a major event in scientific history.

First confirmed observation of a merger of two neutron stars: We’ve known these mergers must occur, but there’s nothing like being sure. And since these things are too far away and too small to see in a telescope, the only way to be sure these mergers occur, and to learn more details about them, is with gravitational waves.  We expect to see many more of these mergers in coming years as gravitational wave astronomy increases in its sensitivity, and we will learn more and more about them.

New information about the properties of neutron stars: Neutron stars were proposed almost a hundred years ago and were confirmed to exist in the 60’s and 70’s.  But their precise details aren’t known; we believe they are like a giant atomic nucleus, but they’re so vastly larger than ordinary atomic nuclei that we can’t be sure we understand all of their internal properties, and there are debates in the scientific community that can’t be easily answered… until, perhaps, now.

From the detailed pattern of the gravitational waves of this one neutron star merger, scientists have already learned two things. First, we confirm that Einstein’s theory correctly predicts the basic pattern of gravitational waves from orbiting neutron stars, as it does for orbiting and merging black holes. Unlike black holes, however, there are more questions about what happens to neutron stars when they merge. The jury is still out on what happened to this pair after they merged — did they form a neutron star, an unstable neutron star that, slowing its spin, eventually collapsed into a black hole, or a black hole straightaway?

But something important was already learned about the internal properties of neutron stars. The stresses of being whipped around at such incredible speeds would tear you and me apart, and would even tear the Earth apart. We know neutron stars are much tougher than ordinary rock, but how much more? If they were too flimsy, they’d have broken apart at some point during LIGO/VIRGO’s observations, and the simple pattern of gravitational waves that was expected would have suddenly become much more complicated. That didn’t happen until perhaps just before the merger.   So scientists can use the simplicity of the pattern of gravitational waves to infer some new things about how stiff and strong neutron stars are.  More mergers will improve our understanding.  Again, there is no other simple way to obtain this information.

First visual observation of an event that produces both immense gravitational waves and bright electromagnetic waves: Black hole mergers aren’t expected to create a brilliant light display, because, as I mentioned above, they’re more like open doors to an invisible playground than they are like rocks, so they merge rather quietly, without a big bright and hot smash-up.  But neutron stars are big balls of stuff, and so the smash-up can indeed create lots of heat and light of all sorts, just as you might naively expect.  By “light” I mean not just visible light but all forms of electromagnetic waves, at all wavelengths (and therefore at all frequencies.)  Scientists divide up the range of electromagnetic waves into categories. These categories are radio waves, microwaves, infrared light, visible light, ultraviolet light, X-rays, and gamma rays, listed from lowest frequency and largest wavelength to highest frequency and smallest wavelength.  (Note that these categories and the dividing lines between them are completely arbitrary, but the divisions are useful for various scientific purposes.  The only fundamental difference between yellow light, a radio wave, and a gamma ray is the wavelength and frequency; otherwise they’re exactly the same type of thing, a wave in the electric and magnetic fields.)

So if and when two neutron stars merge, we expect both gravitational waves and electromagnetic waves, the latter of many different frequencies created by many different effects that can arise when two huge balls of neutrons collide.  But just because we expect them doesn’t mean they’re easy to see.  These mergers are pretty rare — perhaps one every hundred thousand years in each big galaxy like our own — so the ones we find using LIGO/VIRGO will generally be very far away.  If the light show is too dim, none of our telescopes will be able to see it.

But this light show was plenty bright.  Gamma ray detectors out in space detected it instantly, confirming that the gravitational waves from the two neutron stars led to a collision and merger that produced very high frequency light.  Already, that’s a first.  It’s as though one had seen lightning for years but never heard thunder; or as though one had observed the waves from hurricanes for years but never observed one in the sky.  Seeing both allows us a whole new set of perspectives; one plus one is often much more than two.

Over time — hours and days — effects were seen in visible light, ultraviolet light, infrared light, X-rays and radio waves.  Some were seen earlier than others, which itself is a story, but each one contributes to our understanding of what these mergers are actually like.

Confirmation of the best guess concerning the origin of “short” gamma ray bursts:  For many years, bursts of gamma rays have been observed in the sky.  Among them, there seems to be a class of bursts that are shorter than most, typically lasting just a couple of seconds.  They come from all across the sky, indicating that they come from distant intergalactic space, presumably from distant galaxies.  Among other explanations, the most popular hypothesis concerning these short gamma-ray bursts has been that they come from merging neutron stars.  The only way to confirm this hypothesis is with the observation of the gravitational waves from such a merger.  That test has now been passed; it appears that the hypothesis is correct.  That in turn means that we have, for the first time, both a good explanation of these short gamma ray bursts and, because we know how often we observe these bursts, a good estimate as to how often neutron stars merge in the universe.

First distance measurement to a source using both a gravitational wave measure and a redshift in electromagnetic waves, allowing a new calibration of the distance scale of the universe and of its expansion rate:  The pattern over time of the gravitational waves from a merger of two black holes or neutron stars is complex enough to reveal many things about the merging objects, including a rough estimate of their masses and the orientation of the spinning pair relative to the Earth.  The overall strength of the waves, combined with the knowledge of the masses, reveals how far the pair is from the Earth.  That by itself is nice, but the real win comes when the discovery of the object using visible light, or in fact any light with frequency below gamma-rays, can be made.  In this case, the galaxy that contains the neutron stars can be determined.

Once we know the host galaxy, we can do something really important.  We can, by looking at the starlight, determine how rapidly the galaxy is moving away from us.  For distant galaxies, the speed at which the galaxy recedes should be related to its distance because the universe is expanding.

How rapidly the universe is expanding has been recently measured with remarkable precision, but the problem is that there are two different methods for making the measurement, and they disagree.   This disagreement is one of the most important problems for our understanding of the universe.  Maybe one of the measurement methods is flawed, or maybe — and this would be much more interesting — the universe simply doesn’t behave the way we think it does.

What gravitational waves do is give us a third method: the gravitational waves directly provide the distance to the galaxy, and the electromagnetic waves directly provide the speed of recession.  There is no other way to make this type of joint measurement directly for distant galaxies.  The method is not accurate enough to be useful in just one merger, but once dozens of mergers have been observed, the average result will provide important new information about the universe’s expansion.  When combined with the other methods, it may help resolve this all-important puzzle.

Best test so far of Einstein’s prediction that the speed of light and the speed of gravitational waves are identical: Since gamma rays from the merger and the peak of the gravitational waves arrived within two seconds of one another after traveling 130 million years — that is, about 5 thousand million million seconds — we can say that the speed of light and the speed of gravitational waves are both equal to the cosmic speed limit to within one part in 2 thousand million million.  Such a precise test requires the combination of gravitational wave and gamma ray observations.
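The arithmetic behind that bound, sketched with round numbers:

# A two-second arrival offset after a ~130-million-year journey constrains the speeds.
seconds_per_year = 3.156e7
travel_time_s = 130e6 * seconds_per_year          # ~4e15 seconds of travel
delta_t_s = 2.0                                   # gamma-ray vs gravitational-wave arrival gap
fractional_difference = delta_t_s / travel_time_s
print(f"speeds equal to about one part in {1/fractional_difference:.0e}")   # ~2e15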

Efficient production of heavy elements confirmed:  It’s long been said that we are star-stuff, or stardust, and it’s been clear for a long time that it’s true.  But there’s been a puzzle when one looks into the details.  While it’s known that all the chemical elements from hydrogen up to iron are formed inside of stars, and can be blasted into space in supernova explosions to drift around and eventually form planets, moons, and humans, it hasn’t been quite as clear how the other elements with heavier atoms — atoms such as iodine, cesium, gold, lead, bismuth, uranium and so on — predominantly formed.  Yes they can be formed in supernovas, but not so easily; and there seem to be more atoms of heavy elements around the universe than supernovas can explain.  There are many supernovas in the history of the universe, but the efficiency for producing heavy chemical elements is just too low.

It was proposed some time ago that the mergers of neutron stars might be a suitable place to produce these heavy elements.  Even though these mergers are rare, they might be much more efficient, because the nuclei of heavy elements contain lots of neutrons and, not surprisingly, a collision of two neutron stars would produce lots of neutrons in its debris, suitable perhaps for making these nuclei.   A key indication that this is going on would be the following: if a neutron star merger could be identified using gravitational waves, and if its location could be determined using telescopes, then one would observe a pattern of light that would be characteristic of what is now called a “kilonova” explosion.   Warning: I don’t yet know much about kilonovas and I may be leaving out important details. A kilonova is powered by the process of forming heavy elements; most of the nuclei produced are initially radioactive — i.e., unstable — and they break down by emitting high energy particles, including the particles of light (called photons) which are in the gamma ray and X-ray categories.  The resulting characteristic glow would be expected to have a pattern of a certain type: it would be initially bright but would dim rapidly in visible light, with a long afterglow in infrared light.  The reasons for this are complex, so let me set them aside for now.  The important point is that this pattern was observed, confirming that a kilonova of this type occurred, and thus that, in this neutron star merger, enormous amounts of heavy elements were indeed produced.  So we now have a lot of evidence, for the first time, that almost all the heavy chemical elements on and around our planet were formed in neutron star mergers.  Again, we could not know this if we did not know that this was a neutron star merger, and that information comes only from the gravitational wave observation.

MISCELLANEOUS QUESTIONS

Did the merger of these two neutron stars result in a new black hole, a larger neutron star, or an unstable rapidly spinning neutron star that later collapsed into a black hole?

We don’t yet know, and maybe we won’t know.  Some scientists involved appear to be leaning toward the possibility that a black hole was formed, but others seem to say the jury is out.  I’m not sure what additional information can be obtained over time about this.

If the two neutron stars formed a black hole, why was there a kilonova?  Why wasn’t everything sucked into the black hole?

Black holes aren’t vacuum cleaners; they pull things in via gravity just the same way that the Earth and Sun do, and don’t suck things in some unusual way.  The only crucial thing about a black hole is that once you go in you can’t come out.  But just as when trying to avoid hitting the Earth or Sun, you can avoid falling in if you orbit fast enough or if you’re flung outward before you reach the edge.

The point in a neutron star merger is that the forces at the moment of merger are so intense that one or both neutron stars are partially ripped apart.  The material that is thrown outward in all directions, at an immense speed, somehow creates the bright, hot flash of gamma rays and eventually the kilonova glow from the newly formed atomic nuclei.  Those details I don’t yet understand, but I know they have been carefully studied both with approximate equations and in computer simulations such as this one and this one.  However, the accuracy of the simulations can only be confirmed through the detailed studies of a merger, such as the one just announced.  It seems, from the data we’ve seen, that the simulations did a fairly good job.  I’m sure they will be improved once they are compared with the recent data.


Filed under: Astronomy, Gravitational Waves Tagged: black holes, Gravitational Waves, LIGO, neutron stars

by Matt Strassler at October 17, 2017 04:03 PM

October 16, 2017

Sean Carroll - Preposterous Universe

Standard Sirens

Everyone is rightly excited about the latest gravitational-wave discovery. The LIGO observatory, recently joined by its European partner VIRGO, had previously seen gravitational waves from coalescing black holes. Which is super-awesome, but also a bit lonely — black holes are black, so we detect the gravitational waves and little else. Since our current gravitational-wave observatories aren’t very good at pinpointing source locations on the sky, we’ve been completely unable to say which galaxy, for example, the events originated in.

This has changed now, as we’ve launched the era of “multi-messenger astronomy,” detecting both gravitational and electromagnetic radiation from a single source. The event was the merger of two neutron stars, rather than black holes, and all that matter coming together in a giant conflagration lit up the sky in a large number of wavelengths simultaneously.

Look at all those different observatories, and all those wavelengths of electromagnetic radiation! Radio, infrared, optical, ultraviolet, X-ray, and gamma-ray — soup to nuts, astronomically speaking.

A lot of cutting-edge science will come out of this, see e.g. this main science paper. Apparently some folks are very excited by the fact that the event produced an amount of gold equal to several times the mass of the Earth. But it’s my blog, so let me highlight the aspect of personal relevance to me: using “standard sirens” to measure the expansion of the universe.

We’re already pretty good at measuring the expansion of the universe, using something called the cosmic distance ladder. You build up distance measures step by step, determining the distance to nearby stars, then to more distant clusters, and so forth. Works well, but of course is subject to accumulated errors along the way. This new kind of gravitational-wave observation is something else entirely, allowing us to completely jump over the distance ladder and obtain an independent measurement of the distance to cosmological objects. See this LIGO explainer.

The simultaneous observation of gravitational and electromagnetic waves is crucial to this idea. You’re trying to compare two things: the distance to an object, and the apparent velocity with which it is moving away from us. Usually velocity is the easy part: you measure the redshift of light, which is easy to do when you have an electromagnetic spectrum of an object. But with gravitational waves alone, you can’t do it — there isn’t enough structure in the spectrum to measure a redshift. That’s why the exploding neutron stars were so crucial; in this event, GW170817, we can for the first time determine the precise redshift of a distant gravitational-wave source.

Measuring the distance is the tricky part, and this is where gravitational waves offer a new technique. The favorite conventional strategy is to identify “standard candles” — objects for which you have a reason to believe you know their intrinsic brightness, so that by comparing to the brightness you actually observe you can figure out the distance. To discover the acceleration of the universe, for example,  astronomers used Type Ia supernovae as standard candles.

Gravitational waves don’t quite give you standard candles; every one will generally have a different intrinsic gravitational “luminosity” (the amount of energy emitted). But by looking at the precise way in which the source evolves — the characteristic “chirp” waveform in gravitational waves as the two objects rapidly spiral together — we can work out precisely what that total luminosity actually is. Here’s the chirp for GW170817, compared to the other sources we’ve discovered — much more data, almost a full minute!

So we have both distance and redshift, without using the conventional distance ladder at all! This is important for all sorts of reasons. An independent way of getting at cosmic distances will allow us to measure properties of the dark energy, for example. You might also have heard that there is a discrepancy between different ways of measuring the Hubble constant, which either means someone is making a tiny mistake or there is something dramatically wrong with the way we think about the universe. Having an independent check will be crucial in sorting this out. Just from this one event, we are able to say that the Hubble constant is 70 kilometers per second per megaparsec, albeit with large error bars (+12, -8 km/s/Mpc). That will get much better as we collect more events.
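The heart of the standard-siren measurement really is a single division. Here it is with illustrative round numbers in the ballpark of what was reported for this event (a siren distance of roughly 43 Mpc and a recession velocity, corrected for local motions, of roughly 3000 km/s); this is a sketch, not the collaboration's actual analysis, which folds in many sources of uncertainty:

# Standard-siren Hubble constant from one event (illustrative round numbers).
recession_velocity = 3000.0   # km/s, from the host galaxy's electromagnetic redshift
siren_distance = 43.0         # Mpc, from the gravitational-wave amplitude
H0 = recession_velocity / siren_distance
print(f"H0 ~ {H0:.0f} km/s/Mpc")   # ~70, with large per-event error bars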

So here is my (infinitesimally tiny) role in this exciting story. The idea of using gravitational-wave sources as standard sirens was put forward by Bernard Schutz all the way back in 1986. But it’s been developed substantially since then, especially by my friends Daniel Holz and Scott Hughes. Years ago Daniel told me about the idea, as he and Scott were writing one of the early papers. My immediate response was “Well, you have to call these things `standard sirens.'” And so a useful label was born.

Sadly for my share of the glory, my Caltech colleague Sterl Phinney also suggested the name simultaneously, as the acknowledgments to the paper testify. That’s okay; when one’s contribution is this extremely small, sharing it doesn’t seem so bad.

By contrast, the glory attaching to the physicists and astronomers who pulled off this observation, and the many others who have contributed to the theoretical understanding behind it, is substantial indeed. Congratulations to all of the hard-working people who have truly opened a new window on how we look at our universe.

by Sean Carroll at October 16, 2017 03:52 PM

Matt Strassler - Of Particular Significance

A Scientific Breakthrough! Combining Gravitational and Electromagnetic Waves

Gravitational waves are now the most important new tool in the astronomer’s toolbox.  Already they’ve been used to confirm that large black holes — with masses ten or more times that of the Sun — and mergers of these large black holes to form even larger ones, are not uncommon in the universe.   Today it goes a big step further.

It’s long been known that neutron stars, remnants of collapsed stars that have exploded as supernovas, are common in the universe.  And it’s been known almost as long that sometimes neutron stars travel in pairs.  (In fact that’s how gravitational waves were first discovered, indirectly, back in the 1970s.)  Stars often form in pairs, and sometimes both stars explode as supernovas, leaving their neutron star relics in orbit around one another.  Neutron stars are small — just ten or so kilometers (miles) across.  According to Einstein’s theory of gravity, a pair of stars should gradually lose energy by emitting gravitational waves into space, and slowly but surely the two objects should spiral in on one another.   Eventually, after many millions or even billions of years, they collide and merge into a larger neutron star, or into a black hole.  This collision does two things.

  1. It makes some kind of brilliant flash of light — electromagnetic waves — whose details are only guessed at.  Some of those electromagnetic waves will be in the form of visible light, while much of it will be in invisible forms, such as gamma rays.
  2. It makes gravitational waves, whose details are easier to calculate and which are therefore distinctive, but couldn’t have been detected until LIGO and VIRGO started taking data, LIGO over the last couple of years, VIRGO over the last couple of months.

It’s possible that we’ve seen the light from neutron star mergers before, but no one could be sure.  Wouldn’t it be great, then, if we could see gravitational waves AND electromagnetic waves from a neutron star merger?  It would be a little like seeing the flash and hearing the sound from fireworks — seeing and hearing is better than either one separately, with each one clarifying the other.  (Caution: scientists are often speaking as if detecting gravitational waves is like “hearing”.  This is only an analogy, and a vague one!  It’s not at all the same as acoustic waves that we can hear with our ears, for many reasons… so please don’t take it too literally.)  If we could do both, we could learn about neutron stars and their properties in an entirely new way.

Today, we learned that this has happened.  LIGO, with the world’s first two gravitational observatories, detected the waves from two merging neutron stars, 130 million light years from Earth, on August 17th.  (Neutron star mergers last much longer than black hole mergers, so the two are easy to distinguish; and this one was so close, relatively speaking, that it was seen for a long while.)  VIRGO, with the third detector, allows scientists to triangulate and determine roughly where mergers have occurred.  They saw only a very weak signal, but that was extremely important, because it told the scientists that the merger must have occurred in a small region of the sky where VIRGO has a relative blind spot.  That told scientists where to look.

The merger was detected for more than a full minute… to be compared with black holes whose mergers can be detected for less than a second.  It’s not exactly clear yet what happened at the end, however!  Did the merged neutron stars form a black hole or a neutron star?  The jury is out.

At almost exactly the moment at which the gravitational waves reached their peak, a blast of gamma rays — electromagnetic waves of very high frequencies — was detected by a different scientific team, the one from FERMI. FERMI detects gamma rays from the distant universe every day, and a two-second gamma-ray-burst is not unusual.  And INTEGRAL, another gamma ray experiment, also detected it.   The teams communicated within minutes.   The FERMI and INTEGRAL gamma ray detectors can only indicate the rough region of the sky from which their gamma rays originate, and LIGO/VIRGO together also only give a rough region.  But the scientists saw that those regions overlapped.  The evidence was clear.  And with that, astronomy entered a new, highly anticipated phase.

Already this was a huge discovery.  Brief gamma-ray bursts have been a mystery for years.  One of the best guesses as to their origin has been neutron star mergers.  Now the mystery is solved; that guess is apparently correct. (Or is it?  Probably, but the gamma ray discovery is surprisingly dim, given how close it is.  So there are still questions to ask.)

Also confirmed by the fact that these signals arrived within a couple of seconds of one another, after traveling for over 100 million years from the same source, is that, indeed, the speed of light and the speed of gravitational waves are exactly the same — both of them equal to the cosmic speed limit, just as Einstein’s theory of gravity predicts.

Next, these teams quickly told their astronomer friends to train their telescopes in the general area of the source. Dozens of telescopes, from every continent and from space, and looking for electromagnetic waves at a huge range of frequencies, pointed in that rough direction and scanned for anything unusual.  (A big challenge: the object was near the Sun in the sky, so it could be viewed in darkness only for an hour each night!) Light was detected!  At all frequencies!  The object was very bright, making it easy to find the galaxy in which the merger took place.  The brilliant glow was seen in gamma rays, ultraviolet light, infrared light, X-rays, and radio.  (Neutrinos, particles that can serve as another way to observe distant explosions, were not detected this time.)

And with so much information, so much can be learned!

Most important, perhaps, is this: from the pattern of the spectrum of light, the conjecture seems to be confirmed that the mergers of neutron stars are important sources, perhaps the dominant one, for many of the heavy chemical elements — iodine, iridium, cesium, gold, platinum, and so on — that are forged in the intense heat of these collisions.  It used to be thought that the same supernovas that form neutron stars in the first place were the most likely source.  But now it seems that this second stage of neutron star life — merger, rather than birth — is just as important.  That’s fascinating, because neutron star mergers are much more rare than the supernovas that form them.  There’s a supernova in our Milky Way galaxy every century or so, but it’s tens of millennia or more between these “kilonovas”, created in neutron star mergers.

If there’s anything disappointing about this news, it’s this: almost everything that was observed by all these different experiments was predicted in advance.  Sometimes it’s more important and useful when some of your predictions fail completely, because then you realize how much you have to learn.  Apparently our understanding of gravity, of neutron stars, and of their mergers, and of all sorts of sources of electromagnetic radiation that are produced in those mergers, is even better than we might have thought. But fortunately there are a few new puzzles.  The X-rays were late; the gamma rays were dim… we’ll hear more about this shortly, as NASA is holding a second news conference.

Some highlights from the second news conference:

  • New information about neutron star interiors, which affects how large they are and therefore how exactly they merge, has been obtained
  • The first ever visual-light image of a gravitational wave source, from the Swope telescope, at the outskirts of a distant galaxy; the galaxy’s center is the blob of light, and the arrow points to the explosion.

  • The theoretical calculations for a kilonova explosion suggest that debris from the blast should rather quickly block the visual light, so the explosion dims quickly in visible light — but infrared light lasts much longer.  The observations by the visible and infrared light telescopes confirm this aspect of the theory; and you can see evidence for that in the picture above, where four days later the bright spot is both much dimmer and much redder than when it was discovered.
  • Estimate: the total mass of the gold and platinum produced in this explosion is vastly larger than the mass of the Earth.
  • Estimate: these neutron stars were formed about 10 or so billion years ago.  They’ve been orbiting each other for most of the universe’s history, and ended their lives just 130 million years ago, creating the blast we’ve so recently detected.
  • Big Puzzle: all of the previous gamma-ray bursts seen up to now have always shone in ultraviolet light and X-rays as well as gamma rays.   But X-rays didn’t show up this time, at least not initially.  This was a big surprise.  It took 9 days for the Chandra telescope to observe X-rays, too faint for any other X-ray telescope.  Does this mean that the two neutron stars created a black hole, which then created a jet of matter that points not quite directly at us but off-axis, and shines by illuminating the matter in interstellar space?  This had been suggested as a possibility twenty years ago, but this is the first time there’s been any evidence for it.
  • One more surprise: it took 16 days for radio waves from the source to be discovered, with the Very Large Array, the most powerful existing radio telescope.  The radio emission has been growing brighter since then!  As with the X-rays, this seems also to support the idea of an off-axis jet.
  • Nothing quite like this gamma-ray burst has been seen — or rather, recognized — before.  When a gamma ray burst doesn’t have an X-ray component showing up right away, it simply looks odd and a bit mysterious.  It’s harder to observe than most bursts, because without a jet pointing right at us, its afterglow fades quickly.  Moreover, a jet pointing at us is bright, so it blinds us to the more detailed and subtle features of the kilonova.  But this time, LIGO/VIRGO told scientists that “Yes, this is a neutron star merger”, leading to detailed study from all electromagnetic frequencies, including patient study over many days of the X-rays and radio.  In other cases those observations would have stopped after just a short time, and the whole story couldn’t have been properly interpreted.


Filed under: Astronomy, Gravitational Waves

by Matt Strassler at October 16, 2017 03:10 PM

October 13, 2017

Sean Carroll - Preposterous Universe

Mind-Blowing Quantum Mechanics

I’m trying to climb out from underneath a large pile of looming (and missed) deadlines, and in the process I’m hoping to ramp the real blogging back up. In the meantime, here are a couple of videos to tide you over.

First, an appearance a few weeks ago on Joe Rogan’s podcast. Rogan is a professional comedian and mixed-martial arts commentator, but has built a great audience for his wide-ranging podcast series. One of the things that makes him a good interviewer is his sincere delight in the material, as evidenced here by noting repeatedly that his mind had been blown. We talked for over two and a half hours, covering cosmology and quantum mechanics but also some bits about AI and pop culture.

And here’s a more straightforward lecture, this time at King’s College in London. The topic was “Extracting the Universe from the Wave Function,” which I’ve used for a few talks that ended up being pretty different in execution. This one was aimed at undergraduate physics students, some of whom hadn’t even had quantum mechanics. So the first half is a gentle introduction to many-worlds theory and why it’s the best version of quantum mechanics, and the second half tries to explain our recent efforts to emerge space itself out of quantum entanglement.

I was invited to King’s by Eugene Lim, one of my former grad students and now an extremely productive faculty member in his own right. It’s always good to see your kids grow up to do great things!

by Sean Carroll at October 13, 2017 03:01 PM

October 09, 2017

Alexey Petrov - Symmetry factor

Non-linear teaching

I wanted to share some ideas about a teaching method I am trying to develop and implement this semester. Please let me know if you’ve heard of someone doing something similar.

This semester I am teaching our undergraduate mechanics class. This is the first time I am teaching it, so I started looking into the possibility of shaking things up and maybe applying some new method of teaching. And there are plenty offered: flipped classroom, peer instruction, Just-in-Time teaching, etc.  They all look to “move away from the inefficient old model” where the professor is lecturing and students are taking notes. I have things to say about that, but not in this post. It suffices to say that most of those approaches are essentially trying to make students work (both with the lecturer and their peers) in class and outside of it. At the same time those methods attempt to “compartmentalize teaching”, i.e. make large classes “smaller” by bringing up each individual student’s contribution to class activities (by using “clickers”, small discussion groups, etc). For several reasons those approaches did not fit my goal this semester.

Our Classical Mechanics class is a gateway class for our physics majors. It is the first class they take after they are done with general physics lectures. So the students are already familiar with the (simpler version of the) material they are going to be taught. The goal of this class is to start molding physicists out of students: they learn to simplify problems so physics methods can be properly applied (that’s how “a Ford Mustang improperly parked at the top of the icy hill slides down…” turns into “a block slides down the incline…”), learn to always derive the final formula before plugging in numbers, look at the asymptotics of their solutions as a way to see if the solution makes sense, and many other wonderful things.

So with all that I started doing something I’d like to call non-linear teaching. The gist of it is as follows. I give a lecture (and don’t get me wrong, I do make my students talk and work: I ask questions, we do “duels” (students argue different sides of a question), etc — all of that can be done efficiently in a class of 20 students). But instead of one homework with 3-4 problems per week I have two types of homework assignments for them: short homeworks and projects.

Short homework assignments are single-problem assignments given after each class that must be done by the next class. They are designed such that a student needs to re-derive material that we discussed previously in class, with a small new twist added. For example, in the block-down-the-incline problem discussed in class I ask them to choose coordinate axes in a different way and prove that the result is independent of the choice of the coordinate system. Or ask them to find at which angle one should throw a stone to get the maximal possible range (including air resistance), etc.  This way, instead of doing an assignment at the last minute at the end of the week, students have to work out what they just learned in class every day! More importantly, I get to change how I teach. Depending on how they did on the previous short homework, I adjust the material (both speed and volume) discussed in class. I also design examples for the future sections in such a way that I can repeat parts of the topic that was hard for the students previously. Hence, instead of a linear propagation of the course, we are moving along something akin to helical motion, returning and spending more time on topics that students find more difficult. That’s why my teaching is “non-linear”.
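As an illustration of the kind of short homework just described, the stone-throwing problem can be attacked numerically. The following is only a toy sketch (linear drag, crude Euler integration, made-up launch speed and drag constant), not the intended solution, but it shows that air resistance pushes the optimal launch angle below 45 degrees:

# Toy version of the "maximum range with air resistance" exercise.
import math

def horizontal_range(angle_deg, v0=20.0, k=0.5, g=9.81, dt=1e-3):
    """Integrate dv/dt = -g*yhat - k*v with Euler steps until the projectile lands."""
    theta = math.radians(angle_deg)
    x, y = 0.0, 0.0
    vx, vy = v0 * math.cos(theta), v0 * math.sin(theta)
    while not (y <= 0.0 and vy < 0.0):
        x += vx * dt
        y += vy * dt
        vx -= k * vx * dt          # horizontal drag deceleration
        vy -= (g + k * vy) * dt    # gravity plus vertical drag
    return x

best_angle = max(range(1, 90), key=horizontal_range)
print("optimal launch angle is about", best_angle, "degrees")   # below 45 once k > 0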

Project homework assignments are designed to develop understanding of how topics in a given chapter relate to each other. There are as many project assignments as there are chapters. Students get two weeks to complete them.

Overall, students solve exactly the same number of problems they would in a normal lecture class. Yet, those problems are scheduled in a different way. In my way, students are forced to learn by constantly re-working what was just discussed in a lecture. And for me, I can quickly react (by adjusting lecture material and speed) using constant feedback I get from students in the form of short homeworks. Win-win!

I will do benchmarking at the end of the class by comparing my class performance to aggregate data from previous years. I’ll report on it later. But for now I would be interested to hear your comments!


by apetrov at October 09, 2017 09:45 PM

October 06, 2017

John Baez - Azimuth

Vladimir Voevodsky, 1966 — 2017



Vladimir Voevodsky died last week. He won the Fields Medal in 2002 for proving the Milnor conjecture in a branch of algebra known as algebraic K-theory. He continued to work on this subject until he helped prove the more general Bloch–Kato conjecture in 2010.

Proving these results—which are too technical to easily describe to nonmathematicians!—required him to develop a dream of Grothendieck: the theory of motives. Very roughly, this is a way of taking the space of solutions of some polynomial equations and chopping it apart into building blocks. But this process of ‘chopping’ and also these building blocks, called ‘motives’, are very abstract—nothing easy to visualize.

It’s a bit like how a proton is made of quarks. You never actually see a quark in isolation, so you have to think very hard to realize they are there at all. But once you know this, a lot of things become clear.

This is wonderful, profound mathematics. But in the process of proving the Bloch-Kato conjecture, Voevodsky became tired of this stuff. He wanted to do something more useful… and more ambitious. He later said:

It was very difficult. In fact, it was 10 years of technical work on a topic that did not interest me during the last 5 of these 10 years. Everything was done only through willpower.

Since the autumn of 1997, I already understood that my main contribution to the theory of motives and motivic cohomology was made. Since that time I have been very consciously and actively searching. I was looking for a topic that I would deal with after I fulfilled my obligations related to the Bloch-Kato hypothesis.

I quickly realized that if I wanted to do something really serious, then I should make the most of my accumulated knowledge and skills in mathematics. On the other hand, seeing the trends in the development of mathematics as a science, I realized that the time is coming when the proof of yet another conjecture won’t have much of an effect. I realized that mathematics is on the verge of a crisis, or rather, two crises.

The first is connected with the separation of “pure” and applied mathematics. It is clear that sooner or later there will be a question about why society should pay money to people who are engaged in things that do not have any practical applications.

The second, less obvious, is connected with the complication of pure mathematics, which leads to the fact that, sooner or later, the articles will become too complicated for detailed verification and the process of accumulating undetected errors will begin. And since mathematics is a very deep science, in the sense that the results of one article usually depend on the results of many and many previous articles, this accumulation of errors for mathematics is very dangerous.

So, I decided, you need to try to do something that will help prevent these crises. For the first crisis, this meant that it was necessary to find an applied problem that required for its solution the methods of pure mathematics developed in recent years or even decades.

He looked for such a problem. He studied biology and found an interesting candidate. He worked on it very hard, but then decided he’d gone down a wrong path:

Since childhood I have been interested in natural sciences (physics, chemistry, biology), as well as in the theory of computer languages, and since 1997, I have read a lot on these topics, and even took several student and post-graduate courses. In fact, I “updated” and deepened to a very large extent the knowledge I already had. All this time I was looking for recognized open problems that would be of interest to me and to which I could apply modern mathematics.

As a result, I chose (incorrectly, as I now understand) the problem of recovering the history of populations from their modern genetic composition. I worked on this problem for a total of about two years, and in the end, already by 2009, I realized that what I was inventing was useless. In my life, so far, it was perhaps my greatest scientific failure. A lot of work was invested in the project, which completely failed. Of course, there was some benefit—I learned a lot of probability theory, which I had known badly, and also learned a lot about demography and demographic history.

But he bounced back! He came up with a new approach to the foundations of mathematics, and helped organize a team at the Institute for Advanced Study in Princeton to develop it further. This approach is now called homotopy type theory or univalent foundations. It’s fundamentally different from set theory. It treats the fundamental concept of equality in a brand new way! And it’s designed to be done with the help of computers.

It seems he started down this new road when the mathematician Carlos Simpson pointed out a serious mistake in a paper he’d written.

I think it was at this moment that I largely stopped doing what is called “curiosity-driven research” and started to think seriously about the future. I didn’t have the tools to explore the areas where curiosity was leading me and the areas that I considered to be of value and of interest and of beauty.

So I started to look into what I could do to create such tools. And it soon became clear that the only long-term solution was somehow to make it possible for me to use computers to verify my abstract, logical, and mathematical constructions. The software for doing this has been in development since the sixties. At the time, when I started to look for a practical proof assistant around 2000, I could not find any. There were several groups developing such systems, but none of them was in any way appropriate for the kind of mathematics for which I needed a system.

When I first started to explore the possibility, computer proof verification was almost a forbidden subject among mathematicians. A conversation about the need for computer proof assistants would invariably drift to Gödel’s incompleteness theorem (which has nothing to do with the actual problem) or to one or two cases of verification of already existing proofs, which were used only to demonstrate how impractical the whole idea was. Among the very few mathematicians who persisted in trying to advance the field of computer verification in mathematics during this time were Tom Hales and Carlos Simpson. Today, only a few years later, computer verification of proofs and of mathematical reasoning in general looks completely practical to many people who work on univalent foundations and homotopy type theory.

The primary challenge that needed to be addressed was that the foundations of mathematics were unprepared for the requirements of the task. Formulating mathematical reasoning in a language precise enough for a computer to follow meant using a foundational system of mathematics not as a standard of consistency to establish a few fundamental theorems, but as a tool that can be employed in ­everyday mathematical work. There were two main problems with the existing foundational systems, which made them inadequate. Firstly, existing foundations of mathematics were based on the languages of predicate logic and languages of this class are too limited. Secondly, existing foundations could not be used to directly express statements about such objects as, for example, the ones in my work on 2-theories.

Still, it is extremely difficult to accept that mathematics is in need of a completely new foundation. Even many of the people who are directly connected with the advances in homotopy type theory are struggling with this idea. There is a good reason: the existing foundations of mathematics—ZFC and category theory—have been very successful. Overcoming the appeal of category theory as a candidate for new foundations of mathematics was for me personally the most challenging.

Homotopy type theory is now a vital and exciting area of mathematics. It’s far from done, and to make it live up to Voevodsky’s dreams will require brand new ideas—not just incremental improvements, but actual sparks of genius. For some of the open problems, see Mike Shulman’s comment on the n-Category Café, and some replies to that.

I only met him a few times, but as far as I can tell Voevodsky was a completely unpretentious person. You can see that in the picture here.

He was also a very complex person. For example, you might not guess that he took great wildlife photos:

You also might not guess at this side of him:

In 2006-2007 a lot of external and internal events happened to me, after which my point of view on the questions of the “supernatural” changed significantly. What happened to me during these years, perhaps, can be compared most closely to what happened to Karl Jung in 1913-14. Jung called it “confrontation with the unconscious”. I do not know what to call it, but I can describe it in a few words. Remaining more or less normal, apart from the fact that I was trying to discuss what was happening to me with people whom I should not have discussed it with, I had in a few months acquired a very considerable experience of visions, voices, periods when parts of my body did not obey me, and a lot of incredible accidents. The most intense period was in mid-April 2007 when I spent 9 days (7 of them in the Mormon capital of Salt Lake City), never falling asleep for all these days.

Almost from the very beginning, I found that many of these phenomena (voices, visions, various sensory hallucinations), I could control. So I was not scared and did not feel sick, but perceived everything as something very interesting, actively trying to interact with those “beings” in the auditorial, visual and then tactile spaces that appeared around me (by themselves or by invoking them). I must say, probably, to avoid possible speculations on this subject, that I did not use any drugs during this period, tried to eat and sleep a lot, and drank diluted white wine.

Another comment: when I say “beings”, naturally I mean what in modern terminology are called complex hallucinations. The word “beings” emphasizes that these hallucinations themselves “behaved”, possessed a memory independent of my memory, and reacted to attempts at communication. In addition, they were often perceived in concert in various sensory modalities. For example, I played several times with a (hallucinated) ball with a (hallucinated) girl—and I saw this ball, and felt it with my palm when I threw it.

Despite the fact that all this was very interesting, it was very difficult. It happened for several periods, the longest of which lasted from September 2007 to February 2008 without breaks. There were days when I could not read, and days when coordination of movements was broken to such an extent that it was difficult to walk.

I managed to get out of this state due to the fact that I forced myself to start math again. By the middle of spring 2008 I could already function more or less normally and even went to Salt Lake City to look at the places where I wandered, not knowing where I was, in the spring of 2007.

In short, he was a genius akin to Cantor or Grothendieck, at times teetering on the brink of sanity, yet gripped by an immense desire for beauty and clarity, engaging in struggles that consumed his whole soul. From the fires of this volcano, truly original ideas emerge.

This last quote, and the first few quotes, are from some interviews in Russian, done by Roman Mikhailov, which Mike Stay pointed out to me. I used Google Translate and polished the results a bit:

Интервью Владимира Воеводского (часть 1), 1 July 2012. English version via Google Translate: Interview with Vladimir Voevodsky (Part 1).

Интервью Владимира Воеводского (часть 2), 5 July 2012. English version via Google Translate: Interview with Vladimir Voevodsky (Part 2).

The quote about the origins of ‘univalent foundations’ comes from his nice essay here:

• Vladimir Voevodsky, The origins and motivations of univalent foundations, 2014.

There’s also a good obituary of Voevodsky explaining the relation of his work to Grothendieck’s idea in simple terms:

• Institute for Advanced Study, Vladimir Voevodsky 1966–2017, 4 October 2017.

The photograph of Voevodsky is from Andrej Bauer’s website:

• Andrej Bauer, Photos of mathematicians.

To learn homotopy type theory, try this great book:

• Homotopy Type Theory: Univalent Foundations of Mathematics, The Univalent Foundations Program, Institute for Advanced Study.


by John Baez at October 06, 2017 06:41 PM

October 05, 2017

Symmetrybreaking - Fermilab/SLAC

A radio for dark matter

Instead of searching for dark matter particles, a new device will search for dark matter waves.


Researchers are testing a prototype “radio” that could let them listen to the tune of mysterious dark matter particles. 

Dark matter is an invisible substance thought to be five times more prevalent in the universe than regular matter. According to theory, billions of dark matter particles pass through the Earth each second. We don’t notice them because they interact with regular matter only very weakly, through gravity.

So far, researchers have mostly been looking for dark matter particles. But with the dark matter radio, they want to look for dark matter waves.

Direct detection experiments for dark matter particles use large underground detectors. Researchers hope to see signals from dark matter particles colliding with the detector material. However, this only works if dark matter particles are heavy enough to deposit a detectable amount of energy in the collision.

“If dark matter particles were very light, we might have a better chance of detecting them as waves rather than particles,” says Peter Graham, a theoretical physicist at the Kavli Institute for Particle Astrophysics and Cosmology, a joint institute of Stanford University and the Department of Energy’s SLAC National Accelerator Laboratory. “Our device will take the search in that direction.”

The dark matter radio makes use of a bizarre concept of quantum mechanics known as wave-particle duality: Every particle can also behave like a wave. 

Take, for example, the photon: the massless fundamental particle that carries the electromagnetic force. Streams of them make up electromagnetic radiation, or light, which we typically describe as waves—including radio waves. 

The dark matter radio will search for dark matter waves associated with two particular dark matter candidates.  It could find hidden photons—hypothetical cousins of photons with a small mass. Or it could find axions, which scientists think can be produced out of light and transform back into it in the presence of a magnetic field.

“The search for hidden photons will be completely unexplored territory,” says Saptarshi Chaudhuri, a Stanford graduate student on the project. “As for axions, the dark matter radio will close gaps in the searches of existing experiments.”

Intercepting dark matter vibes

A regular radio intercepts radio waves with an antenna and converts them into sound. Which sound you hear depends on the station. A listener chooses a station by adjusting an electric circuit, in which electricity can oscillate with a certain resonant frequency. If the circuit’s resonant frequency matches the station’s frequency, the radio is tuned in and the listener can hear the broadcast.

The dark matter radio works the same way. At its heart is an electric circuit with an adjustable resonant frequency. If the device were tuned to a frequency that matched the frequency of a dark matter particle wave, the circuit would resonate. Scientists could measure the frequency of the resonance, which would reveal the mass of the dark matter particle. 

The idea is to do a frequency sweep by slowly moving through the different frequencies, as if tuning a radio from one end of the dial to the other.
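
For a rough sense of how such tuning works, here is a minimal sketch using the textbook formula for an ideal LC circuit, f0 = 1/(2π√(LC)); the inductance and capacitance values below are illustrative assumptions, not the actual dark matter radio design.

```python
# Minimal sketch of the tuning idea (generic LC-circuit formula, not the
# actual dark matter radio design): the resonant frequency of a simple
# LC circuit is f0 = 1 / (2*pi*sqrt(L*C)), so sweeping the capacitance
# (or inductance) sweeps the frequency the "radio" listens to.

import math

def resonant_frequency_hz(inductance_h, capacitance_f):
    """Resonant frequency of an ideal LC circuit, in hertz."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

L_HENRY = 1e-6  # assumed inductance: 1 microhenry
for c_farad in (1e-9, 1e-12, 1e-15):  # sweep the (assumed) capacitance
    f0 = resonant_frequency_hz(L_HENRY, c_farad)
    print(f"C = {c_farad:.0e} F  ->  f0 = {f0:.3e} Hz")
```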

The electric signal from dark matter waves is expected to be very weak. Therefore, Graham has partnered with a team led by another KIPAC researcher, Kent Irwin. Irwin’s group is developing highly sensitive magnetometers known as superconducting quantum interference devices, or SQUIDs, which they’ll pair with extremely low-noise amplifiers to hunt for potential signals.

In its final design, the dark matter radio will search for particles in a mass range of trillionths to millionths of an electronvolt. (One electronvolt is about a billionth of the mass of a proton.) This is somewhat problematic because this range includes kilohertz to gigahertz frequencies—frequencies used for over-the-air broadcasting. 
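
The link between particle mass and frequency is the standard quantum relation f = mc²/h: a dark matter wave oscillates at a frequency set by its rest-mass energy divided by Planck's constant. The short sketch below applies that relation to the quoted mass range; it is a back-of-the-envelope conversion, not a calculation taken from the experiment itself.

```python
# Rough conversion sketch (not from the article): map a dark matter particle's
# rest-mass energy to the oscillation frequency of its wave via f = m*c^2 / h.
# The quoted mass range (~1e-12 to 1e-6 eV) lands around the radio band.

H_PLANCK_EV_S = 4.135667696e-15  # Planck constant in eV·s

def mass_to_frequency_hz(mass_ev):
    """Oscillation frequency (Hz) of a dark matter wave with the given mass (eV)."""
    return mass_ev / H_PLANCK_EV_S

for mass_ev in (1e-12, 1e-9, 1e-6):
    print(f"{mass_ev:.0e} eV  ->  {mass_to_frequency_hz(mass_ev):.3e} Hz")
```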

“Shielding the radio from unwanted radiation is very important and also quite challenging,” Irwin says. “In fact, we would need a several-yards-thick layer of copper to do so. Fortunately we can achieve the same effect with a thin layer of superconducting metal.”

One advantage of the dark matter radio is that it does not need to be shielded from cosmic rays. Whereas direct detection searches for dark matter particles must operate deep underground to block out particles falling from space, the dark matter radio can operate in a university basement.

The researchers are now testing a small-scale prototype at Stanford that will scan a relatively narrow frequency range. They plan on eventually operating two independent, full-size instruments at Stanford and SLAC.

“This is exciting new science,” says Arran Phipps, a KIPAC postdoc on the project. “It’s great that we get to try out a new detection concept with a device that is relatively low-budget and low-risk.” 

The dark matter disc jockeys are taking the first steps now and plan to conduct their dark matter searches over the next few years. Stay tuned for future results.

by Manuel Gnida at October 05, 2017 01:23 PM

October 03, 2017

Jon Butterworth - Life and Physics

Symmetrybreaking - Fermilab/SLAC

Nobel recognizes gravitational wave discovery

Scientists Rainer Weiss, Kip Thorne and Barry Barish won the 2017 Nobel Prize in Physics for their roles in creating the LIGO experiment.

Illustration depicting two black holes circling one another and producing gravitational waves

Three scientists who made essential contributions to the LIGO collaboration have been awarded the 2017 Nobel Prize in Physics.

Rainer Weiss will share the prize with Kip Thorne and Barry Barish for their roles in the discovery of gravitational waves, ripples in space-time predicted by Albert Einstein. Weiss and Thorne conceived of LIGO, and Barish is credited with reviving the struggling experiment and making it happen.

“I view this more as a thing that recognizes the work of about 1000 people,” Weiss said during a Q&A after the announcement this morning. “It’s really a dedicated effort that has been going on, I hate to tell you, for as long as 40 years, people trying to make a detection in the early days and then slowly but surely getting the technology together to do it.”

Another founder of LIGO, scientist Ronald Drever, died in March. Nobel Prizes are not awarded posthumously.

According to Einstein’s general theory of relativity, powerful cosmic events release energy in the form of waves traveling through the fabric of existence at the speed of light. LIGO detects these disturbances when they disrupt the symmetry between the passages of identical laser beams traveling identical distances.

The setup for the LIGO experiment looks like a giant L, with each side stretching about 2.5 miles long. Scientists split a laser beam and shine the two halves down the two sides of the L. When each half of the beam reaches the end, it reflects off a mirror and heads back to the place where its journey began.

Normally, the two halves of the beam return at the same time. When there’s a mismatch, scientists know something is going on. Gravitational waves compress space-time in one direction and stretch it in another, giving one half of the beam a shortcut and sending the other on a longer trip. LIGO is sensitive enough to notice a difference between the arms as small as one-thousandth the diameter of an atomic nucleus.
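
To put that sensitivity into a single number, here is a back-of-the-envelope sketch that turns the figures above into a dimensionless strain ΔL/L; the nuclear diameter used is an assumed round value of 10⁻¹⁵ meters, so the result is an order-of-magnitude estimate rather than LIGO's official specification.

```python
# Back-of-the-envelope sketch (assumed round numbers, not LIGO's official spec):
# convert the figures in the article into a dimensionless strain h = dL / L.

MILE_M = 1609.34
arm_length_m = 2.5 * MILE_M             # each arm, roughly 4 km
nucleus_diameter_m = 1e-15              # typical atomic nucleus, ~1 femtometer (assumed)
delta_L_m = nucleus_diameter_m / 1000   # "one-thousandth the diameter of a nucleus"

strain = delta_L_m / arm_length_m
print(f"arm length   ~ {arm_length_m:.0f} m")
print(f"displacement ~ {delta_L_m:.1e} m")
print(f"strain h     ~ {strain:.1e}")   # roughly 2e-22
```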

Scientists on LIGO and their partner collaboration, called Virgo, reported the first detection of gravitational waves in February 2016. The waves were generated in the collision of two black holes with 29 and 36 times the mass of the sun 1.3 billion years ago. They reached the LIGO experiment as scientists were conducting an engineering test.

“It took us a long time, something like two months, to convince ourselves that we had seen something from outside that was truly a gravitational wave,” Weiss said.

LIGO, which stands for Laser Interferometer Gravitational-Wave Observatory, consists of two of these pieces of equipment, one located in Louisiana and another in Washington state.

The experiment is operated jointly by Weiss’s home institution, MIT, and Barish and Thorne’s home institution, Caltech. The experiment has collaborators from more than 80 institutions from more than 20 countries. A third interferometer, operated by the Virgo collaboration, recently joined LIGO to make the first joint observation of gravitational waves.

by Kathryn Jepsen at October 03, 2017 10:42 AM

September 28, 2017

Symmetrybreaking - Fermilab/SLAC

Conjuring ghost trains for safety

A Fermilab technical specialist recently invented a device that could help alert oncoming trains to large vehicles stuck on the tracks.

Photo of a train traveling along the tracks

Browsing YouTube late at night, Fermilab Technical Specialist Derek Plant stumbled on a series of videos that all begin the same way: a large vehicle—a bus, semi or other low-clearance vehicle—is stuck on a railroad crossing. In the end, the train crashes into the stuck vehicle, destroying it and sometimes even derailing the train. According to the Federal Railroad Administration, every year hundreds of stuck vehicles are struck by trains, which can take over a mile to stop.

“I was just surprised at the number of these that I found,” Plant says. “For every accident that’s videotaped, there are probably many more.”

Inspired by a workplace safety class that preached a principle of minimizing the impact of accidents, Plant set about looking for solutions to the problem of trains hitting stuck vehicles.

Railroad tracks are elevated for proper drainage, and the humped profile of many crossings can cause a vehicle to bottom out. “Theoretically, we could lower all the crossings so that they’re no longer a hump. But there are 200,000 crossings in the United States,” Plant says. “Railroads and local governments are trying hard to minimize the number of these crossings by creating overpasses, or elevating roadways. That’s cost-prohibitive, and it’s not going to happen soon.”

Other solutions, such as re-engineering the suspension on vehicles likely to get stuck, seemed equally improbable.

After studying how railroad signaling systems work, Plant came up with an idea: to fake the presence of a train. His invention was developed in his spare time using techniques and principles he learned over his almost two decades at Fermilab. It is currently in the patent application process and being prosecuted by Fermilab’s Office of Technology Transfer.

“If you cross over a railroad track and you look down the tracks, you’ll see red or yellow or green lights,” he says. “Trains have traffic signals too.”

These signals are tied to signal blocks—segments of the tracks that range from a mile to several miles in length. When a train is on the tracks, its metal wheels and axle connect both rails, forming an electric circuit through the tracks to trigger the signals. These signals inform other trains not to proceed while one train occupies a block, avoiding pileups.

Plant thought, “What if other vehicles could trigger the same signal in an emergency?” By faking the presence of a train, a vehicle stuck on the tracks could give advance warning for oncoming trains to stop and stall for time. Hence the name of Plant’s invention: the Ghost Train Generator.

To replicate the train’s presence, Plant knew he had to create a very strong electric current between the rails. The most straightforward way to do this is with massive amounts of metal, as a train does. But for the Ghost Train Generator to be useful in a pinch, it needs to be small, portable and easily applied. The answer to achieving these features lies in strong magnets and special wire.

“Put one magnet on one rail and one magnet on the other and the device itself mimics—electrically—what a train would look like to the signaling system,” he says. “In theory, this could be carried in vehicles that are at high risk for getting stuck on a crossing: semis, tour buses and first-response vehicles,” Plant says. “Keep it just like you would a fire extinguisher—just behind the seat or in an emergency compartment.”

Once the device is deployed, the train would receive the signal that the tracks were obstructed and stop. Then the driver of the stuck vehicle could call for emergency help using the hotline posted on all crossings.

Plant compares the invention to a seatbelt.

“Is it going to save your life 100 percent of the time? Nope, but smart people wear them,” he says. “It’s designed to prevent a collision when a train is more than two minutes from the crossing.”

And like a seatbelt, part of what makes Plant’s invention so appealing is its simplicity.

“The first thing I thought was that this is a clever invention,” says Aaron Sauers from Fermilab’s technology transfer office, who works with lab staff to develop new technologies for market. “It’s an elegant solution to an existing problem. I thought, ‘This technology could have legs.’”

The organizers of the National Innovation Summit seem to agree.  In May, Fermilab received an Innovation Award from TechConnect for the Ghost Train Generator. The invention will also be featured as a showcase technology in the upcoming Defense Innovation Summit in October.

The Ghost Train Generator is currently in the pipeline to receive a patent with help from Fermilab, and its prospects are promising, according to Sauers. The application is a nonprovisional one, which has specific claims and can be licensed. After that, if the generator passes muster and is granted a patent, Plant will receive a portion of the royalties that it generates for Fermilab.

Fermilab encourages a culture of scientific innovation and exploration beyond the field of particle physics, according to Sauers, who noted that Plant’s invention is just one of a number of technology transfer initiatives at the lab.

Plant agrees—Fermilab’s environment helped motivate his efforts to find a solution for railroad crossing accidents.

“It’s just a general problem-solving state of mind,” he says. “That’s the philosophy we have here at the lab.”

Editor's note: A version of this article was originally published by Fermilab.

by Daniel Garisto at September 28, 2017 05:33 PM

Symmetrybreaking - Fermilab/SLAC

Fermilab on display

The national laboratory opened usually inaccessible areas of its campus to thousands of visitors to celebrate 50 years of discovery.


Fermi National Accelerator Laboratory’s yearlong 50th anniversary celebration culminated on Saturday with an Open House that drew thousands of visitors despite the unseasonable heat.

On display were areas of the lab not normally open to guests, including neutrino and muon experiments, a portion of the accelerator complex, lab spaces and magnet and accelerator fabrication and testing areas, to name a few. There were also live links to labs around the world, including CERN, a mountaintop observatory in Chile, and the mile-deep Sanford Underground Research Facility that will house the international neutrino experiment, DUNE.

But it wasn’t all physics. In addition to hands-on demos and a STEM fair, visitors could also learn about Fermilab’s art and history, walk the prairie trails or hang out with the ever-popular bison. In all, some 10,000 visitors got to go behind the scenes at Fermilab, shuttled around on 80 buses and welcomed by 900 Fermilab workers eager to explain their roles at the lab. Below, see a few of the photos captured as Fermilab celebrated 50 years of discovery.

by Lauren Biron at September 28, 2017 03:47 PM

September 27, 2017

Matt Strassler - Of Particular Significance

LIGO and VIRGO Announce a Joint Observation of a Black Hole Merger

Welcome, VIRGO!  Another merger of two big black holes has been detected, this time by both LIGO’s two detectors and by VIRGO as well.

Aside from the fact that this means that the VIRGO instrument actually works, which is great news, why is this a big deal?  By adding a third gravitational wave detector, built by the VIRGO collaboration, to LIGO’s Washington and Louisiana detectors, the scientists involved in the search for gravitational waves now can determine fairly accurately the direction from which a detected gravitational wave signal is coming.  And this allows them to do something new: to tell their astronomer colleagues roughly where to look in the sky, using ordinary telescopes, for some form of electromagnetic waves (perhaps visible light, gamma rays, or radio waves) that might have been produced by whatever created the gravitational waves.

The point is that with three detectors, one can triangulate.  The gravitational waves travel for billions of years at the speed of light, and when they pass by, they are detected at both LIGO detectors and at VIRGO.  But because it takes light a few hundredths of a second to travel the diameter of the Earth, the waves arrive at slightly different times at the LIGO Washington site, the LIGO Louisiana site, and the VIRGO site in Italy.  The precise timing tells the scientists what direction the waves were traveling in, and therefore roughly where they came from.  In a similar way, using the fact that sound travels at a known speed, the times that a gunshot is heard at multiple locations can be used by police to determine where the shot was fired.
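
As a toy illustration of the timing argument, the sketch below computes the arrival-time offset between two detectors for a plane wave coming from a given sky direction, Δt = (d·n̂)/c. The baseline and source direction are made-up round numbers, not the real detector geometry or the actual LIGO/Virgo analysis.

```python
# Toy illustration of the timing idea (illustrative numbers, not the real
# LIGO/Virgo analysis): a plane wave from sky direction n_hat reaches two
# detectors separated by baseline vector d with a time offset dt = (d . n_hat) / c.

import numpy as np

C = 299_792_458.0  # speed of light, m/s

def arrival_delay_s(baseline_m, n_hat):
    """Arrival-time offset (seconds) between two detectors for a wave from unit direction n_hat."""
    return np.dot(baseline_m, n_hat) / C

# Rough ~3000 km baseline (order of the Hanford-Livingston separation); geometry simplified.
baseline = np.array([3.0e6, 0.0, 0.0])
source_dir = np.array([np.cos(np.radians(30)), np.sin(np.radians(30)), 0.0])

print(f"delay ~ {arrival_delay_s(baseline, source_dir) * 1e3:.1f} ms")  # of order 10 ms
```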

You can see the impact in the picture below, which is an image of the sky drawn as a sphere, as if seen from outside the sky looking in.  In previous detections of black hole mergers by LIGO’s two detectors, the scientists could only determine a large swath of sky where the observed merger might have occurred; those are the four colored regions that stretch far across the sky.  But notice the green splotch at lower left.  That’s the region of sky where the black hole merger announced today occurred.  The fact that this region is many times smaller than the other four reflects what including VIRGO makes possible.  It’s a small enough region that one can search using an appropriate telescope for something that is making visible light, or gamma rays, or radio waves.

Skymap of the LIGO/Virgo black hole mergers.

Image credit: LIGO/Virgo/Caltech/MIT/Leo Singer (Milky Way image: Axel Mellinger)

 

While a black hole merger isn’t expected to be observable by ordinary telescopes, and indeed nothing was observed by them this time, other events that LIGO might detect, such as a merger of two neutron stars, may create an observable effect. We can hope for such exciting news over the next year or two.


Filed under: Astronomy, Gravitational Waves Tagged: black holes, Gravitational Waves, LIGO

by Matt Strassler at September 27, 2017 05:50 PM

September 26, 2017

Symmetrybreaking - Fermilab/SLAC

Shining with possibility

As Jordan-based SESAME nears its first experiments, members are connecting in new ways.


Early in the morning, physicist Roy Beck Barkai boards a bus in Tel Aviv bound for Jordan. By 10:30 a.m., he is on site at SESAME, a new scientific facility where scientists plan to use light to study everything from biology to archaeology. He is back home by 7 p.m., in time to have dinner with his children.

Before SESAME opened, the closest facility like it was in Italy. Beck Barkai often traveled for two days by airplane, train and taxi for a day or two of work—an inefficient and expensive process that limited his ability to work with specialized equipment from his home lab and required him to spend days away from his family.  

“For me, having the ability to kiss them goodbye in the morning and just before they went to sleep at night is a miracle,” Beck Barkai says. “It felt like a dream come true. Having SESAME at our doorstep is a big plus."

SESAME, also known as the International Centre for Synchrotron-Light for Experimental Science and Applications in the Middle East, opened its doors in May and is expected to host its first beams of synchrotron light this year. Scientists from around the world will be able to apply for time to use the facility’s powerful light source for their experiments. It’s the first synchrotron in the region. 

Beck Barkai says SESAME provides a welcome dose of convenience, as scientists in the region can now drive to a research center instead of flying with sensitive equipment to another country. It’s also more cost-effective.

Located in Jordan to the northwest of the city of Amman, SESAME was built by a collaboration made up of Cyprus, Egypt, Iran, Israel, Jordan, Pakistan, Turkey and the Palestinian Authority—a partnership members hope will improve relations among the eight neighbors.

“SESAME is a very important step in the region,” says SESAME Scientific Advisory Committee Chair Zehra Sayers. “The language of science is objective. It’s based on curiosity. It doesn’t need to be affected by the differences in cultural and social backgrounds. I hope it is something that we will leave the next generations as a positive step toward stability.”

Artwork by Ana Kova

Protein researcher and University of Jordan professor Areej Abuhammad says she hopes SESAME will provide an environment that encourages collaboration.

“I think through having the chance to interact, the scientists from around this region will learn to trust and respect each other,” she says. “I don’t think that this will result in solving all the problems in the region from one day to the next, but it will be a big step forward.”

The $100 million center is a state-of-the-art research facility that should provide some relief to scientists seeking time at other, overbooked facilities. SESAME plans to eventually host 100 to 200 users at a time. 

SESAME’s first two beamlines will open later this year. About twice per year, SESAME will announce calls for research proposals, the next of which is expected for this fall. Sayers says proposals will be evaluated for originality, preparedness and scientific quality. 

Groups of researchers hoping to join the first round of experiments submitted more than 50 applications. Once the lab is at full operation, Sayers says, the selection committee expects to receive four to five times more than that.

Opening up a synchrotron in the Middle East means that more people will learn about these facilities and have a chance to use them. Because some scientists in the region are new to using synchrotrons or writing the style of applications SESAME requires, Sayers asked the selection committee to provide feedback with any rejections. 

Abuhammad is excited for the learning opportunity SESAME presents for her students—and for the possibility that experiences at SESAME will spark future careers in science. 

She plans to apply for beam time at SESAME to conduct protein crystallography, a field that involves peering inside proteins to learn about their function and aid in pharmaceutical drug discovery. 

Another scientist vying for a spot at SESAME is Iranian chemist Maedeh Darzi, who studies the materials of ancient manuscripts and how they degrade. Synchrotrons are of great value to archaeologists because they minimize the damage to irreplaceable artifacts. Instead of cutting them apart, scientists can take a less damaging approach by probing them with particles. 

Darzi sees SESAME as a chance to collaborate with scientists from the Middle East and to promote science, peace and friendship. For her and others, SESAME could be a place where particles put things back together.

by Signe Brewster at September 26, 2017 02:13 PM

September 24, 2017

September 21, 2017

Symmetrybreaking - Fermilab/SLAC

Concrete applications for accelerator science

A project called A2D2 will explore new applications for compact linear accelerators.

Tom Kroc, Matteo Quagliotto and Mike Geelhoed set up a sample beneath the A2D2 accelerator to test the electron beam.

Particle accelerators are the engines of particle physics research at Fermi National Accelerator Laboratory. They generate nearly light-speed, subatomic particles that scientists study to get to the bottom of what makes our universe tick. Fermilab experiments rely on a number of different accelerators, including a powerful, 500-foot-long linear accelerator that kick-starts the process of sending particle beams to various destinations.

But if you’re not doing physics research, what’s an accelerator good for?

It turns out, quite a lot: Electron beams generated by linear accelerators have all kinds of practical uses, such as making the wires used in cars melt-resistant or purifying water.

A project called Accelerator Application Development and Demonstration (A2D2) at Fermilab’s Illinois Accelerator Research Center will help Fermilab and its partners to explore new applications for compact linear accelerators, which are only a few feet long rather than a few hundred. These compact accelerators are of special interest because of their small size—they’re cheaper and more practical to build in an industrial setting than particle physics research accelerators—and they can be more powerful than ever.

“A2D2 has two aspects: One is to investigate new applications of how electron beams might be used to change, modify or process different materials,” says Fermilab’s Tom Kroc, an A2D2 physicist. “The second is to contribute a little more to the understanding of how these processes happen.”

To develop these aspects of accelerator applications, A2D2 will employ a compact linear accelerator that was once used in a hospital to treat tumors with electron beams. With a few upgrades to increase its power, the A2D2 accelerator will be ready to embark on a new venture: exploring and benchmarking other possible uses of electron beams, which will help specify the design of a new, industrial-grade, high-power machine under development by IARC and its partners.

It won’t be just Fermilab scientists using the A2D2 accelerator: As part of IARC, the accelerator will be available for use (typically through a formal CRADA or SPP agreement) by anyone who has a novel idea for electron beam applications. IARC’s purpose is to partner with industry to explore ways to translate basic research and tools, including accelerator research, into commercial applications.

“I already have a lot of people from industry asking me, ‘When can I use A2D2?’” says Charlie Cooper, general manager of IARC. “A2D2 will allow us to directly contribute to industrial applications—it’s something concrete that IARC now offers.”

Speaking of concrete, one of the first applications in mind for compact linear accelerators is creating durable pavement for roads that won’t crack in the cold or spread out in the heat. This could be achieved by replacing traditional asphalt with a material that could be strengthened using an accelerator. The extra strength would come from crosslinking, a process that creates bonds between layers of material, almost like applying glue between sheets of paper. A single sheet of paper tears easily, but when two or more layers are linked by glue, the paper becomes stronger.

“Using accelerators, you could have pavement that lasts longer, is tougher and has a bigger temperature range,” says Bob Kephart, director of IARC. Kephart holds two patents for the process of curing cement through crosslinking. “Basically, you’d put the road down like you do right now, and you’d pass an accelerator over it, and suddenly you’d turn it into really tough stuff—like the bed liner in the back of your pickup truck.”

This process has already caught the eye of the U.S. Army Corps of Engineers, which will be one of A2D2’s first partners. Another partner will be the Chicago Metropolitan Water Reclamation District, which will test the utility of compact accelerators for water purification. Many other potential customers are lining up to use the A2D2 technology platform.

“You can basically drive chemical reactions with electron beams—and in many cases those can be more efficient than conventional technology, so there are a variety of applications,” Kephart says. “Usually what you have to do is make a batch of something and heat it up in order for a reaction to occur. An electron beam can make a reaction happen by breaking a bond with a single electron.”

In other words, instead of having to cook a material for a long time to reach a specific heat that would induce a chemical reaction, you could zap it with an electron beam to get the same effect in a fraction of the time.

In addition to exploring the new electron-beam applications with the A2D2 accelerator, scientists and engineers at IARC are using cutting-edge accelerator technology to design and build a new kind of portable, compact accelerator, one that will take applications uncovered with A2D2 out of the lab and into the field. The A2D2 accelerator is already small compared to most accelerators, but the latest R&D allows IARC experts to shrink the size while increasing the power of their proposed accelerator even further.

“The new, compact accelerator that we’re developing will be high-power and high-energy for industry,” Cooper says. “This will enable some things that weren’t possible in the past. For something such as environmental cleanup, you could take the accelerator directly to the site.”

While the IARC team develops this portable accelerator, which should be able to fit on a standard trailer, the A2D2 accelerator will continue to be a place to experiment with how to use electron beams—and study what happens when you do.

“The point of this facility is more development than research, however there will be some research on irradiated samples,” says Fermilab’s Mike Geelhoed, one of the A2D2 project leads. “We’re all excited—at least I am. We and our partners have been anticipating this machine for some time now. We all want to see how well it can perform.”

Editor's note: This article was originally published by Fermilab.

by Leah Poffenberger at September 21, 2017 05:18 PM

September 19, 2017

Symmetrybreaking - Fermilab/SLAC

50 years of stories

To celebrate a half-century of discovery, Fermilab has been gathering tales of life at the lab.

People discussing Fermilab history

Science stories usually catch the eye when there’s big news: the discovery of gravitational waves, the appearance of a new particle. But behind the blockbusters are the thousands of smaller stories of science behind the scenes and daily life at a research institution. 

As the Department of Energy’s Fermi National Accelerator Laboratory celebrates its 50th anniversary year, employees past and present have shared memories of building a lab dedicated to particle physics.

Some shared personal memories: keeping an accelerator running during a massive snowstorm; being too impatient for the arrival of an important piece of detector equipment to stay put and wait for it to arrive; accidentally complaining about the lab to the lab’s director.

Others focused on milestones and accomplishments: the first daycare at a national lab, the Saturday Morning Physics Program built by Nobel laureate Leon Lederman, the birth of the web at Fermilab.

People shared memories of big names that built the lab: charismatic founding director Robert R. Wilson, fiery head of accelerator development Helen Edwards, talented lab artist Angela Gonzales.

And of course, employees told stories about Fermilab’s resident herd of bison.

There are many more stories to peruse. You can watch a playlist of the video anecdotes or find all of the stories (both written and video) collected on Fermilab’s 50th anniversary website.

by Lauren Biron at September 19, 2017 01:00 PM

September 15, 2017

Symmetrybreaking - Fermilab/SLAC

SENSEI searches for light dark matter

Technology proposed 30 years ago to search for dark matter is finally seeing the light.

Two scientists in hard hats stand next to a cart holding detector components.

In a project called SENSEI, scientists are using innovative sensors developed over three decades to look for the lightest dark matter particles anyone has ever tried to detect.

Dark matter—so named because it doesn’t absorb, reflect or emit light—constitutes 27 percent of the universe, but the jury is still out on what it’s made of. The primary theoretical suspect for the main component of dark matter is a particle scientists have descriptively named the weakly interacting massive particle, or WIMP.

But since none of these heavy particles, which are expected to have a mass 100 times that of a proton, have shown up in experiments, it might be time for researchers to think small.

“There is a growing interest in looking for different kinds of dark matter that are additives to the standard WIMP model,” says Fermi National Accelerator Laboratory scientist Javier Tiffenberg, a leader of the SENSEI collaboration. “Lightweight, or low-mass, dark matter is a very compelling possibility, and for the first time, the technology is there to explore these candidates.”

Sensing the unseen

In traditional dark matter experiments, scientists look for a transfer of energy that would occur if dark matter particles collided with an ordinary nucleus. But SENSEI is different; it looks for direct interactions of dark matter particles colliding with electrons.

“That is a big difference—you get a lot more energy transferred in this case because an electron is so light compared to a nucleus,” Tiffenberg says.

If dark matter had low mass—much smaller than the WIMP model suggests—then it would be many times lighter than an atomic nucleus. So if it were to collide with a nucleus, the resulting energy transfer would be far too small to tell us anything. It would be like throwing a ping-pong ball at a boulder: The heavy object wouldn’t go anywhere, and there would be no sign the two had come into contact.

An electron is nowhere near as heavy as an atomic nucleus. In fact, a single proton has about 1836 times more mass than an electron. So the collision of a low-mass dark matter particle with an electron has a much better chance of leaving a mark—it’s more bowling ball than boulder.
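
The ping-pong-ball picture can be made quantitative with the standard elastic-collision result: a projectile of mass m striking a target of mass M at rest can transfer at most a fraction 4mM/(m+M)² of its kinetic energy. The sketch below plugs in an assumed 1 MeV dark matter mass to compare an electron target with a silicon nucleus; the numbers are illustrative, not taken from SENSEI's analysis.

```python
# Kinematics sketch (standard textbook formula, illustrative masses): the
# maximum fraction of a projectile's kinetic energy handed to a target at
# rest in an elastic collision is 4*m*M / (m + M)**2.  A hypothetical light
# dark matter particle gives almost everything to an electron, almost
# nothing to a nucleus.

def max_energy_fraction(m_projectile, m_target):
    """Maximum fractional kinetic-energy transfer in an elastic collision."""
    return 4.0 * m_projectile * m_target / (m_projectile + m_target) ** 2

M_DM_MEV = 1.0               # hypothetical light dark matter mass, MeV/c^2 (assumed)
M_ELECTRON_MEV = 0.511       # electron mass
M_SI_NUCLEUS_MEV = 26_000.0  # silicon nucleus, roughly 28 * 938 MeV

print(f"vs electron: {max_energy_fraction(M_DM_MEV, M_ELECTRON_MEV):.2f}")    # ~0.9
print(f"vs nucleus : {max_energy_fraction(M_DM_MEV, M_SI_NUCLEUS_MEV):.1e}")  # ~1.5e-4
```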

Bowling balls aren't exactly light, though. An energy transfer between a low-mass dark matter particle and an electron would leave only a blip of energy, one either too small for most detectors to pick up or easily overshadowed by noise in the data.

“The bowling ball will move a very tiny amount,” says Fermilab scientist Juan Estrada, a SENSEI collaborator. “You need a very precise detector to see this interaction of lightweight particles with something that is much heavier.”

That’s where SENSEI’s sensitive sensors come in.

SENSEI will use skipper charge-coupled devices, also called skipper CCDs. CCDs have been used for other dark matter detection experiments, such as the Dark Matter in CCDs (or DAMIC) experiment operating at SNOLAB in Canada. These CCDs were a spinoff from sensors developed for use in the Dark Energy Camera in Chile and other dark energy search projects.

CCDs are typically made of silicon divided into pixels. When a dark matter particle passes through the CCD, it collides with the silicon’s electrons, knocking them free, leaving a net electric charge in each pixel the particle passes through. The electrons then flow through adjacent pixels and are ultimately read as a current in a device that measures the number of electrons freed from each CCD pixel. That measurement tells scientists about the mass and energy of the particle that got the chain reaction going. A massive particle, like a WIMP, would free a gusher of electrons, but a low-mass particle might free only one or two.

Typical CCDs can measure the charge left behind only once, which makes it difficult to decide if a tiny energy signal from one or two electrons is real or an error.

Skipper CCDs are a new generation of the technology that helps eliminate the “iffiness” of a measurement that has a one- or two-electron margin of error. “The big step forward for the skipper CCD is that we are able to measure this charge as many times as we want,” Tiffenberg says.

The charge left behind in the skipper CCD can be sampled multiple times and then averaged, a method that yields a more precise measurement of the charge deposited in each pixel than the measure-one-and-done technique. That’s the rule of statistics: With more data, you get closer to a property’s true value.

SENSEI scientists take advantage of the skipper CCD architecture, measuring the number of electrons in a single pixel a whopping 4000 times.
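
The payoff of repeated sampling follows from simple statistics: averaging N independent reads of the same pixel shrinks the readout noise by roughly √N, so 4000 reads cut it by a factor of about 63. Below is a toy simulation of that effect; the deposited charge and single-read noise are assumed values chosen only for illustration, not SENSEI's actual readout parameters.

```python
# Toy simulation (illustrative numbers, not SENSEI's actual readout chain):
# re-reading the same pixel charge N times with independent readout noise and
# averaging shrinks the effective noise by roughly sqrt(N).

import numpy as np

rng = np.random.default_rng(0)

true_charge_e = 2.0   # electrons actually deposited in the pixel (assumed)
read_noise_e = 3.0    # single-read noise, in electrons (assumed)
n_samples = 4000      # number of non-destructive reads, as quoted above

reads = true_charge_e + rng.normal(0.0, read_noise_e, size=n_samples)

print(f"single read          : {reads[0]:6.2f} e-  (noise ~ {read_noise_e:.2f} e-)")
print(f"average of {n_samples} reads: {reads.mean():6.2f} e-  "
      f"(noise ~ {read_noise_e / np.sqrt(n_samples):.2f} e-)")
```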

“This is a simple idea, but it took us 30 years to get it to work,” Estrada says.

From idea to reality to beyond

A small SENSEI prototype is currently running at Fermilab in a detector hall 385 feet below ground, and it has demonstrated that this detector design will work in the hunt for dark matter.

Skipper CCD technology and SENSEI were brought to life by Laboratory Directed Research and Development (LDRD) funds at Fermilab and Lawrence Berkeley National Laboratory (Berkeley Lab). LDRD programs are intended to provide funding for development of novel, cutting-edge ideas for scientific discovery.

The Fermilab LDRDs were awarded only recently—less than two years ago—but close collaboration between the two laboratories has already yielded SENSEI’s promising design, partially thanks to Berkeley Lab’s previous work in skipper CCD design.

Fermilab LDRD funds allow researchers to test the sensors and develop detectors based on the science, and the Berkeley Lab LDRD funds support the sensor design, which was originally proposed by Berkeley Lab scientist Steve Holland.

“It is the combination of the two LDRDs that really make SENSEI possible,” Estrada says.

Future SENSEI research will also receive a boost thanks to a recent grant from the Heising-Simons Foundation.

“SENSEI is very cool, but what’s really impressive is that the skipper CCD will allow the SENSEI science and a lot of other applications,” Estrada says. “Astronomical studies are limited by the sensitivity of their experimental measurements, and having sensors without noise is the equivalent of making your telescope bigger—more sensitive.”

SENSEI technology may also be critical in the hunt for a fourth type of neutrino, called the sterile neutrino, which seems to be even more shy than its three notoriously elusive neutrino family members.

A larger SENSEI detector equipped with more skipper CCDs will be deployed within the year. It’s possible it might not detect anything, sending researchers back to the drawing board in the hunt for dark matter. Or SENSEI might finally make contact with dark matter—and that would be SENSEI-tional.

Editor's note: This article is based on an article published by Fermilab.

by Leah Poffenberger at September 15, 2017 07:00 PM

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

Thinking about space and time in Switzerland

This week, I spent a very enjoyable few days in Bern, Switzerland, attending the conference ‘Thinking about Space and Time: 100 Years of Applying and Interpreting General Relativity’. Organised by Claus Beisbart, Tilman Sauer and Christian Wüthrich, the workshop took place at the Faculty of Philosophy at the University of Bern, and focused on the early reception of Einstein’s general theory of relativity and the difficult philosophical questions raised by the theory. The conference website can be found here and the conference programme is here .


The University of Bern, Switzerland

Of course, such studies also have a historical aspect, and I particularly enjoyed talks by noted scholars in the history and philosophy of 20th century science such as Chris Smeenk (‘Status of the Expanding Universe Models’), John Norton (‘The Error that Showed the Way; Einstein’s Path to the Field Equations’), Dennis Lehmkuhl (‘The Interpretation of Vacuum Solutions in Einstein’s Field Equations’), Daniel Kennefick (‘A History of Gravitational Wave Emission’) and Galina Weinstein (‘The Two-Body Problem in General Relativity as a Heuristic Guide to the Einstein-Rosen Bridge and the EPR Argument’). Other highlights were a review of the problem of dark energy (something I’m working on myself at the moment) by astrophysicist Ruth Durrer and back-to-back talks on the so-called black-hole information paradox from physicist Sabine Hossenfelder and philosopher Carina Prunkl. There were also plenty of talks on general relativity such as Claus Kiefer’s recall of the problems raised at the famous 1955 Bern conference (GR0),  and a really interesting talk on Noether’s theorems by Valeriya Chasova.


Walking to the conference through the old city early yesterday morning


Dr Valeriya Chasova giving a talk on Noether’s theorems

My own talk, ‘Historical and Philosophical Aspects of Einstein’s 1917 Model of the Universe’, took place on the first day; the slides are here. (It’s based on our recent review of the Einstein World, which has just appeared in EPJH.) As for the philosophy talks, I don’t share the disdain some physicists have for philosophers. It seems to me that philosophy has a big role to play in understanding what we think we have discovered about space and time, not least in articulating the big questions clearly. After all, Einstein himself had great interest in the works of philosophers, from Ernst Mach to Hans Reichenbach, and there is little question that modern philosophers such as Harvey Brown have made important contributions to relativity studies. Of course, some philosophers are harder to follow than others, but this is also true of mathematical talks on relativity!

The conference finished with a tour of the famous Einstein Haus in Bern. It’s strange walking around the apartment Einstein lived in with Mileva all those years ago; it has been preserved extremely well. The tour included a very nice talk by Professor Hans Ott, President of the Albert Einstein Society, on AE’s work at the patent office, his three great breakthroughs of 1905, and his rise from obscurity to stardom in the years 1905-1909.

Einstein’s old apartment in Bern, a historic site maintained by the Albert Einstein Society

All in all, my favourite sort of conference. A small number of speakers and participants, with plenty of time for Q&A after each talk. I also liked the way the talks took place in a lecture room in the University of Bern, a pleasant walk from the centre of town through the old part of the city (not some bland hotel miles from anywhere). This afternoon, I’m off to visit the University of Zurich and the ETH, and then it’s homeward bound.

Update

I had a very nice day being shown around ETH Zurich, where Einstein studied as a student.

 

Imagine taking a mountain lift from the centre of town to lectures!

by cormac at September 15, 2017 09:41 AM

September 12, 2017

Symmetrybreaking - Fermilab/SLAC

Clearing a path to the stars

Astronomers are at the forefront of the fight against light pollution, which can obscure our view of the cosmos.


More than a mile up in the San Gabriel Mountains in Los Angeles County sits the Mount Wilson Observatory, once one of the cornerstones of groundbreaking astronomy. 

Founded in 1904, it was twice home to the largest telescope on the planet, first with its 60-inch telescope in 1908, followed by its 100-inch telescope in 1917. In 1929, Edwin Hubble revolutionized our understanding of the universe when he discovered on Mt. Wilson that it was expanding.

But a problem was radiating from below. As the city of Los Angeles grew, so did the reach and brightness of its skyglow, otherwise known as light pollution. The city light overpowered the photons coming from faint, distant objects, making deep-sky cosmology all but impossible. In 1983, the Carnegies, who had owned the observatory since its inception, abandoned Mt. Wilson to build telescopes in Chile instead.

“They decided that if they were going to do greater, more detailed and groundbreaking science in astronomy, they would have to move to a dark place in the world,” says Tom Meneghini, the observatory’s executive director. “They took their money and ran.” 

(Meneghini harbors no hard feelings: “I would have made the same decision,” he says.)

Beyond being a problem for astronomers, light pollution is also known to harm and kill wildlife, waste energy and cause disease in humans around the globe. For their part, astronomers have worked to convince local governments to adopt better lighting ordinances, including requiring the installation of fixtures that prevent light from seeping into the sky. 

Artwork by Corinne Mucha

Many towns and cities are already reexamining their lighting systems as the industry standard shifts from sodium lights to light-emitting diodes, or LEDs, which last longer and use far less energy, providing both cost-saving and environmental benefits. But not all LEDs are created equal. Different bulbs emit different colors, which correspond to different color temperatures. The higher the color temperature, the bluer the light.

The creation of energy-efficient blue LEDs was so profound that its inventors were awarded the 2014 Nobel Prize in Physics. But that blue light turns out to be particularly detrimental to astronomers, for the same reason that the daytime sky is blue: Blue light scatters more than any other color. (Blue lights have also been found to be more harmful to human health than more warmly colored, amber LEDs. In 2016, the American Medical Association issued guidance to minimize blue-rich light, stating that it disrupts circadian rhythms and leads to sleep problems, impaired functioning and other issues.)

The effort to darken the skies has expanded to include a focus on LEDs, as well as an attempt to get ahead of the next industry trend. 

At a January workshop at the annual American Astronomical Society (AAS) meeting, astronomer John Barentine sought to share stories of towns and cities that had successfully battled light pollution. Barentine is a program manager for the International Dark-Sky Association (IDA), a nonprofit founded in 1988 to combat light pollution. He pointed to the city of Phoenix, Arizona. 

Arizona is a leader in reducing light pollution. The state is home to four of the 10 IDA-recognized “Dark Sky Communities” in the United States. “You can stand in the middle of downtown Flagstaff and see the Milky Way,” says James Lowenthal, an astronomy professor at Smith College.

But it’s not immune to light pollution. Arizona’s Grand Canyon National Park is designated by the IDA as an International Dark Sky Park, and yet, on a clear night, Barentine says, the horizon is stained by the glow of Las Vegas 170 miles away.

Artwork by Corinne Mucha

In 2015, Phoenix began testing the replacement of some of its 100,000 or so old streetlights with LEDs, which the city estimated would save $2.8 million a year in energy bills. But they were using high-temperature blue LEDs, which would have bathed the city in a harsh white light. 

Through grassroots work, the local IDA chapter delayed the installation for six months, giving the council time to brush up on light pollution and hear astronomers’ concerns. In the end, the city went beyond the IDA’s “best expectations,” Barentine says, opting for lights with a color temperature well below the IDA’s recommended maximum.

“All the way around, it was a success to have an outcome arguably influenced by this really small group of people, maybe 10 people in a city of 2 million,” he says. “People at the workshop found that inspiring.”

Just getting ordinances on the books does not necessarily solve the problem, though. Despite enacting ordinances similar to Phoenix’s, the city of Northampton, Massachusetts, does not have enough building inspectors to enforce them. “We have this great law, but developers just put their lights in the wrong way and nobody does anything about it,” Lowenthal says.

For many cities, a major part of the challenge of combating light pollution is simply convincing people that it is a problem. This is particularly tricky for kids who have never seen a clear night sky bursting with bright stars and streaked by the glow of the Milky Way, says Connie Walker, a scientist at the National Optical Astronomy Observatory who is also on the board of the IDA. “It’s hard to teach somebody who doesn’t know what they’ve lost,” Walker says.

Walker is focused on making light pollution an innate concern of the next generation, the way campaigns in the 1950s made littering unacceptable to a previous generation of kids. 

In addition to creating interactive light-pollution kits for children, the NOAO operates a citizen-science initiative called Globe at Night, which allows anyone to take measurements of brightness in their area and upload them to a database. To date, Globe at Night has collected more than 160,000 observations from 180 countries. 

It’s already produced success stories. In Norman, Oklahoma, for example, a group of high school students, with the assistance of amateur astronomers, used Globe at Night to map light pollution in their town. They took the data to the city council. Within two years, the town had passed stricter lighting ordinances. 

“Light pollution is foremost on our minds because our observatories are at risk,” Walker says. “We should really be concentrating on the next generation.”

by Laura Dattaro at September 12, 2017 01:00 PM

September 08, 2017

Sean Carroll - Preposterous Universe

Joe Polchinski’s Memories, and a Mark Wise Movie

Joe Polchinski, a universally-admired theoretical physicist at the Kavli Institute for Theoretical Physics in Santa Barbara, recently posted a 150-page writeup of his memories of doing research over the years.

Memories of a Theoretical Physicist
Joseph Polchinski

While I was dealing with a brain injury and finding it difficult to work, two friends (Derek Westen, a friend of the KITP, and Steve Shenker, with whom I was recently collaborating), suggested that a new direction might be good. Steve in particular regarded me as a good writer and suggested that I try that. I quickly took to Steve’s suggestion. Having only two bodies of knowledge, myself and physics, I decided to write an autobiography about my development as a theoretical physicist. This is not written for any particular audience, but just to give myself a goal. It will probably have too much physics for a nontechnical reader, and too little for a physicist, but perhaps there will be different things for each. Parts may be tedious. But it is somewhat unique, I think, a blow-by-blow history of where I started and where I got to. Probably the target audience is theoretical physicists, especially young ones, who may enjoy comparing my struggles with their own. Some disclaimers: This is based on my own memories, jogged by the arXiv and Inspire. There will surely be errors and omissions. And note the title: this is about my memories, which will be different for other people. Also, it would not be possible for me to mention all the authors whose work might intersect mine, so this should not be treated as a reference work.

As the piece explains, it’s a bittersweet project, as it was brought about by Joe struggling with a serious illness and finding it difficult to do physics. We all hope he fully recovers and gets back to leading the field in creative directions.

I had the pleasure of spending three years down the hall from Joe when I was a postdoc at the ITP (it didn’t have the “K” at that time). You’ll see my name pop up briefly in his article, sadly in the context of an amusing anecdote rather than an exciting piece of research, since I stupidly spent three years in Santa Barbara without collaborating with any of the brilliant minds on the faculty there. Not sure exactly what I was thinking.

Joe is of course a world-leading theoretical physicist, and his memories give you an idea why, while at the same time being very honest about setbacks and frustrations. His style has never been to jump on a topic while it was hot, but to think deeply about fundamental issues and look for connections others have missed. This approach led him to such breakthroughs as a new understanding of the renormalization group, the discovery of D-branes in string theory, and the possibility of firewalls in black holes. It’s not necessarily a method that would work for everyone, especially because it doesn’t necessarily lead to a lot of papers being written at a young age. (Others who somehow made this style work for them, and somehow survived, include Ken Wilson and Alan Guth.) But the purity and integrity of Joe’s approach to doing science is an example for all of us.

Somehow over the course of 150 pages Joe neglected to mention perhaps his greatest triumph, as a three-time guest blogger (one, two, three). Too modest, I imagine.

His memories make for truly compelling reading, at least for physicists — he’s an excellent stylist and pedagogue, but the intended audience is people who have already heard about the renormalization group. This kind of thoughtful but informal recollection is an invaluable resource, as you get to see not only the polished final product of a physics paper, but the twists and turns of how it came to be, especially the motivations underlying why the scientist chose to think about things one way rather than some other way.

(Idea: there is a wonderful online magazine called The Players’ Tribune, which gives athletes an opportunity to write articles expressing their views and experiences, e.g. the raw feelings after you are traded. It would be great to have something like that for scientists, or for academics more broadly, to write about the experiences [good and bad] of doing research. Young people in the field would find it invaluable, and non-scientists could learn a lot about how science really works.)

You also get to read about many of the interesting friends and colleagues of Joe’s over the years. A prominent one is my current Caltech colleague Mark Wise, a leading physicist in his own right (and someone I was smart enough to collaborate with — with age comes wisdom, or at least more wisdom than you used to have). Joe and Mark got to know each other as postdocs, and have remained friends ever since. When it came time for a scientific gathering to celebrate Joe’s 60th birthday, Mark contributed a home-made movie showing (in inimitable style) how much progress he had made over the years in the activities they had enjoyed together in their relative youth. And now, for the first time, that movie is available to the general public. It’s seven minutes long, but don’t make the mistake of skipping the blooper reel that accompanies the end credits. Many thanks to Kim Boddy, the former Caltech student who directed and produced this lost masterpiece.

When it came time for his own 60th, Mark being Mark he didn’t want the usual conference, and decided instead to gather physicist friends from over the years and take them to a local ice rink for a bout of curling. (Canadian heritage showing through.) Joe being Joe, this was an invitation he couldn’t resist, and we had a grand old time, free of any truly serious injuries.

We don’t often say it out loud, but one of the special privileges of being in this field is getting to know brilliant and wonderful people, and interacting with them over periods of many years. I owe Joe a lot — even if I wasn’t smart enough to collaborate with him when he was down the hall, I learned an enormous amount from his example, and often wonder how he would think about this or that issue in physics.


by Sean Carroll at September 08, 2017 06:18 PM

Symmetrybreaking - Fermilab/SLAC

Detectors in the dirt

A humidity and temperature monitor developed for CMS finds a new home in Lebanon.

A technician from the Optosmart company examines the field in the Bekaa valley in Lebanon.

People who tend crops in Lebanon and people who tend particle detectors on the border of France and Switzerland have a need in common: large-scale humidity and temperature monitoring. A scientist who noticed this connection is working with farmers to try to use a particle physics solution to solve an agricultural problem.

Farmers, especially those in dry regions like the Middle East, need to produce as much food as possible without using too much water. Scientists on experiments at the Large Hadron Collider want to track the health of their detectors—a sudden change in humidity or temperature can indicate a problem.

To monitor humidity and temperature in their detector, members of the CMS experiment at the LHC developed a fiber-optic system. Optical fibers are thin strands of glass that carry light. Etching a series of small mirrors into the core of a fiber creates a “Bragg grating,” a structure that reflects light of one particular wavelength, set by the spacing of the mirrors, and lets the rest pass through.

“Temperature will naturally have an impact on the distance between the mirrors because of the contraction and dilation of the material,” says Martin Gastal, a member of the CMS collaboration at the LHC. “By default, a Bragg grating sensor is a temperature sensor.”
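
In slightly more quantitative terms, the grating reflects light at the Bragg wavelength, twice the product of the fiber’s effective refractive index and the mirror spacing; heating shifts that wavelength through thermal expansion and the thermo-optic effect. The sketch below uses typical textbook values for silica fiber, assumed here for illustration rather than taken from the CMS system.

    # Minimal sketch of a fiber Bragg grating used as a temperature sensor.
    # Material constants are typical values for silica fiber (assumed, not from CMS).
    n_eff = 1.468        # effective refractive index of the fiber core
    pitch_nm = 528.0     # grating period (mirror spacing), chosen to land near 1550 nm

    ALPHA = 0.55e-6      # thermal expansion coefficient of silica, per deg C
    XI = 6.7e-6          # thermo-optic coefficient of silica, per deg C

    def bragg_wavelength_nm(n: float, period_nm: float) -> float:
        """Wavelength reflected by the grating: lambda_B = 2 * n_eff * period."""
        return 2.0 * n * period_nm

    def shift_per_degree_pm(lambda_nm: float) -> float:
        """Approximate Bragg-wavelength shift per degree C, in picometers."""
        return lambda_nm * (ALPHA + XI) * 1e3  # nm -> pm

    lam = bragg_wavelength_nm(n_eff, pitch_nm)
    print(f"Bragg wavelength: {lam:.0f} nm")                       # ~1550 nm
    print(f"Shift: ~{shift_per_degree_pm(lam):.0f} pm per deg C")  # ~11 pm per deg C

At roughly 11 picometers per degree, a readout that resolves wavelength shifts at the picometer level resolves temperature changes of about a tenth of a degree.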

Scientists at the University of Sannio and INFN Naples developed a material for the CMS experiment that could turn the temperature sensors into humidity monitors as well. The material expands when it comes into contact with water, and the expansion pulls the mirrors apart. The sensors were tested by a team from the Experimental Physics Department at CERN.
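
The coated grating works the same way, except that the swelling of the coating strains the fiber and stretches the mirror spacing directly. As a rough illustration, the sketch below adds a strain term to the same Bragg relation; the photo-elastic coefficient is a typical silica value and the strain-per-percent-humidity figure is an invented placeholder, since the article quotes no numbers.

    # Sketch of the humidity channel: swelling of the hygroscopic coating strains
    # the fiber, shifting the Bragg wavelength on top of any temperature shift.
    # Numbers are illustrative assumptions, not measured CMS values.
    P_E = 0.22              # photo-elastic coefficient of silica (typical value)
    LAMBDA_NM = 1550.0      # nominal Bragg wavelength
    STRAIN_PER_RH = 1.5e-6  # assumed coating-induced strain per percent relative humidity

    def humidity_shift_pm(delta_rh_percent: float) -> float:
        """Wavelength shift (pm) from a humidity change, temperature held fixed."""
        strain = STRAIN_PER_RH * delta_rh_percent
        return LAMBDA_NM * (1.0 - P_E) * strain * 1e3  # nm -> pm

    print(f"+20% relative humidity -> ~{humidity_shift_pm(20):.0f} pm shift")

In practice the temperature and humidity contributions have to be disentangled, for instance by pairing a coated grating with a bare reference grating nearby; that is a standard trick with such sensors, though not one the article spells out.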

In December 2015, Lebanon signed an International Cooperation Agreement with CERN, and the Lebanese University joined CMS. As Professor Haitham Zaraket, a theoretical physicist at the Lebanese University and member of the CMS experiment, recalls, they picked fiber optic monitoring from a list of CMS projects for one of their engineers to work on. Martin then approached them about the possibility of applying the technology elsewhere.

With Lebanon’s water resources under increasing pressure from a growing population and agricultural needs, irrigation control seemed like a natural application. “Agriculture consumes quite a high amount of water, of fresh water, and this is the target of this project,” says Ihab Jomaa, the Department Head of Irrigation and Agrometeorology at the Lebanese Agricultural Research Institute. “We are trying to raise what we call in agriculture lately ‘water productivity.’”

The first step after formally establishing the Fiber Optic Sensor Systems for Irrigation (FOSS4I) collaboration was to make sure that the sensors could work at all in Lebanon’s clay-heavy soil. The Lebanese University shipped 10 kilograms of soil from Lebanon to Naples, where collaborators at University of Sannio adjusted the sensor design to increase the measurement range.

During phase one, which lasted from March to June, 40 of the sensors were used to monitor a small field in Lebanon. Contrary to the laboratory findings, the sensors could not, in practice, cover the full range of soil moisture content the project needed. Based on this feedback, “we are working on a new concept which is not just a simple modification of the initial architecture,” Haitham says. The new design concept is to use fiber optics to monitor an absorbing material planted in the soil, rather than having a material wrapped around the fiber.

“We are reinventing the concept,” he says. “This should take some time and hopefully at the end of it we will be able to go for field tests again.” At the same time, they are incorporating parts of phase three, looking for soil parameters such as pesticides and other chemicals in the soil, as well as bacterial effects.

If the new concept is successfully validated, the collaboration will move on to testing more fields and more crops. Research and development always involves setbacks, but the FOSS4I collaboration has taken this one as an opportunity to pivot to a potentially even more powerful technology.

by Jameson O'Reilly at September 08, 2017 04:40 PM

