Particle Physics Planet


July 05, 2015

Emily Lakdawalla - The Planetary Society Blog

In Pictures: Russian Spacecraft Ends Streak of Station Supply Mishaps
Following back-to-back space station resupply failures, a Russian Progress vehicle pulled into port this morning at the International Space Station.

July 05, 2015 12:30 PM

Peter Coles - In the Dark

Why not doing research all the time can make you a better researcher

Yesterday I read a nice little article  in Nature about how doing something different from research every now and again can actually make you a better researcher. I agree with that completely, so thought I’d expand upon the theme with a few comments of my own. I think this is an issue of particular importance for early career researchers, as that is the stage at which good habits need to be established, so I will focus on PhD students.

The point is that a postgraduate research degree is very different from a programme of undergraduate study. For one thing, as a research student you are expected to work on your own a great deal of the time. That’s because nobody else will be doing precisely the same project so, although other students will help you out with some things, you’re not trying to solve the same problems as your peers as is the case with an undergraduate. Your supervisor will help you of course and make suggestions (of varying degrees of helpfulness), but a PhD is still a challenge that you have to meet on your own. I don’t think it is good supervisory practice to look over a research student’s shoulder all the time. It’s part of the purpose of a PhD that the student learns to go it alone. There is a balance of course, but my own supervisor was rather “hands off” and I regard that as the right way to supervise. I’ve always encouraged my own students to do things their own way rather than try to direct them too much.

The sense of isolation that can come from immersing yourself in research is tough in itself, but there’s also the scary fact that you do not usually know whether your problem has a solution, let alone whether you yourself can find it. There is no answer at the back of the book; if there were you would not be doing research. A good supervisor will suggest a project that he or she thinks is both interesting and feasible, but the expectation is that you will very quickly be in a position where you know more about that topic than your supervisor.

I think almost every research student goes through a phase in which they feel out of their depth. There are times when you get thoroughly stuck and you begin to think you will never crack it. Self-doubt, crisis of confidence, call it what you will, I think everyone who has done a postgraduate degree has experienced it. I certainly did. A year into my PhD I felt I was getting nowhere with the first problem I had been given to solve. All the other research students seemed much cleverer and more confident than me. Had I made a big mistake thinking I could do this? I started to panic and began to think about what kind of job I should go into if I abandoned the idea of pursuing a career in research.

So why didn’t I quit?

There were a number of factors, including the support and encouragement of my supervisor, staff and fellow students in the Astronomy Centre, and the fact that I loved living in Brighton, but above all it was because I knew that I would feel frustrated for the rest of my life if I didn’t see it through. I’m a bit obsessive about things like that. I can never leave a crossword unfinished either.

But while it can be good to be obsessive about your research, that doesn’t mean you should try to exclude other things, even other obsessions, from your life.

What happened in my case was that after some discussion with my supervisor I shelved that first troublesome problem and tried another, much easier one. I cracked that fairly quickly and it became my first proper publication. Moreover, thinking about that other problem revealed that there was a way to finesse the difficulty I had failed to overcome in the first project. I returned to the first project and this time saw it through to completion. With my supervisor’s help that became my second paper, published in 1987.

I know it’s wrong to draw inferences about other people from one’s own particular experiences, but I do feel that there are general lessons. One is that if you are going to complete a research degree you have to have a sense of determination that borders on obsession. I was talking to a well-known physicist at a meeting not long ago and he told me that when he interviews prospective physics students he asks them “Can you live without physics?”. If the answer is “yes” then he tells them not to do a PhD. It’s not just a take-it-or-leave-it kind of job being a scientist. You have to immerse yourself in it and be prepared to put long hours in. When things are going well you will be so excited that you will find it as hard to stop as it is when you’re struggling. I’d imagine it is just the same for other disciplines.

The other, equally important, lesson to be learned is that it is essential to do other things as well. Being “stuck” on a problem is part-and-parcel of mathematics or physics research, but sometimes battering your head against the same thing for days on end just makes it less and less likely you will crack it. The human brain is a wonderful thing, but it can get stuck in a rut. One way to avoid this happening is to have more than one thing to think about.

I’ve lost count of the number of times I’ve been stuck on the last clue in a crossword. What I always do in that situation is put it down and do something else for a bit. It could even be something as trivial as making a cup of tea, just as long as I don’t think about the clue at all while I’m doing it. Nearly always when I come back to it and look at it afresh I can solve it. I have a large stack of prize dictionaries to prove that this works!

It can be difficult to force yourself to pause in this way. I’m sure that I’m not the only physicist who has been unable to sleep for thinking about their research. I do think however that it is essential to learn how to effect your own mental reboot. In the context of my PhD research this involved simply turning to a different research problem, but I think the same purpose can be served in many other ways: taking a break, going for a walk, playing sport, listening to or playing music, reading poetry, doing a crossword, or even just taking time out to socialize with your friends. Time spent sitting at your desk isn’t guaranteed to be productive.

So, for what it’s worth here is my advice to new postgraduate students. Work hard. Enjoy the challenge. Listen to advice from your supervisor, but remember that the PhD is your opportunity to establish your own identity as a researcher. Above all, in the words of the Desiderata:

Beyond a wholesome discipline,
be gentle with yourself.

Never feel guilty about establishing a proper work-life balance. Having more than one dimension to your life will not only improve your well-being but also make you a better researcher.


by telescoper at July 05, 2015 10:14 AM

Emily Lakdawalla - The Planetary Society Blog

New Horizons enters safe mode 10 days before Pluto flyby
New Horizons decided to put on a little 4th of July drama for the mission's fans. It's currently in safe mode, and it will likely be a day or two before it recovers and returns to science, but it remains on course for the July 14 flyby. Here's the mission update in its entirety.

July 05, 2015 04:10 AM

July 04, 2015

Christian P. Robert - xi'an's og

Bayesian statistics from methods to models and applications

A Springer book published in conjunction with the great BAYSM 2014 conference in Wien last year has now appeared. Here is the table of contents:

  • Bayesian Survival Model Based on Moment Characterization by Arbel, Julyan et al.
  • A New Finite Approximation for the NGG Mixture Model: An Application to Density Estimation by Bianchini, Ilaria
  • Distributed Estimation of Mixture Model by Dedecius, Kamil et al.
  • Jeffreys’ Priors for Mixture Estimation by Grazian, Clara and X
  • A Subordinated Stochastic Process Model by Palacios, Ana Paula et al.
  • Bayesian Variable Selection for Generalized Linear Models Using the Power-Conditional-Expected-Posterior Prior by Perrakis, Konstantinos et al.
  • Application of Interweaving in DLMs to an Exchange and Specialization Experiment by Simpson, Matthew
  • On Bayesian Based Adaptive Confidence Sets for Linear Functionals by Szabó, Botond
  • Identifying the Infectious Period Distribution for Stochastic Epidemic Models Using the Posterior Predictive Check by Alharthi, Muteb et al.
  • A New Strategy for Testing Cosmology with Simulations by Killedar, Madhura et al.
  • Formal and Heuristic Model Averaging Methods for Predicting the US Unemployment Rate by Kolly, Jeremy
  • Bayesian Estimation of the Aortic Stiffness based on Non-invasive Computed Tomography Images by Lanzarone, Ettore et al.
  • Bayesian Filtering for Thermal Conductivity Estimation Given Temperature Observations by Martín-Fernández, Laura et al.
  • A Mixture Model for Filtering Firms’ Profit Rates by Scharfenaker, Ellis et al.

Enjoy!


Filed under: Books, Kids, pictures, Statistics, Travel, University life, Wines Tagged: Austria, BAYSM 2014, conference, proceedings, Springer-Verlag, Vienna, Wien, WU Wirtschaftsuniversität Wien, young Bayesians

by xi'an at July 04, 2015 10:15 PM

Tommaso Dorigo - Scientificblogging

AMVA4NewPhysics Logo
Apologizing for a hiatus due to vacations, I am posting today a tentative logo of the Marie-Curie network I am coordinating, AMVA4NewPhysics. A brief explanation of the symbols behind the logo is given below, so that you can propose changes or even help by offering different ideas (and if you're a graphic designer, maybe you could even consider producing a better one for us?).

read more

by Tommaso Dorigo at July 04, 2015 08:41 PM

Lubos Motl - string vacua and pheno

What is string theory? Ask Ashoke and Nima
If you have 94 spare minutes, you should watch this insightful and amusing panel discussion on "What is string theory", a public event that took place on Monday, at the end of the Strings 2015 annual conference in India.



Rajesh Gopakumar introduces the two main heroes, Milner Prize winners Ashoke Sen and Nima Arkani-Hamed.




They talk about all kinds of conceptual questions, what string theory and quantum gravity are, why quantum gravity is difficult, and so on. The monologues begin with Ashoke Sen's rather conventional 20-minute introduction to modern physics, quantum field theory, and string theory.

There is room for questions (the audience is mostly local undergrads) at 22:00. The first question is about connections between Higgs fields and gravitational fields. Nima explains that those two are further apart than the pop-science talk suggests. Most of our mass is from gluons, not the Higgs, anyway.




The second question is whether you can say anything quantitative about the weak force, buddies. You bet, pal! Nima traces the history of the weak force since the 19th century observations of the beta-decay. Comments about the galactic-size colliders. Even without those, we may do things to probe the high-energy regime, like rare processes (analogous to the old neutron decay, e.g. the future proton decay). Big experiments are analogous to building of temples – one needs stamina and character and isn't guaranteed to see God at the end, anyway.

What do higher energies improve about the experiments? We get to shorter distances – paradoxically, we need a very big machine for that. Doubled energy multiplies the rate of new particle production by 50 or so, a power law. Ashoke adds that with the identification of the right compactification, one may predict everything. A question about the range and strength of forces. Nima warns a young gentleman that many propositions seemingly reasonable at the level of words become unreasonable at the level of quantitative scrutiny. It's important to learn as much as possible to be able to disprove your own words as quickly as possible; that's what schools are good for. Very important comments – the broader lay community around physics misunderstands these conceptual matters. Ashoke answers the same question (discourages a strange tunneling connection between the strong force and gravity) more "technically".

At 43:00, a young chap asked about the status of dark matter and dark energy; the "exact" meaning of supersymmetry; and "contenders" to string theory. Nima says how dark matter affects gravitational fields but emits no light. We may be close to the direct detection – perhaps next 5 years. One TeV particle zipping through this room. Dark energy is just a constant energy density of the vacuum. SUSY explained via an analogy with the Lorentz symmetry that implies antimatter (within quantum mechanics). Supersymmetry is the last thing like that to see; an update of spacetime that is more quantum mechanical, with new anticommuting numbers. Those imply another doubling of the world. A very concise story – surely not the first time when Nima said it.

Ashoke gives an alternative view on the dark energy – shows that the cosmological constant is the height of the minimum of the potential energy graphs in the landscape. Why the dark energy is so tiny is a question much like why the Universe is so big. Ashoke's answer about the "contenders". He may be "biased" but he thinks that loop quantum gravity hasn't gotten anywhere – in comparison with string theory. It can't even explain Newton's force. Nima adds that the constraints demanded from good theories are harsh. To find even one theory that can predict the cross section of some graviton scattering at some energy and that is still compatible with the consistency requirements is tough. The passing proposals always turned out to have strings running inside. It's not a matter of sociology; if someone managed to find some actual non-stringy theory able to do the same thing, she would be extremely famous. Amen to all of that.

Another question at 57:00 was about Einstein's cosmological constant – yes, it's the same discussion (and Nima says that Einstein's goal was just the opposite, to make the Universe static, not accelerating) – and about the many worlds interpretation. Nima emphasizes that MWI is a different thing than the multiverse. Nima nicely and politely says that "concerning the many worlds interpretation, many of us think that there is not much substance in these debates." ;-) Nima moves on and suggests that it's plausible that the other regions of the multiverse could be unphysical, not simultaneously existing with our patch.

A guy asks why can't we just construct a model of dark matter by ourselves if we know all the particles. Ashoke corrects that – we only know light enough (and strongly enough interacting) particle species. And that's just not enough to identify the point in the landscape – and predict all the other particles. Nima adds why Planckian ultrashort distances become physically meaningless or very different – Planck energy colliders produce black holes that don't get smaller anymore. I missed that someone had asked the question. Ashoke adds comments.

Another participant asks what strings say about entanglement. Ashoke says that entanglement is generally everywhere in quantum mechanics. String theory allows entanglement to be linked to geometry in a relationship that is still emerging. Just as if he were me ;-), Nima attacks the widespread slogan that "entanglement is mysterious". Quantum mechanics has been around for almost 100 years. And it's really fundamentally different. Far from mysterious, entanglement is the engine that makes quantum mechanics special and produces all the new phenomena. Entanglement is only mysterious from the viewpoint of classical physics but not from the perspective of the right theory which is quantum mechanics. And physicists' job isn't to enslave our explanation to our bodies and minds as they evolved – to understand things classically. The laws of QM are beautiful, crystal clear, and leave nothing mysterious about entanglement at all. I encourage you to be skeptical about people who say that QM is mysterious etc. Nature just doesn't want to be described in this way and it's amazing that the physicists 90 years ago could have found the better way.

At 1:12:00, a charming girl made a long monologue but didn't use the microphone wisely so I didn't understand a word. But she entertained the panelists. She apparently said something like "it's not too important if the Higgs field gives masses etc., what important things does it do?" ;-) Does it solve the Israel-Palestine conflict? One may see that the interaction with the Bengalúru audience is spontaneous and entertaining – some of the responses are funny, indeed. Nima may be an applied string theorist but his defense of string theory – and why it maximally fulfills all the dreams of unification – is spontaneous and cool. The Higgs boson is strange – it is the first spinless elementary particle (such particles weren't known before and might have been assumed to be very heavy). The Hydrogen atom is also light and spinless but not elementary – the Higgs looks elementary. Ashoke says that the theorists knew the Higgs 50 years before the discovery. Half a century of alternatives have basically failed. Nima adds that decades ago, there seemed to be just-particle and not-just-particle camps in physics but string theorists have unified them. QFT and ST are much closer than both camps had realized. There are no camps anymore. String theory has unified everything. Due to the links to things already established, it's hard to imagine strings will go away.

Someone asks whether the dark energy and the vacuum energy are the same or if there are extra terms. Nima says it's an experimental question: other things may cause the acceleration, too. But we may still ask whether the vacuum energy is constant in time or not. It seems very constant, experimentally. Change at most by 10% since the early Universe. Next experiments will probe it up to 1%. If it were variable, the rate would probably be much higher. Another part of the question: the vacuum energy only makes sense when gravity is turned on because otherwise the energy may be shifted by an additive shift without changing physics. In the lab, we would hardly observe the effects of the C.C. – doubling each 10 billion years is too slow a process. Concerning the final question from that Gentleman, spin-3/2 particles are special because they can only exist if SUSY exists. And SUSY then determines all of its (gravitino's) interactions.

After another question, Ashoke says that "grand unified theory" may have different meanings. The unification of forces may be either the "straightforward one" (mathematics of GUT) or a more general one, but still internally consistent. Ashoke says that it would be inconsistent to combine a classical theory of gravity with the quantum theory of everything else.

Nima talks about the steady progress in physics since 19th century which was also making our very dreams different, deeper, and more mature. Quantum mechanics implied that our Laplacian dream (to predict everything deterministically) has "failed". But it's good that it did, Nature just works differently. Those changes aren't disappointments. The nature of questions keeps on evolving – and it always pushed us towards more striking ideas. Lorentz and other classical physicists could have been disappointed by QM but it's QM that allowed the wonderful new unification of waves and particles etc. The more we understand, the deeper simplicity and beauty of the Universe we uncover. The string landscape is analogous, Nima says: instead of viewing it as a failure, you must understand it as a sign of the amazing unified nature of string theory – its ability to view all possibilities as solutions to the same equations.

Later in the day, there were public lectures by Seiberg, Strominger, and Vafa. And one more by Nima. Check the Indian institute's YouTube channel. At the top, you see lots of the more popular talks by Witten, Maldacena, and others, with the technical talks beneath them.

Thanks to Giotis

by Luboš Motl (noreply@blogger.com) at July 04, 2015 03:22 PM

Clifford V. Johnson - Asymptotia

Lost Treasure
Still doing detailed layouts for the book. I've been working on a story for which I was sure that I'd done some rough layouts a long time ago that I really liked. But I could not find them at all, and resigned myself to having to do it again. There's always that moment of hesitation where one is poised between just diving in and re-doing something, or spending more time searching... Which is the better strategy to save time? This time, I thought I'd do one last look, and started to dig around in my computer, hoping that maybe I'd had the sense to scan the sketches at some point - I vaguely recall having made a policy decision to scan developmental sketches whenever I could, for ease of [...] Click to continue reading this post

by Clifford at July 04, 2015 01:31 PM

Peter Coles - In the Dark

Introduction of me, my ideology and this blog

telescoper:

A new blog about science education for students with special needs.

Why not give it a follow?

Originally posted on SENSE: Science Education as a Non-Sighted Experience:

If you look at the “About me” section of this set of articles, you will most probably find nothing. The reason for that is deep in the roots of the idea; it does not matter who I am. It does not matter what my name, age, gender is, that does not tell you a lot about me if you know I have long hair and green eyes. Similarly, it really doesn’t change much whether you know my nationality, profession, or the food, music I like. I would be too subjective and biased to introduce myself in words of descriptions, adjectives and attributes. Why don’t you tell me and retell others who I am, what I believe in, what I do, based on the things I say, think or in this case blog post.
I also wouldn’t want to write about me, simply because there are certainly more people thinking the…

View original 388 more words


by telescoper at July 04, 2015 12:36 PM

Quantum Diaries

Why Dark Matter Exists: Believing Without Seeing
The Milky Way rises over the Cerro Tololo Inter-American Observatory in northern Chile. The Dark Energy Survey operates from the largest telescope at the observatory, the 4-meter Victor M. Blanco Telescope (left). Photo courtesy of Andreas Papadopoulos

For decades physicists have been convinced that most of our universe is invisible, but how do we know that if we can’t see it? I want to explain the thought process that leads one to believe in a theory via indirect evidence. For those who want to see a nice summary of the evidence, check this out. So that this post isn’t 3000 words long, I will simply say that either our theories of gravity are wrong, or the vast majority of the matter in our universe is invisible. That most of the matter in the universe is invisible, or “dark”, is actually well supported. Dark matter as a theory fits the data much better than modifications to gravity (with a couple of possible exceptions like mimetic dark matter). This isn’t necessarily surprising; frankly it would be a bit arrogant to assume that only matter similar to us exists. Particle physicists have known for a long time that not all particles are affected by all the fundamental forces. For example, the neutrino is invisible as it doesn’t interact with the electromagnetic force (or strong force, for that matter). So the neutrino is actually a form of dark matter, though it is much too quick and light to make up most of what we see.

The standard cosmological model, the ΛCDM model, has had tremendous success explaining the evolution of our universe. This is what most people refer to when they think of dark matter: the CDM stands for “cold dark matter”, and it is this consistency, the ability to explain observations from almost every cosmological epoch, that is so compelling about dark matter. We see the effect of dark matter across the sky in the CMB, in the helium formed in primordial nucleosynthesis, in the very structure of the galaxies. We see dark matter a minute after the big bang, a million years, a billion years, and even today. Simply put, when you add in dark matter (and dark energy) almost the entirety of cosmological history makes sense. While there are some elements that seem to be lacking in the ΛCDM model (small scale structure formation, core vs cusp, etc), these are all relatively small details that seem to have solutions in either simulating normal matter more accurately, or small changes to the exact nature of dark matter.

Dark matter is essentially like a bank robber: the money is gone, but no-one saw the theft. Not knowing exactly who stole the money doesn’t mean that someone isn’t living it up in the Bahamas right now. The ΛCDM model doesn’t really care about the fine details of dark matter: things like its mass, exact interactions and formation are mostly irrelevant. To the astrophysicist, there are really only two required features: dark matter cannot have strong interactions with normal matter (electromagnetic or strong forces), and dark matter must be moving relatively slowly (or “cold”). Anything that has these properties is called a dark matter “candidate” as it could potentially be the main constituent of dark matter. Particle physicists try to come up with these candidates, and hopefully find ways to test them. Ruling out a candidate is not the same as ruling out the idea of dark matter itself; it is just removing one of a hundred suspects.

Being hard to find is a crucial property of dark matter. We know dark matter must be a slippery bastard, as it doesn’t interact via the electromagnetic or strong forces. In one sense, assuming we can discover dark matter in our lifetime is presumptuous: we are assuming that it has interactions beyond gravity. This is one of a cosmologist’s fondest hopes as without additional interactions we are screwed. This is because gravity is by far the weakest force. You can test this yourself – go to the fridge, and get a magnet. With a simple fridge magnet, weighing only a few grams, you can pick up a paperclip, overpowering the 6*10^24 kg of gravitational mass the earth possesses. Trying to get a single particle, weighing about the same as an atom, to show an appreciable effect only through gravity is ludicrous. That being said, the vast quantities of dark matter strewn throughout our universe have had a huge and very detectable gravitational impact. This gravitational impact has led to very successful and accurate predictions. As there are so many possibilities for dark matter, we try to focus on the theories that link into other unsolved problems in physics to kill two birds with one stone. While this would be great, and is well motivated, nature doesn’t have to take pity on us.

So what do we look for in indirect evidence? Essentially, you want an observation that is predicted by your theory, but is very hard to explain without it. If you see an elephant-shaped hole in your wall, and elephant-shaped footprints leading outside, and all your peanuts gone, you are pretty well justified in thinking that an elephant ate your peanuts. A great example of this is the acoustic oscillations in the CMB. These are huge sound waves, the echo of the big bang in the primordial plasma. The exact frequency of this is related to the amount of matter in the universe, and how this matter interacts. Dark matter makes very specific predictions about these frequencies, which have been confirmed by measurements of the CMB. This is a key observation that modified gravity theories tend to have trouble explaining.

The combination of the strong indirect evidence for dark matter, the relative simplicity of the theory and the lack of serious alternatives means that research into dark matter theories is the most logical path. That is not to say that alternatives should not be looked into, but to disregard the successes of dark matter is simply foolish. Any alternative must match the predictive power and observational success of dark matter, and preferably have a compelling reason for being ‘simpler’ or philosophically nicer than dark matter. While I spoke about dark matter, this is actually something that occurs all the time in science: natural selection, atomic theory and the quark model are all theories that have been in the same position at one time or another. A direct discovery of dark matter would be fantastic, but is not necessary to form a serious scientific consensus. Dark matter is certainly mysterious, but ultimately not a particularly strange idea.

Disclaimer: In writing this for a general audience, of course I have to make sacrifices. Technical details like the model dependent nature of cosmological observations are important, but really require an entire blog post to themselves to answer fully.

 

 

by Alex Millar at July 04, 2015 06:39 AM

July 03, 2015

Christian P. Robert - xi'an's og

generating from a failure rate function [X’ed]

While I now try to abstain from participating in the Cross Validated forum, as it proves too much of a time-consuming activity with little added value (in the sense that answers are much too often treated as disposable napkins by users who cannot be bothered to open a textbook and who usually do not exhibit any long-term impact of the provided answer, while clogging the forum with so many questions that the individual entries seem to get so little traffic, when compared say with the stackoverflow forum, to the point of making the analogy with disposable wipes more appropriate!), I came across a truly interesting question the other night. Truly interesting for me in that I had never considered the issue before.

The question essentially asks how to simulate from a distribution defined by its failure rate function, which is connected with the density f of the distribution by

\eta(t)=\frac{f(t)}{\int_t^\infty f(x)\,\text{d}x}=-\frac{\text{d}}{\text{d}t}\,\log \int_t^\infty f(x)\,\text{d}x

From a purely probabilistic perspective, defining the distribution through f or through η is equivalent, as shown by the relation

F(t)=1-\exp\left\{-\int_0^t\eta(x)\,\text{d}x\right\}

but, from a simulation point of view, it may provide a different entry. Indeed, all that is needed is the ability to solve (in X) the equation

\int_0^X\eta(x)\,\text{d}x=-\log(U)

when U is a Uniform (0,1) variable. This may help in that it does not require a derivation of f. Obviously, this also begs the question as to why a distribution would be defined by its failure rate function.
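As a concrete illustration of the last display, here is a minimal Python sketch (mine, not something from the Cross Validated thread) that draws from a distribution specified only through its failure rate η, by numerically integrating η and root-finding for X; the Weibull-type hazard is just an illustrative choice, for which the exact sampler X = (-log U)^(1/k) provides a check.

    import numpy as np
    from scipy.integrate import quad
    from scipy.optimize import brentq

    def eta(t, k=1.5):
        """Illustrative failure rate: Weibull hazard with shape k and unit scale."""
        return k * t ** (k - 1)

    def cumulative_hazard(x):
        """H(x) = integral of eta(t) dt from 0 to x, computed numerically."""
        return quad(eta, 0.0, x)[0]

    def sample_from_hazard(rng, upper=50.0):
        """One draw X solving H(X) = -log(U), with U ~ Uniform(0,1)."""
        target = -np.log(1.0 - rng.uniform())  # -log(U), written to avoid log(0)
        # H is nondecreasing, so a bracketing root-finder on [0, upper] suffices
        return brentq(lambda x: cumulative_hazard(x) - target, 0.0, upper)

    rng = np.random.default_rng(0)
    draws = np.array([sample_from_hazard(rng) for _ in range(2000)])
    print(draws.mean())  # close to Gamma(1 + 1/1.5), about 0.90, for this Weibull

The same scheme works for any η one can integrate numerically, at the price of one quadrature per root-finding step; when the cumulative hazard is available in closed form, as here, the inner quad call can of course be replaced by it.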


Filed under: Books, Kids, Statistics, University life Tagged: cross validated, failure rate, Monte Carlo Statistical Methods, probability theory, reliability, simulation, StackExchange, stackoverflow, survival analysis

by xi'an at July 03, 2015 10:15 PM

astrobites - astro-ph reader's digest

Mapping the Milky Way

Title: The Skeleton of the Milky Way

Authors: Catherine Zucker, Cara Battersby, and Alyssa Goodman

First Author’s Institution: Astronomy Department, University of Virginia

Status: Submitted to the Astrophysical Journal

How many spiral arms does the Milky Way have? You might be surprised to learn that astronomers are still not completely sure. Unlike other galaxies that we can see face-on, our location in the Milky Way makes it hard for us to determine the structure of our own home galaxy.

CO and the Milky Way

Figure 1: The velocity-integrated map of CO, which traces out the distribution of H2 in the Milky Way. The color indicates the density of the molecular gas. From Dame, Hartmann, and Thaddeus (2001). To see a bigger image, check out this link: https://www.cfa.harvard.edu/mmw/Fig2_Dame.pdf

A lot of what we do know about the Milky Way’s structure comes from radial velocity measurements of interstellar gas and much of the interstellar gas is contained in giant molecular clouds (GMCs). Using the line-of-sight velocities of the gas and the Milky Way’s rotation curve, we can determine the distances of the gas from the center of the Galaxy, mapping out the structure of the Milky Way. These molecular clouds consist mostly of molecular hydrogen, or H2. Because H2 lacks a permanent electric dipole moment, it is almost invisible at the low temperatures of the molecular clouds, making it very difficult to detect directly. Instead of measuring emission from molecular hydrogen to locate the GMCs, astronomers often use tracers for the molecular hydrogen, like CO. These tracers make up a small fraction of the gas in molecular clouds, but they have lower-frequency transitions that we can detect. By mapping out the emission of CO across the sky, we can also effectively map out the location of the giant molecular clouds. Figure 1 shows a velocity-integrated map of the Milky Way using CO as a tracer, from Dame, Hartmann, and Thaddeus (2001).
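To make the kinematic-distance idea above a little more concrete, here is a minimal Python sketch (not from the paper) that turns a line-of-sight velocity and a Galactic longitude into a galactocentric radius, under the simplifying assumption of a perfectly flat rotation curve; the values of R0 and V0 are standard round numbers rather than any particular survey's fit.

    import numpy as np

    # Kinematic distance sketch assuming a flat rotation curve V(R) = V0.
    # For gas on a circular orbit at Galactic longitude l, the line-of-sight
    # velocity relative to the Local Standard of Rest is
    #     v_lsr = V0 * sin(l) * (R0 / R - 1),
    # which can be inverted for the galactocentric radius R.
    R0 = 8.5    # kpc, Sun-to-Galactic-centre distance (illustrative round value)
    V0 = 220.0  # km/s, circular speed at the Sun (illustrative round value)

    def galactocentric_radius(v_lsr, longitude_deg):
        """Galactocentric radius in kpc for gas with v_lsr (km/s) at this longitude."""
        sin_l = np.sin(np.radians(longitude_deg))
        return R0 * V0 * sin_l / (V0 * sin_l + v_lsr)

    # Example: gas seen at l = 30 degrees with v_lsr = +60 km/s
    print(galactocentric_radius(60.0, 30.0))  # 5.5 kpc

Turning that radius into a distance along the line of sight is where the well-known near/far kinematic distance ambiguity enters for directions inside the solar circle, which is one reason the independent distance measures mentioned below are so valuable.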

However, when piecing together the three-dimensional structure of the galaxy this way, it can be difficult to distinguish overlapping features and finer details. Astronomers also use a combination of other methods to get a more complete and detailed picture of the Milky Way. These include measurements of parallax and proper motions of masers, maps of dust extinction, and millimeter-wavelength surveys of the Galactic plane. The authors of today’s paper outline yet another way to study the structure of the Galaxy on large scales by looking at long, thin infrared dark clouds (IRDCs) that they call “bones.” They hypothesize that these bones trace out the densest parts of the Milky Way’s spiral arms, thus forming the “skeleton” of our Galaxy and delineating its three-dimensional structure. In today’s paper, they explore ten bone candidates, six of which they believe mark out important spiral features.

Searching for bones

Infrared dark clouds are so named because they appear as dark extinction features that we can see silhouetted against molecular clouds in the infrared. Continuing on their hypothesis that bone-like, filamentary IRDCs can trace out the locations of the Milky Way’s spiral arms, the authors use mid-infrared imaging of the Galactic plane to search for bone candidates where the spiral arms are supposed to be. Earth is actually located about 25 parsecs above the Galactic plane, so we do have a little perspective on the plane of the Galaxy. The Galactic coordinate system, which has the Sun at its center, thus has a fundamental plane that happens to be just off of the Galactic plane. This allows the authors to search for bones at predictable offsets from 0 degrees in Galactic Latitude.

After locating fifteen potential candidates with visual inspection of the mid-infrared imaging, the authors then check that each of these candidates is likely to be associated with a spiral arm. This includes having similar line-of-sight velocities along the length of the structure (no abrupt shifts of more than 3 km/s per 10 parsecs along the bone candidate), and measured radial velocities close to the radial velocities predicted by the Milky Way’s rotation curve for spiral arms at those distances.

Figure 2: Figure 6 from the paper, which shows a detail of Filament 5, the best bone candidate. The background is a GLIMPSE-Spitzer 8 micron image, the dashed line across the figure indicates the location of the Galactic mid-plane, and is color-coded with velocities from Dame & Thaddeus (2011). The solid colored lines on either side indicate the +/- 20 pc from Galactic midplane at the distance of the Scutum-Centaurus model, indicating that the filament lies within 15 parsecs of the Galactic plane. Finally, the yellow-boxed inset shows Filament 5 in greater detail. The squares, triangles, and circles correspond to sources from various radio surveys.

Using radial velocity data from five radio surveys, they were able to find that ten of their fifteen candidates had radial velocities consistent with the Galactic arms — Scutum-Centaurus and Norma — that they were studying. From there, they constructed further criteria for these candidates to be “bones” (a toy encoding of the quantitative cuts is sketched just after the list). In total, they have six criteria for the bone candidates:

  1. mostly continuous mid-infrared extinction feature
  2. containing no abrupt shifts of velocity within the feature
  3. having a similar radial velocity to a Milky Way arm
  4. parallel to the plane of the Milky Way to within 30 degrees
  5. be within 20 parsecs of the Galactic mid-plane (assuming the Galaxy is flat)
  6. have an aspect ratio of ≥ 50:1 (be long and skinny).
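Purely as an illustration of how these cuts could be applied in practice, here is a toy Python encoding (mine, not the authors' code); criterion 1 is a visual judgement and is passed in as a flag, and the velocity tolerance for criterion 3 is an assumed number rather than one quoted in the paper.

    from dataclasses import dataclass

    @dataclass
    class BoneCandidate:
        continuous_extinction: bool   # criterion 1: judged by eye from the imaging
        max_velocity_shift: float     # criterion 2: km/s per 10 pc along the filament
        delta_v_from_arm: float       # criterion 3: km/s offset from the arm fit
        tilt_from_plane_deg: float    # criterion 4: degrees from the Galactic plane
        height_above_plane_pc: float  # criterion 5: pc from the Galactic mid-plane
        aspect_ratio: float           # criterion 6: length-to-width ratio

    def is_bone(c, arm_velocity_tolerance_kms=10.0):
        """Apply the six criteria; the arm-velocity tolerance is an assumption."""
        return (c.continuous_extinction
                and c.max_velocity_shift <= 3.0
                and abs(c.delta_v_from_arm) <= arm_velocity_tolerance_kms
                and abs(c.tilt_from_plane_deg) <= 30.0
                and abs(c.height_above_plane_pc) <= 20.0
                and c.aspect_ratio >= 50.0)

    # Numbers loosely modelled on Filament 5 as described in the text
    print(is_bone(BoneCandidate(True, 1.0, 2.0, 5.0, 15.0, 140.0)))  # True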

Six of their ten bone candidates meet all six of their bone criteria, to varying degrees. While some of them are low-grade bone candidates, the authors keep all six of them in their catalogue, because they believe that the features could all be bones in various stages of evolution. Filament 5, their strongest bone candidate, can be seen in detail in the inset of Figure 2. This filament is very close to parallel to the Galactic mid-plane, has strong velocity contiguity, almost exactly matches (within 1-2 km/s) the fit for the Scutum-Centaurus arm, and possesses an aspect ratio of 140:1.

The authors are optimistic that these results and future studies of other bones can eventually be used to help pin down models of the spiral arms with great accuracy (they estimate to about 1 parsec). With the help of a skeletal model of the Milky Way, we might finally be able to answer fundamental questions about its three-dimensional structure.

by Caroline Huang at July 03, 2015 08:03 PM

Clifford V. Johnson - Asymptotia

Ships and Knobs…
[Extract from some of my babble that night:] "..."science advisor" which is such a confusing and misunderstood term. Most people think of us (and use us) as fact-checkers, and while I DO do that, it is actually the least good use of a scientist in the service of story-telling. As fact-checkers, usually engaged late in the process of a film being made, we’re just tinkering at the edges of an already essentially completed project. It is as if the main ship that is the movie has been built, has the journey planned out, and the ship has maybe even sailed, and we’re called in to spend an hour or two discussing whether the cabin door handles should be brass or chrome finish..." [I went on to describe how to help make better ships, sent on more interesting journeys..] Photo from here. Original FB version of this post here. Click to continue reading this post

by Clifford at July 03, 2015 05:38 PM

CERN Bulletin

The Physics of Music and the Music of Physics | CERN at the Montreux Jazz Festival | 9 July
CERN will be back at the Montreux Jazz Festival for its third annual workshop: ‘The Physics of Music and The Music of Physics’ on 9 July at 3 p.m. in Petit Palais.

The Physics of Music and the Music of Physics
Petit Palais, Montreux Jazz Festival
Thursday 9 July 2015 - 3.00 p.m.
Free Entrance - for more information, visit the event site

Run 2 of the LHC began this spring, bringing with it hopes and promise of new physics and discovery. One of many key items on the LHC shopping list is the existence of new spatial dimensions, a potential means to harmonise gravity in our theoretical understanding of nature. Robert Kieffer, of the CERN Beam Instrumentation Group and Gaëtan Parseihian, of the Laboratoire de Mécanique et d'Acoustique, CNRS, Marseille, will animate the Physics of Music half of the workshop with a demonstration of the physics behind acoustics. Their programme includes a lesson on sound sculpture and the addition of spatial dimensions to music, followed by a discussion and demonstration of sound perception. Participants will then be treated to an ambisonic concert composed from various sounds made at CERN.

The Music of Physics half of the workshop will be animated by Juliana Cherston, of the Massachusetts Institute of Technology Media Lab, Domenico Vicinanza of Anglia Ruskin University and the GEANT Association, and Ewan Hill, of the University of Victoria, TRIUMF, and the ATLAS Experiment at CERN. They will present a new project that maps physical parameters of LHC proton collisions to sound parameters, thus creating music from the events. The grand finale will feature jazz pianist Al Blatter playing a duet with sonified live collisions from the LHC. In his attempt to bring harmony between humankind and nature, Al hopes to bring music to a higher dimension – a feat only possible at the Montreux Jazz Festival.

Steven Goldfarb

July 03, 2015 03:01 PM

Emily Lakdawalla - The Planetary Society Blog

Pluto's progression: Third-to-last Pluto day before encounter
Only two days remain until New Horizons' historic encounter with Pluto....two Pluto days, that is. Pluto and Charon rotate together once every 6.4 days, so as New Horizons has approached the pair over the last week, we've been treated to one stately progression of all of their longitudes.

July 03, 2015 01:49 PM

ATLAS Experiment

From ATLAS Around the World : Working with Silicon in Japan

I joined the ATLAS experiment in 2012 after graduating from the University of Tokyo; however, my previous experience was completely different from collider physics. During my Master’s course, I focused on the behaviour of a kind of silicon detector operated in Geiger mode. In my Doctoral course, I designed and developed a gaseous detector called a Time Projection Chamber used for neutron lifetime measurements. These studies were done with very few colleagues in Japan. At that time, the experiments at CERN looked like a “castle” to me.

The ATLAS SCT

Workers assembling the ATLAS SemiConductor Tracker (SCT) at CERN.

Right after I came to ATLAS, I was surprised that more than 3000 people had operated the well-established ATLAS detector system and analysed the data so quickly. At that time, we were in the last year of Run 1, and I began investigating the performance of the SemiConductor Tracker (SCT), which is one of the inner tracking detectors in ATLAS. Through this study, I realised that there were many new things for me to learn.

For the SCT, 44 institutes from 17 countries have contributed so far. The SCT consists of 4088 modules, each of which has two planes of silicon with 768 strips, so that we have about six million channels. The details have been described by our project leader, Dave Robinson. Since 2014, I have filled the role of SCT Data Quality coordinator, promptly checking the data to see whether or not the SCT has a problem. For this purpose, strong communication among the people responsible for various activities in the SCT is very important. In addition, a good understanding of the other inner detectors is needed in order to evaluate the performance of the SCT. With the help of many experts, we have prepared for stable data taking during Run 2.

Now I’m considering how to discover new physics. Almost all the analyses done by ATLAS and CMS assumed the decay of new particles at the collision point of the proton beams. Alternatively, I would like to target new particles with flight lengths longer than a millimetre and up to a few metres, which is favoured by the existence of relic dark matter in the Universe (for an example, see our results from Run 1). For this search, the high performance of the SCT will be essential. This is also my motivation for contributing to the operation of the SCT.

I hope we will report something new from ATLAS in the next few years!!


Hidetoshi Otono Hidetoshi Otono is an assistant professor at the Kyushu University in Japan, who joined the ATLAS experiment in 2012. He has contributed to operation of the SemiConductor Tracker (SCT) as a data quality coordinator and searched for long-lived particles by making full use of the SCT.

by Hidetoshi at July 03, 2015 01:19 PM

Christian P. Robert - xi'an's og

“UK outmoded universities must modernise”

[A rather stinky piece in The Guardian today, written by a consultant and self-styled Higher Education expert… No further comments needed!]

“The reasons cited for this laggardly response [to innovations] will be familiar to any observer of the university system: an inherently conservative and risk-averse culture in most institutions; sclerotic systems and processes designed for a different world, and a lack of capacity, skills and willingness to change among an ageing academic community. All these are reinforced by perceptions that most proposed innovations are over-hyped and that current ways of operating have plenty of life left in them yet.”


Filed under: Books, Kids, pictures, University life Tagged: marketing, privatisation, reform, The Guardian, United Kingdom

by xi'an at July 03, 2015 12:18 PM

Peter Coles - In the Dark

“Dutch universities start their Elsevier boycott plan”

telescoper:

Good for them!

Originally posted on Bibliographic Wilderness:

“We are entering a new era in publications”, said Koen Becking, chairman of the Executive Board of Tilburg University in October. On behalf of the Dutch universities, he and his colleague Gerard Meijer negotiate with scientific publishers about an open access policy. They managed to achieve agreements with some publishers, but not with the biggest one, Elsevier. Today, they start their plan to boycott Elsevier.

Dutch universities start their Elsevier boycott plan

View original


by telescoper at July 03, 2015 08:15 AM

CERN Bulletin

Communication: I like
To fulfill its mission to represent CERN personnel with the Management and the Member States, the Staff Council has set up a series of Commissions: employment conditions, pensions, legal matters, social protection, health and safety, InformAction, CAPA (individual cases) and, more recently, Media-Com. As its name suggests, the Media-Com Commission deals with all matters of communication. The mandate of the new Commission is to implement and optimize the communication channels that the Staff Association uses to keep you informed. To attract the greatest number of people, Media-Com operates through multiple communication channels, such as articles in the Echo, the Staff Association information bulletin, the Staff Association website (http://staff-association.web.cern.ch/), Facebook, and, more recently, the intra-CERN Social platform. The Social platform is a discussion forum for exchanging ideas, expressing views, reacting to, and commenting on current events of the Staff Association. To participate, you need to log on to https://social.cern.ch/ and then choose to follow news from the Staff Association. All these tools, implemented and maintained by the Staff Association, inform you about the directions chosen by and the challenges facing the Staff Association. Yet they also allow you to express yourself and inform us about your views in an easy way. Information is not an end in itself, but explaining, expressing oneself, debating…, communicating in general, are essential processes to guarantee a correct representation of the staff.

by Staff Association at July 03, 2015 07:00 AM

July 02, 2015

astrobites - astro-ph reader's digest

The Rocky Road to Gas Planet-hood

Title: A metallicity recipe for rocky planets
Authors: R. I. Dawson, E. Chiang, E. J. Lee
First Author’s Institution: University of California, Berkeley, Berkeley, CA
Status: Submitted to MNRAS

 

 

The space between the stars is actually not so empty.  It’s also quite dusty. It’s so dusty, in fact, that if you brought a large parcel of space to the densities you find on Earth, you would hardly be able to see your hand if you stretched your arm out in front of you.  This interstellar dust—typically only a micrometer across, a fraction of the thickness of a human hair—is the stuff that planets are made out of.  The formation of planets is really an incredible saga of how the lowly dust grain grows to planet proportions: a transformation that takes millions of years, a growth in size by a factor of a million million, and the conglomeration of about a hundred million million million million million million grains (that is, ~10^38 grains).

It all begins with the birth of a star, when gigantic cold clouds of gas and dust are flattened into a gaseous dusty disk encircling a growing protostar by the concerted powers of  angular momentum conservation and gravity. The grains orbit the protostar within this disk. When the orbits of two grains intersect, they can collide and possibly stick together. This process continues over and over again to produce large grains, which in turn collide with each other to produce small rocks, which collide to form “planetesimals” the size of asteroids, and so on, scurrying with bolts and jolts up the size scale until they eventually form rocky bodies the sizes of Mercury, Mars, the Earth, thousands of kilometers across.  These rocky bodies can become terrestrial planets or the seeds of the cores of gas and ice giants such as those found within our own Solar System and beyond.

The authors of today’s paper attempt to explain why some of these planetary embryos remain largely unchanged, barren, and purely rocky while others end up cloaked with a gas envelope.  They focus on planets discovered by the Kepler Telescope that are a few to several times more massive than the Earth, which they divide into two types: those that are rocky, or super-Earths, and those that have a thick gaseous atmosphere, or mini-Neptunes.

How might the mini-Neptunes have acquired their gas?  The disks in which the transformation from grain to planet occurs are actually mostly gas—the dust makes up only about 1% of the mass of the disk—and thus a natural guess as to the source of planet atmospheres.  But there’s a hitch.  The gas in a disk is fated to banishment, first as the disk drains onto the growing, massive protostar at its center, then by evaporation as the newborn star heats the disk to temperatures so high (hundreds to thousands of Kelvins) that the gas can escape the gravity of the star and disk.  However, planetary embryos can’t hold onto an atmosphere unless they’re fairly massive, about an Earth mass or two.  Thus the road to mini-Neptune-hood (or to a gas/ice giant, for that matter) is a growth race against the shrinking supply of gas.

So what affects how quickly an embryo grows?  In order for growth-inducing collisions to occur between planetary embryos, their orbits must cross.  Close passages of rocky bodies with similar masses can stir up their velocities, pushing them into skewed, elliptical orbits that allow the embryos to traverse a broad range of radii (as opposed to circular orbits, which restrict the embryo to a single radius), increasing the chances of orbit crossing and thus growth.  Working against these velocity-stirring, growth-inducing, gravitational encounters is, once again, the gas.  Much like how the atmosphere reduces the speed of projectiles or a marble dropped into honey stops moving—a process known as dynamical friction—the gas attempts to erase the effects of stirring by bringing the rocky bodies to its own orbital speed, slowing down the rocky bodies sped up by close encounters and conversely, speeding up those that were slowed down.

But not all hope is lost.  The solution?  Wait until the gas disappears enough to stop erasing the velocity stir-ups that encourage growth.  The authors estimate that the gas begins to have a negligible effect on embryo velocities when a disk has similar surface densities of gas and dust.  One crucial consequence of this is that it shortens the time during which the embryos can acquire gaseous envelopes—the gas only sticks around for about another million years by the time it stops affecting embryo velocities.  It helps if the disk started with more dusty solids, which would produce more material for the embryos to grow from.  In fact, the authors estimate that planetary embryos in disks with larger solid surface densities grow exponentially faster—a result they confirm with more detailed N-body calculations.

Figure 1. The dependence of whether a planetary embryo becomes a purely rocky super-Earth vs. a gas-enveloped mini-Neptune on the solid surface density of the disk in which they form.  Super-Earths are in black and gas-enveloped mini-Neptunes are in orange. Disks with high enough solid surface densities only form mini-Neptunes as they can form protoplanets massive enough to hold onto gas envelopes before the gas in the disk disappears.  Disks with lower surface densities form a mix of super-Earths and mini-Neptunes.

To confirm that the embryos that grew massive enough to begin acquiring a gaseous envelope had enough time to become mini-Neptunes, the authors ran simple one-dimensional models of envelope growth and used those results to estimate the amount of gas each planet in their N-body simulations would have. They confirmed that disks with larger solid surface densities grew all their embryos fast enough to convert them into gas-enveloped mini-Neptunes before the gas completely disappeared, while disks with lower surface densities produced a mix of gaseous mini-Neptunes and purely rocky super-Earths (see Figure 1).

While these calculations are simplified, they highlight the importance of the availability of solids to the formation of planets.  The differences in the amount of solids a disk has may be able to explain the lack of purely rocky planets around high-metallicity stars (which they assume were once encircled by disks with more solids than those around low-metallicity stars) observed by Kepler. However, their model cannot explain why we’ve observed low-metallicity stars with mini-Neptunes. It’s possible that solids could accumulate in disks around low-metallicity stars, producing solid surface densities high enough for massive rocky embryos to form quickly and transform into mini-Neptunes before the gas disappears. The mini-Neptunes could also have formed further out in the disk, then migrated inwards to where we observe them today.  The full story of the origins of super-Earths and mini-Neptunes awaits unraveling with further work.

 

 

by Stacy Kim at July 02, 2015 08:06 PM

ZapperZ - Physics and Physicists

Don't Ask Siri To Divide 0/0
... unless you want a snarky remark about your personal life from her. You might get this response:

"Imagine that you have zero cookies and you split them evenly among zero friends. How many cookies does each person get? See? It doesn't make sense. And Cookie Monster is sad that there are no cookies, and you are sad that you have no friends."

Yowzah!

So, have you tried any other math questions with Siri and got similar amusing responses? Do share!

Zz.

by ZapperZ (noreply@blogger.com) at July 02, 2015 07:19 PM

Emily Lakdawalla - The Planetary Society Blog

The Senate Appropriations Committee’s FY 2016 CJS Bill
Congress has made good progress so far this year in moving the annual appropriations bills that fund the government. However, a looming budget battle over sequestration and budget caps threatens to sideline progress until Congress and the White House reach agreement. Here’s the current situation.

July 02, 2015 04:34 PM

CERN Bulletin

LHC Report: Out of the clouds

In order for the LHC to deliver intense proton beams to the experiments, operators have to perform “scrubbing” of the beam pipes. This operation is necessary to reduce the formation of electron clouds, which would generate instabilities in the colliding beams.

 

Electron clouds are generated in accelerators running with positively charged particles when electrons - produced by the ionisation of residual molecules in the vacuum or by the photoelectric effect from synchrotron radiation - are accelerated by the beam field and hit the surface of the vacuum chamber, producing other electrons. This avalanche-like process can result in the formation of clouds of electrons. Electron clouds are detrimental to the beam for a few reasons. First, the electrons impacting the walls desorb molecules and degrade the ultra-high vacuum in the beam chamber. Furthermore, they interact electromagnetically with the beam, leading to the oscillation and expansion of the particle bunches. This increases the probability of quenches in the superconducting magnets and entails a luminosity reduction. Fortunately, when the electron bombardment of the walls is sufficiently intense, it gradually reduces the probability of creating secondary electrons at the chamber surface and thus inhibits the avalanche. Running the machine under an intense electron cloud regime in order to exploit this effect is what is meant by machine scrubbing.
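The avalanche can be pictured with a simple multiplication argument: if each electron striking the wall produces on average delta secondary electrons (the secondary electron yield, SEY), the population grows geometrically when delta > 1 and dies out when delta < 1, which is what scrubbing achieves. A minimal sketch with invented SEY values:

# Toy picture of the electron-cloud avalanche: if each electron hitting the
# chamber wall releases 'sey' secondaries on average, the population after k
# generations of wall impacts is n0 * sey**k. Scrubbing conditions the surface
# so that the SEY drops below ~1 and the avalanche dies out.
# The SEY values below are illustrative and are not LHC measurements.

def cloud_after_impacts(n0, sey, k):
    """Electron population after k generations of wall impacts."""
    return n0 * sey**k

for sey in (1.6, 1.1, 0.9):   # assumed values: before, during and after scrubbing
    sizes = [cloud_after_impacts(1.0, sey, k) for k in (0, 5, 10, 20)]
    print(f"SEY = {sey}: " + ", ".join(f"{s:.3g}" for s in sizes))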

The scrubbing process in the LHC started gently on Wednesday, 24 June, when the first long trains of bunches spaced 50 nanoseconds (ns) apart were circulated in the LHC. The strong electron cloud initially created caused the first beams injected with this bunch spacing to become unstable and lose particles. Then, however, this electron cloud started “scrubbing” the beam pipes, thus reducing the number of electrons emitted.

This beneficial effect meant that more bunches could then be injected, while the beam stability and quality improved at the same time. Within about 36 hours of the beginning of scrubbing, it was possible to circulate stable trains of 600 bunches in the machine with very limited electron cloud generation and, consequently, without beam degradation.

At this point, the machine experts judged that the scrubbing process could only be efficiently continued by packing bunches closer together. It was therefore decided to fill the LHC with beams of many bunches with 25 ns spacing, which is also the target beam configuration for luminosity production in Run 2.

While this switch made the beam operation more challenging and all available weapons to stabilise the beams had to be used, the amount of electron clouds produced around the LHC quickly increased and the scrubbing progressed well. It took about four additional days to reach the point of filling the LHC with 1200 bunches in tightly packed trains of bunches separated by 25 ns. The beam stability, initially difficult to handle, also gradually improved as the electron cloud became less dense in all the machine sectors.

The next step consisted of performing a test run with 50 ns spaced beams filling the whole LHC, which was expected to be electron cloud free after the scrubbing period. This was successfully completed on Friday morning (3 July). The machine is now fully validated for 50 ns operation, and very good progress has been made towards 25 ns spacing. However, some work is still needed to prepare the LHC for operation with the full complement of 25 ns spaced bunches later this year. Another scrubbing run will be performed later in the summer.

July 02, 2015 04:07 PM

arXiv blog

Why Wikipedia + Open Access = Revolution

The way scientific information diffuses through the knowledge economy is changing, and the first evidence from Wikipedia shows how.

 

July 02, 2015 02:46 PM

CERN Bulletin

Presidents' Words
In the context of the sixtieth anniversary of the Staff Association, we asked former presidents to tell us about their years of Presidency. We start in this issue of Echo with contributions from Michel Vitasse and Jean-Pol Matheys.

Michel Vitasse

Having had the honour and pleasure of participating in the development of the Staff Association, as its president for seven years spread over three different periods in the 1980s, 1990s and 2000s, and of working with seven Directors-General, I was asked to write a few lines about this experience. First of all, it has been a wonderful human experience. What a privilege to have met, at all levels, colleagues of all nationalities and all trades, who devote their efforts, with dedication and passion, to an ideal of European scientific collaboration. Furthermore, I was able to share with others some principles of action, such as:

  • defending all categories of staff and maintaining their unity, by taking into account in our strategic and tactical choices the various cultural sensitivities of our members;
  • recognizing and understanding the constraints of our social partners (Management and Member States), explaining to them the point of view and expectations of staff, and then developing, in the concertation process, innovative proposals in all areas affecting employment and working conditions.

It was indeed at the initiative of the Staff Association that the recruitment programme financed by Saved Leave (SLS), the early retirement programme (PRP) and the Long Term Care dependency benefits (LTC) were created.

Today, this tradition of concertation is still part of the CERN culture. This success in being heard the Staff Association owes to its credibility, but also to its representativeness. Indeed, the staff have always given strong support to their representatives whenever they deemed it necessary to do so. I still remember the bitter conflict of November 1995, which opposed staff and our supervisory bodies. TREF, created shortly before, has until today allowed us to reduce the risk of such conflicts. Let us hope it will continue. Long live CERN and the Staff Association!

Jean-Pol Matheys

I was the president of the Staff Association for four years, from 1999 to 2002, and it is a pleasure for me to look back on this particular period in my professional career. It was a very difficult and emotionally charged period, yet humanly enriching, so mostly good memories come to mind. Promoting and proactively defending the interests of the staff, while respecting those of the Organization, is a challenge that can be taken on successfully only thanks to the visible and constant support of all staff. Constantly reaching out to staff, organizing meetings, explaining the why and the how, and, when the time comes, all together beating the drum, are essential steps. That a team within the Staff Association could converge on this approach to meeting the challenges, and on the means to attain them, would have been impossible without the work, sometimes tough, of my predecessors, and I want to thank them warmly. I would like to pay a special tribute to the late Michel Borghini, who can unfortunately no longer contribute to this column. Dialogue, including with those whose views and interests are quite different from ours, is both necessary and rewarding. With Management and the Member States, dialogue is essential because it feeds the concertation process.

Often we have to go and meet our stakeholders to encourage dialogue; sometimes we have to demand more firmly the concertation in good faith required by our legal texts. And if we do not succeed the first time, we try, and try again. That is how success is forged. Thus, for example, CERN was finally able to integrate some people who had worked on the site for many years but were employed by sub-contractors on rotational contracts. This success story of the Local Staff, to which I helped contribute, is one of my fondest memories. To continue building on what our predecessors have constructed, leaning on and strengthening the credibility and representativeness of the Staff Association, not hesitating to be tough, yet correct, when necessary: this is what I wish for the Staff Association over the next 60 years.

by Staff Association at July 02, 2015 01:31 PM

CERN Bulletin

Collection for Nepal
You are wonderful, thank you! On 25 April 2015, Nepal and neighbouring countries suffered a violent earthquake, which killed thousands. On 28 April, the CERN Staff Association and the CERN Management appealed to your generosity to help those affected, and opened an account to receive your donations. We are now pleased to announce that the amount raised is CHF 34'800, which has been donated to the NGO Live to Love of His Holiness the Gyalwang Drukpa. We thank everyone who contributed to this important cause.

by Staff Association at July 02, 2015 01:01 PM

Symmetrybreaking - Fermilab/SLAC

The wonderful thing about triggers

Physicist Jim Pivarski explains how particle detectors know when to record data.

Imagine you're a particle physicist in 1932. You have a cloud chamber that can show you the tracks of particles, and you have a camera to capture those tracks for later analysis. How do you set up an apparatus to take pictures whenever tracks appear?

At first, you might just try to be quick with your finger, but since the tracks disappear in a quarter of a second, you'd end up with a lot of near misses. You might give up and snap pictures randomly, since you'll be lucky some fraction of the time. Naturally, this wasteful process doesn't work if the type of event you're looking for is rare. You could also leave the shutter open and expose the film to anything that appears over a long interval. All events would overlap in the same picture, making it harder to interpret.

Now suppose you have another piece of equipment: a Geiger counter. This device emits an electric signal every time a charged particle passes through it. Two physicists, Blackett and Occhialini, surrounded their cloud chamber with Geiger counters and used the electric signals to trigger the cloud chamber and take pictures. This kind of apparatus is crucial to detectors today.

Experiments such as CMS only record one in a million LHC collisions — the rest are lost to further analysis. Collisions that break up protons but do not create new particles are 10 billion times more common than collisions that produce Higgs bosons, so modern triggers must be extremely selective.

Blackett and Occhialini's original trigger system relied on two Geiger counters: one above and one below the cloud chamber. Each Geiger counter was noisy and therefore prone to taking bad pictures, but both counters were unlikely to accidentally trigger at the same time. The two electronic signals were passed through a circuit that registered only if both counters triggered.
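A tiny Monte Carlo makes the logic of the coincidence requirement concrete: each counter fires on real tracks but also at random, and demanding that both fire in the same short time window suppresses the accidental triggers. This is only an illustrative sketch; the rates below are invented, not taken from the 1932 experiment or from any modern detector.

import random

# Toy model of a two-counter coincidence trigger. Each time window may contain
# a real charged particle (which fires both counters) and/or random noise in
# either counter. Compare triggering on one counter with requiring both.
# All probabilities are invented for illustration.

random.seed(1)
n_windows  = 1_000_000   # number of short time windows simulated
p_particle = 1e-4        # chance a real track crosses the chamber in a window
p_noise    = 1e-2        # chance each counter fires spuriously, independently

singles = coincidences = real_coincidences = 0
for _ in range(n_windows):
    track  = random.random() < p_particle
    top    = track or (random.random() < p_noise)
    bottom = track or (random.random() < p_noise)
    if top:
        singles += 1
    if top and bottom:
        coincidences += 1
        if track:
            real_coincidences += 1

print(f"top counter alone fired {singles} times (mostly noise)")
print(f"coincidence fired {coincidences} times, {real_coincidences} of them on real tracks")

With these made-up numbers, only about one single-counter trigger in a hundred corresponds to a real track, while roughly half of the coincidences do.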

Today, triggers combine millions of data channels in complex ways, but the main idea is the same. Events should be selected only if signals in adjacent detectors line up. The desired geometric patterns are encoded into microchips for fast, coarse decisions and then are computed in detail using a farm of computers that make slower decisions downstream.

The modern trigger filter resembles a pipeline: Microchips make tens of millions of decisions per second and then pass on hundreds of thousands of candidates per second to the computing farm. By comparison, Google's computing farm handles 40,000 search queries per second.
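For a rough sense of the rejection factors involved, here is simple bookkeeping with the order-of-magnitude stage rates quoted above; the final storage rate is an assumption, and real values vary between experiments and running periods. (The "one in a million collisions" figure quoted earlier counts individual proton-proton collisions, several of which occur in each trigger decision.)

# Rough bookkeeping for a two-stage trigger pipeline using the order-of-magnitude
# rates quoted in the text. The final storage rate is an assumed figure.

hardware_input_rate  = 40e6    # trigger decisions per second faced by the microchips
hardware_output_rate = 100e3   # candidates per second passed on to the computing farm
storage_rate         = 1e3     # events per second finally written to disk (assumed)

print(f"hardware stage keeps roughly 1 decision in {hardware_input_rate / hardware_output_rate:,.0f}")
print(f"software farm keeps roughly 1 candidate in {hardware_output_rate / storage_rate:,.0f}")
print(f"overall: about 1 in {hardware_input_rate / storage_rate:,.0f} decisions is recorded")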


A version of this article was published in Fermilab Today.

 

Like what you see? Sign up for a free subscription to symmetry!

by Jim Pivarski at July 02, 2015 01:00 PM

Peter Coles - In the Dark

Bad Statistics, Bad Science

I saw an interesting article in Nature the opening paragraph of which reads:

The past few years have seen a slew of announcements of major discoveries in particle astrophysics and cosmology. The list includes faster-than-light neutrinos; dark-matter particles producing γ-rays; X-rays scattering off nuclei underground; and even evidence in the cosmic microwave background for gravitational waves caused by the rapid inflation of the early Universe. Most of these turned out to be false alarms; and in my view, that is the probable fate of the rest.

The piece goes on to berate physicists for being too trigger-happy in claiming discoveries, the BICEP2 fiasco being a prime example. I agree that this is a problem, but it goes far beyond physics. In fact it’s endemic throughout science. A major cause of it is abuse of statistical reasoning.

Anyway, I thought I’d take the opportunity to re-iterate why statistics and statistical reasoning are so important to science. In fact, I think they lie at the very core of the scientific method, although I am still surprised how few practising scientists are comfortable with even basic statistical language. A more important problem is the popular impression that science is about facts and absolute truths. It isn’t. It’s a process. In order to advance it has to question itself. Getting this message wrong – whether by error or on purpose – is immensely dangerous.

Statistical reasoning also applies to many facets of everyday life, including business, commerce, transport, the media, and politics. Probability even plays a role in personal relationships, though mostly at a subconscious level. It is a feature of everyday life that science and technology are deeply embedded in every aspect of what we do each day. Science has given us greater levels of comfort, better health care, and a plethora of labour-saving devices. It has also given us unprecedented ability to destroy the environment and each other, whether through accident or design.

Civilized societies face rigorous challenges in this century. We must confront the threat of climate change and forthcoming energy crises. We must find better ways of resolving conflicts peacefully lest nuclear or conventional weapons lead us to global catastrophe. We must stop large-scale pollution or systematic destruction of the biosphere that nurtures us. And we must do all of these things without abandoning the many positive things that science has brought us. Abandoning science and rationality by retreating into religious or political fundamentalism would be a catastrophe for humanity.

Unfortunately, recent decades have seen a wholesale breakdown of trust between scientists and the public at large. This is due partly to the deliberate abuse of science for immoral purposes, and partly to the sheer carelessness with which various agencies have exploited scientific discoveries without proper evaluation of the risks involved. The abuse of statistical arguments has undoubtedly contributed to the suspicion with which many individuals view science.

There is an increasing alienation between scientists and the general public. Many fewer students enrol for courses in physics and chemistry than a few decades ago. Fewer graduates mean fewer qualified science teachers in schools. This is a vicious cycle that threatens our future. It must be broken.

The danger is that the decreasing level of understanding of science in society means that knowledge (as well as its consequent power) becomes concentrated in the minds of a few individuals. This could have dire consequences for the future of our democracy. Even as things stand now, very few Members of Parliament are scientifically literate. How can we expect to control the application of science when the necessary understanding rests with an unelected “priesthood” that is hardly understood by, or represented in, our democratic institutions?

Very few journalists or television producers know enough about science to report sensibly on the latest discoveries or controversies. As a result, important matters that the public needs to know about do not appear at all in the media, or if they do it is in such a garbled fashion that they do more harm than good.

Years ago I used to listen to radio interviews with scientists on the Today programme on BBC Radio 4. I even did such an interview once. It is a deeply frustrating experience. The scientist usually starts by explaining what the discovery is about in the way a scientist should, with careful statements of what is assumed, how the data is interpreted, and what other possible interpretations might be and the likely sources of error. The interviewer then loses patience and asks for a yes or no answer. The scientist tries to continue, but is badgered. Either the interview ends as a row, or the scientist ends up stating a grossly oversimplified version of the story.

Some scientists offer the oversimplified version at the outset, of course, and these are the ones that contribute to the image of scientists as priests. Such individuals often believe in their theories in exactly the same way that some people believe religiously. Not with the conditional and possibly temporary belief that characterizes the scientific method, but with the unquestioning fervour of an unthinking zealot. This approach may pay off for the individual in the short term, in popular esteem and media recognition – but when it goes wrong it is science as a whole that suffers. When a result that has been proclaimed certain is later shown to be false, the result is widespread disillusionment.

The worst example of this tendency that I can think of is the constant use of the phrase “Mind of God” by theoretical physicists to describe fundamental theories. This is not only meaningless but also damaging. As scientists we should know better than to use it. Our theories do not represent absolute truths: they are just the best we can do with the available data and the limited powers of the human mind. We believe in our theories, but only to the extent that we need to accept working hypotheses in order to make progress. Our approach is pragmatic rather than idealistic. We should be humble and avoid making extravagant claims that can’t be justified either theoretically or experimentally.

The more that people get used to the image of “scientist as priest” the more dissatisfied they are with real science. Most of the questions asked of scientists simply can’t be answered with “yes” or “no”. This leaves many with the impression that science is very vague and subjective. The public also tend to lose faith in science when it is unable to come up with quick answers. Science is a process, a way of looking at problems, not a list of ready-made answers to impossible problems. Of course it is sometimes vague, but I think it is vague in a rational way and that’s what makes it worthwhile. It is also the reason why science has led to so many objectively measurable advances in our understanding of the World.

I don’t have any easy answers to the question of how to cure this malaise, but I do have a few suggestions. It would be easy for a scientist such as myself to blame everything on the media and the education system, but in fact I think the responsibility lies mainly with ourselves. We are usually so obsessed with our own research, and with the need to publish specialist papers by the lorry-load in order to advance our own careers, that we spend very little time explaining what we do to the public, or why.

I think every working scientist in the country should be required to spend at least 10% of their time working in schools or with the general media on “outreach”, including writing blogs like this. People in my field – astronomers and cosmologists – do this quite a lot, but these are areas where the public has some empathy with what we do. If only biologists, chemists, nuclear physicists and the rest were viewed in such a friendly light. Doing this sort of thing is not easy, especially when it comes to saying something on the radio that the interviewer does not want to hear. Media training for scientists has been a welcome recent innovation for some branches of science, but most of my colleagues have never had any help at all in this direction.

The second thing that must be done is to improve the dire state of science education in schools. Over the last two decades the national curriculum for British schools has been dumbed down to the point of absurdity. Pupils that leave school at 18 having taken “Advanced Level” physics do so with no useful knowledge of physics at all, even if they have obtained the highest grade. I do not at all blame the students for this; they can only do what they are asked to do. It’s all the fault of the educationalists, who have done their best for a long time to convince our young people that science is too hard for them. Science can be difficult, of course, and not everyone will be able to make a career out of it. But that doesn’t mean that it should not be taught properly to those that can take it in. If some students find it is not for them, then so be it. I always wanted to be a musician, but never had the talent for it.

I realise I must sound very gloomy about this, but I do think there are good prospects that the gap between science and society may gradually be healed. The fact that the public distrust scientists leads many of them to question us, which is a very good thing. They should question us and we should be prepared to answer them. If they ask us why, we should be prepared to give reasons. If enough scientists engage in this process then what will emerge is an understanding of the enduring value of science. I don’t just mean through the DVD players and computer games science has given us, but through its cultural impact. It is part of human nature to question our place in the Universe, so science is part of what we are. It gives us purpose. But it also shows us a way of living our lives. Except for a few individuals, the scientific community is tolerant, open, internationally-minded, and imbued with a philosophy of cooperation. It values reason and looks to the future rather than the past. Like anyone else, scientists will always make mistakes, but we can always learn from them. The logic of science may not be infallible, but it’s probably the best logic there is in a world so filled with uncertainty.

 

 


by telescoper at July 02, 2015 11:47 AM

Jester - Resonaances

On the LHC diboson excess
The ATLAS diboson resonance search showing a 3.4 sigma excess near 2 TeV has stirred some interest. This is understandable: 3 sigma does not grow on trees, and moreover CMS also reported anomalies in related analyses. Therefore it is worth looking at these searches in a bit more detail in order to gauge how excited we should be.

The ATLAS one is actually a dijet search: it focuses on events with two very energetic jets of hadrons. More often than not, W and Z bosons decay to quarks. When a TeV-scale resonance decays to electroweak bosons, the latter, by energy conservation, have to move with large velocities. As a consequence, the 2 quarks from W or Z boson decays will be very collimated and will be seen as a single jet in the detector. Therefore, ATLAS looks for dijet events where 1) the mass of each jet is close to that of the W (80±13 GeV) or Z (91±13 GeV), and 2) the invariant mass of the dijet pair is above 1 TeV. Furthermore, they look into the substructure of the jets, so as to identify the ones that look consistent with W or Z decays. After all this work, most of the events still originate from ordinary QCD production of quarks and gluons, which gives a smooth background falling with the dijet invariant mass. If LHC collisions lead to the production of a new particle that decays to WW, WZ, or ZZ final states, it should show up as a bump on top of the QCD background. This is what ATLAS observes:

There is a bump near 2 TeV, which  could indicate the existence of a particle decaying to WW and/or WZ and/or ZZ. One important thing to be aware of is that this search cannot distinguish well between the above 3  diboson states. The difference between W and Z masses is only 10 GeV, and the jet mass windows used in the search for W and Z  partly overlap. In fact, 20% of the events fall into all 3 diboson categories.   For all we know, the excess could be in just one final state, say WZ, and simply feed into the other two due to the overlapping selection criteria.

Given the number of searches that ATLAS and CMS have made, 3 sigma fluctuations of the background should happen a few times in the LHC run-1 just by sheer chance. The interest in the ATLAS excess is however amplified by the fact that diboson searches in CMS also show anomalies (albeit smaller) just below 2 TeV. This can be clearly seen on this plot with limits on the Randall-Sundrum graviton excitation, which is one particular model leading to diboson resonances. As W and Z bosons sometimes decay to, respectively, one and two charged leptons, diboson resonances can be searched for not only via dijets but also in final states with one or two leptons. One can see that, in CMS, the ZZ dilepton search (blue line), the WW/ZZ dijet search (green line), and the WW/WZ one-lepton search (red line) all report a small (between 1 and 2 sigma) excess around 1.8 TeV. To make things even more interesting, the CMS search for WH resonances returns 3 events clustering at 1.8 TeV where the standard model background is very small (see Tommaso's post). Could the ATLAS and CMS events be due to the same exotic physics?
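The first point above, that a few local 3 sigma fluctuations per run are expected once many searches are performed, is easy to quantify with a toy trials-factor estimate; the number of effectively independent searches is of course a guess:

from math import erf, sqrt

# How surprising is one local ~3 sigma excess, given that the experiments have
# performed many independent searches? Treat each search as a single trial
# with a Gaussian test statistic (a crude approximation) and compute the chance
# that at least one of them fluctuates up by 3 sigma or more.
# The numbers of independent searches below are guesses for illustration.

def p_one_sided(n_sigma):
    """One-sided tail probability of an n-sigma Gaussian fluctuation."""
    return 0.5 * (1.0 - erf(n_sigma / sqrt(2.0)))

p_local = p_one_sided(3.0)   # about 1.3e-3 per search
for n_searches in (50, 200, 1000):
    p_global = 1.0 - (1.0 - p_local) ** n_searches
    print(f"{n_searches:5d} searches: P(at least one 3 sigma excess) = {p_global:.2f}")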

Unfortunately, building a model explaining all the diboson data is not easy. Suffice it to say that the ATLAS excess has been out for a week and there isn't yet any serious ambulance-chasing paper on arXiv. One challenge is the event rate. To fit the excess, the resonance should be produced with a cross section of order 10 femtobarns. This requires the new particle to couple quite strongly to light quarks (or gluons), at least as strongly as the W and Z bosons. At the same time, it should remain a narrow resonance decaying dominantly to dibosons. Furthermore, in concrete models, a sizable coupling to electroweak gauge bosons will get you in trouble with electroweak precision tests.

However, there is an even bigger problem, which can also be seen in the plot above. Although the excesses in CMS occur roughly at the same mass, they are not compatible when it comes to the cross section. And so the limits in the single-lepton search are not consistent with the new-particle interpretation of the excess in the dijet and dilepton searches, at least in the context of the Randall-Sundrum graviton model. Moreover, the limits from the CMS one-lepton search are grossly inconsistent with the diboson interpretation of the ATLAS excess! In order to believe that the ATLAS 3 sigma excess is real one has to move to much more baroque models. One possibility is that the dijets observed by ATLAS do not originate from electroweak bosons, but rather from an exotic particle with a similar mass. Another possibility is that the resonance decays only to a pair of Z bosons and not to W bosons, in which case the CMS limits are weaker; but I'm not sure if there exist consistent models with this property.

My conclusion...  For sure this is something to observe in the early run-2. If this is real, it should clearly show in both experiments already this year.  However, due to the inconsistencies between different search channels and the theoretical challenges, there's little reason to get excited yet.

Thanks to Chris for digging out the CMS plot.

by Jester (noreply@blogger.com) at July 02, 2015 09:47 AM

Lubos Motl - string vacua and pheno

The Hindu: an interview with Ed Witten
A big portion of the world's string theorists gathered in Bengalúru, India last week. The local newspapers have published a couple of stories – e.g. about Ashoke Sen etc. One fresh interview in The Hindu is titled
‘Supersymmetry may show up at the new run of LHC’
Šubašrý Desikan has talked to Edward Witten who was introduced as the "world's only physicist who has won the Fields Medal".

Much like in most interviews since 2006 or so, the first question was a deeply unoriginal one about the empirical character of string theory. Witten answered that physicists are interested in string theory because of its elegance and especially because it seems to be the only way to reconcile the two pillars of the 20th century physics, quantum mechanics and general relativity.




When asked about the experimental tests in a foreseeable future, Witten said that he finds the discovery of supersymmetry at the 13 TeV LHC run likely. Witten also discussed some observable implications of string theory for cosmology and singled out Juan Maldacena's inflation talk (video, PDF) at the conference which is based on Juan's and Nima's recent joint paper. In Witten's opinion, it may take decades to get the required technology etc.




In the wake of a question about the 100th anniversary of Einstein's general relativity, the famous physicist reminded the readers of the experimental implications of GR: all of modern (Big Bang etc.) cosmology, modified predictions for the motion of objects in the Solar System, gravitational lensing, gravitational waves, and black holes.

The journalist suggests that mathematics has ultimately nothing to do with reality. Witten says "It is true that mathematics is the language in which physical laws are formulated." For a decade, the last line of my e-mails said "Superstring theory is the language in which God wrote the world." Up to his poetic deficit, Witten said the same thing. Since Newton's times (he needed to invent calculus), doing physics was inseparable from understanding old and new things in mathematics.

Witten sees string theory both as a theory of elementary particles as well as a theory that has much to say to other branches of physics and mathematics. The very composition of the question suggests that the journalist is a reader of some anti-string websites which is unfortunate.

He was also asked about the world news and activism. Witten feels strongly about many things but he has personally and semi-professionally worked on the Israeli-Palestinian peace project – although I would doubt that he has been defending the best strategy to achieve a sustainable peace. At the end, the readers – and numerous curious and potentially very important students waiting in India – are told that the horizons of physics are at least as wide as ever before.

by Luboš Motl (noreply@blogger.com) at July 02, 2015 06:26 AM

Emily Lakdawalla - The Planetary Society Blog

Mars Exploration Rovers Update: Opportunity Phones Home after Conjunction Healthy, Ready to Rove
After three weeks of being in a communications blackout on the other side of the Sun during the Earth-Mars solar conjunction, Opportunity phoned home, reporting that she is healthy and ready to continue her mission.

July 02, 2015 05:12 AM

July 01, 2015

Peter Coles - In the Dark

In the Heat of the Night

It seems appropriate to post this, since today has been the hottest day since the last day on which temperatures were at the same level as today. It’s the opening titles of one of my favourite films, In the Heat of the Night, with music provided by the late great Ray Charles. If you haven’t seen the film then you should. It’s part murder mystery part social commentary and it won 5 Oscars, including Best Picture and Best Actor for Rod Steiger’s brilliant portrayal of Police Chief Bill Gillespie.


by telescoper at July 01, 2015 08:42 PM

astrobites - astro-ph reader's digest

Extreme Makeover: Exoplanet Edition
  • Title: Tidally-Driven Roche-Lobe Overflow of Hot Jupiters with MESA
  • Authors: Francesca Valsecchi, Saul Rappaport, Frederic A. Rasio, Pablo Marchant, & Leslie A. Rogers
  • First Author’s Institution: Center for Interdisciplinary Exploration and Research in Astrophysics (CIERA) and Northwestern University
  • Paper Status: Submitted to Astrophysical Journal

 

Twenty years ago, there was much less debate over how planetary systems formed and evolved. Our Solar System seemed to be a natural occurrence that would likely be reproduced around other stars. However, these theories were sculpted from a sample size of one, and the avalanche of exoplanet discoveries over the past two decades has revealed that our home may be an oddball in the menagerie of planetary systems in the Milky Way. Two of the many surprises that planet-hunting telescopes such as Kepler have unveiled are giant Jupiter-sized planets residing on the doorstep of their host stars (hot Jupiters), and an abundance of planets that are between the sizes of Earth and Neptune (super-Earths/mini-Neptunes), many of which are also in the close vicinity of their star.

The mechanism that causes some hot Jupiters to be in short-period orbits of just a few days is still up for debate. Front-running theories are inward migration through the protoplanetary disk and tidal circularization of an orbit that was made highly eccentric through a gravitational interaction. Regardless of what brought the hot Jupiters so close to their stars, observations tell us that the planets are there, and physics tells us that they won’t stay there. Tidal dissipation will cause these tight-orbit gas giants to lose orbital energy by transferring angular momentum to the star’s spin, causing the planets to creep closer and closer to their host star. Eventually, they could get close enough for their Roche lobe to overflow, allowing the star to win the gravitational tug-of-war and start stealing gas from the planet’s atmosphere. Today’s paper investigates the evolution of hot Jupiters through this dynamic evolutionary phase, and examines whether star-planet interactions could transform these hot Jupiters into super-Earths.


Figure 1. Planetary mass vs orbital period, in units of Earth masses and days. These simulations used conservative mass transfer, meaning all the mass that left the planet was accreted by the star. Open orange circles represent the locations of confirmed exoplanetary systems hosting only one planet. Colored lines are evolutionary models for different core masses, with evolution proceeding downward. “Xs” mark the end/start of RLO, and “squares” mark the times when the mass of the envelope drops below 20%, 10%, and 1% of the total mass. Colored open circles along the evolutionary tracks mark 1-billion-year intervals. Figure 3 from the paper.

Figure 1 shows the evolution of hot Jupiters using the stellar binary evolution code MESA. The orange circles represent the locations of confirmed exoplanetary systems, and the lines represent the evolutionary tracks of simulated hot Jupiters. All of the simulated planets have the mass of Jupiter, orbit stars with the mass of the Sun, and began evolving at the same distance from the host star and at the same time in the system’s history. The authors varied the mass of the planet’s solid core (between 1-30 Earth masses) and whether the mass transfer was conservative or non-conservative (i.e. whether all or just a fraction of the mass that is stripped from the planet was accreted by the star). Irradiation and subsequent photoevaporation of the planet’s gas were also taken into account to reproduce realistic atmospheric loss. Simulated planets evolve from the top to the bottom of the plot, with colored circles along the lines of evolution representing 1-billion-year intervals.

As a concrete example of what’s happening, let’s take a look at the planet with a 10 Earth-mass core using conservative mass transfer (red dashed line in Figure 1). First, the orbit shrinks for about 100 million years as tides transfer angular momentum from the orbit to the star’s spin (left along the short horizontal line). Upon reaching a period of about half a day (a semi-major axis of about 0.01 AU), the planet’s Roche lobe overflows, and its orbit begins to expand due to the mass loss. Though it moves further away from the star, the planet has less of a grasp on its atmosphere as it loses mass, and continues to lose matter to the star. When it reaches an orbital period of about 1.2 days, the Roche-lobe overflow (RLO) phase stops (the first X). Photoevaporation continues to eat away at the atmosphere for the next billion or so years, dropping the fraction of atmosphere down to 20% and 10% of its original value (the big and medium squares, respectively), and tidal decay once again brings the planet closer to its star. RLO then ensues for a second time (the second X), but tidal decay now overpowers it, since there is not much mass left to strip and push the orbit outward. The orbit shrinks and the fraction of the planet’s remaining atmosphere drops to 1% (the final small square), leaving behind a rocky core with a tenuous atmosphere about 10 times the mass of the Earth…a super-Earth! Though this particular planet may be doomed to end up inside its star, others (see the green evolutionary track) can leave the super-Earth far enough away from its star for tidal decay to be a negligible effect.
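A quick back-of-envelope check (not the paper’s MESA calculation) shows where this overflow should begin. It uses Eggleton’s (1983) approximation for the Roche-lobe radius and assumes a somewhat inflated hot Jupiter of 1.3 Jupiter radii around a Sun-like star; that planetary radius is an assumption chosen for illustration.

import numpy as np

# Back-of-envelope estimate of where Roche-lobe overflow (RLO) starts for a
# Jupiter-mass planet around a Sun-like star. Eggleton's (1983) fit gives the
# Roche-lobe radius R_L as a fraction of the orbital separation a for a mass
# ratio q = M_planet / M_star. The planet radius of 1.3 R_Jupiter is an
# assumption (hot Jupiters tend to be inflated), not a value from the paper.

G     = 6.674e-8    # cm^3 g^-1 s^-2
M_sun = 1.989e33    # g
M_jup = 1.898e30    # g
R_jup = 7.149e9     # cm
AU    = 1.496e13    # cm

def roche_lobe_fraction(q):
    """R_L / a from Eggleton (1983)."""
    return 0.49 * q**(2/3) / (0.6 * q**(2/3) + np.log(1 + q**(1/3)))

q        = M_jup / M_sun
R_planet = 1.3 * R_jup                                # assumed inflated radius
a_RLO    = R_planet / roche_lobe_fraction(q)          # overflow when R_L = R_planet
P_RLO    = 2 * np.pi * np.sqrt(a_RLO**3 / (G * (M_sun + M_jup)))

print(f"RLO sets in near a = {a_RLO / AU:.3f} AU, orbital period ~ {P_RLO / 86400:.2f} days")

With these inputs, overflow starts near 0.013 AU and a period of roughly half a day, consistent with where the simulated tracks turn around.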


Figure 2. The upper panel shows the probability densities of orbital periods for single-planet Kepler candidates (grey) and multi-planet Kepler candidates (outline) with sizes less than 5 Earth radii. The lower panel shows the distribution of planet sizes and orbital periods for single-planet Kepler candidates, with the hot Jupiter and hot super-Earth populations visible as peaks near 10 Earth radii and 1 Earth radius, respectively. Figure 1 from Valsecchi et al. 2014.

This evolutionary pathway is more than just a neat trick one can play using a stellar binary evolution simulation – this process may actually be taking place in planetary systems in the Milky Way! First of all, in Figure 1 we can see that there are many observed hot Jupiters (orange circles in the upper-right of the plot) near the orbital period at which these simulations started, the closest of which should already be experiencing significant orbital decay through tides. We also see a good deal of hot super-Earths (scattered orange circles at the bottom of the plot) that fall near the simulated evolutionary trajectories. Second, most observed super-Earths consist of a rocky or icy core surrounded by a hydrogen/helium envelope that makes up a small fraction of the planet’s total mass, similar to the end products of these simulations. Another kicker is shown in Figure 2, which is from an earlier paper by the lead author and collaborators. The top histogram shows the orbital periods of single-planet Kepler candidates (grey) and multiple-planet Kepler candidates (outlined) with radii less than 5 Earth radii (i.e. not hot Jupiters). It is unlikely that these populations are drawn from the same parent distribution, because of the excess of single-planet systems sitting around 1-day orbits. The bottom contour map shows the distribution of planet sizes and orbital periods for single-planet Kepler candidates, which is clearly bimodal, with the populations of hot Jupiters and hot super-Earths represented by the peaks at 10 and 1 Earth radii, respectively. These pieces of evidence, along with the simulations of today’s paper, may indicate that hot Jupiters and hot super-Earths are more intimately connected than previously thought – the same planet caught at different times in its evolutionary history. The only thing missing is an observation of a hot Jupiter experiencing RLO, which we will hopefully find as we continue the search for more bizarre worlds in our Milky Way.

 

 

 

Disclaimer:  I am a student in CIERA/Northwestern University, where two of the authors on this paper are from. However, I did not play any role in this research. I just think it’s cool!

Also, check out this astrobite for another article of planetary metamorphoses!

by Michael Zevin at July 01, 2015 06:00 PM

ZapperZ - Physics and Physicists

100 Years Of General Theory of Relativity
This is a nice Nature Physics article summarizing the history of the General Theory of Relativity, especially on the historical verification of Einstein's idea.

If you have access to Nature Physics articles, you might also want to read the link in this paragraph:

Not everyone embraced the theory, though: in a Commentary on page 518 Milena Wazeck discusses the anti-relativist movement of the 1920s and uncovers an international network of opponents. Without any attempt at engaging in scientific argumentation, the refuters considered themselves “the last defenders of true physics”. Wazeck sees parallels with adversaries of Darwinism or anthropogenic climate change.

I suppose I shouldn't be surprised, but I continue to be amazed that human beings have such short memories, and how we continue to repeat the same things and make the same mistakes that have been made before.

Zz.

by ZapperZ (noreply@blogger.com) at July 01, 2015 03:42 PM

Lubos Motl - string vacua and pheno

Memories, asymptotic symmetries, and soft theorems
Last Monday, the Strings 2015 annual conference started in Bengalúru, India. Now it's over. With three exceptions, the written documents used by the speakers are posted on the page with talk titles and videos. Unfortunately, most of the videos have still not been posted; the last released ones were added 4 days ago.

(Update July 1st: thank God, the videos are available.)

There have been numerous interesting talks at the conference. Some of them are nice reviews. In order to focus on talks with a truly new original content that is sufficiently conceptual to be appropriate for a semitechnical blog, let me pick Andy Strominger's talk (PDF), not only because Andy celebrates his 60th birthday in a month.




Like other Strominger PDF files, you find lots of cute childish drawings inside, all of which could have been contributed by one of his daughters.




Andy was reporting on the content of 12 papers he co-wrote since August 2013 along with 9 collaborators, with contributions from 46+ other physicists he enumerates at the beginning.



On Wednesday after the conference week, the video was finally posted, thank God.

Without a loss of generality, let us use the term Strominger-Pasterski et al. for this group of physicists. Sabrina Gonzalez Pasterski (physicsgirl.com) is a teenage pilot and one of 30 under 30 who has already attended the Lindau meeting of Nobel prize winners. ;-)



I am sure that if Tim Hunt posted this Sabrina 1988 music video about her collaborators and the distractions, he would be in hot water. ;-)

Most TRF readers are mature white men above 80 years of age. But those TRF readers who are teenagers and who want to work with Andy Strominger should know that you must first learn how to pilot an aircraft and build your aircraft before you are 17. Once you do that, you must meet the founder of Amazon.com who convinces you to do physics instead. (I am sure that once I describe her in this way, Andy Strominger feels even prouder that he could have collaborated with her.)

Strominger's slogan summarizing the 12 recent papers is a triangle linking all 3 pairs of three previously independent concepts:
  1. gravitational memory
  2. soft theorem
  3. asymptotic symmetry
Black holes carry lots of entropy or information which we often imagine as something that resides in the ultrashort distance regime of quantum gravity. But Strominger-Pasterski et al. want to reimagine all these things from the viewpoint of very long distance physics, i.e. from the deeply infrared perspective.

You make operations at very long distances and the gravitational systems remember what you do; we are talking about operations involving the gravitational field's influence on rotating satellites, something that NASA and pals are actually going to test soon. OK, that's my very cheap sketch of the memory part.

The soft theorem is Weinberg's 1965 observation that the addition of a very soft graviton (one with a zero momentum) to an S-matrix scattering process only changes the S-matrix amplitude in an extremely simple way.
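Schematically (conventions and normalisations differ between references, so take this only as the general shape of the statement), the leading soft factor reads:

% Leading soft-graviton theorem, schematically: attaching a graviton of
% momentum q and polarisation \varepsilon to an n-particle amplitude gives,
% in the limit of vanishing graviton momentum,
\mathcal{M}_{n+1}(p_1,\dots,p_n;\,q,\varepsilon)
  \;\simeq\;
  \frac{\kappa}{2}\,
  \sum_{k=1}^{n}
  \frac{\varepsilon_{\mu\nu}\, p_k^{\mu}\, p_k^{\nu}}{p_k\cdot q}\;
  \mathcal{M}_{n}(p_1,\dots,p_n)
  \qquad (q \to 0).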

This law is a universal principle and similar principles may often be explained by symmetries. So Strominger-Pasterski et al. identify the symmetries – some new asymptotic symmetries. They are basically translations ("supertranslations" but not in the supersymmetric sense) where the displacement depends on the angle – or new Virasoro symmetries acting on the sphere at infinity. If you want just some extra details, the first newly found symmetries of the gravitational scattering arise as a diagonal subgroup of BMS+ x BMS, the pair of the asymptotic scri-plus-and-minus BMS supertranslation groups. This new symmetry is infinite-dimensional; the corresponding conservation laws are pretty much separate energy-like conservation laws for each direction in the two-sphere at null infinity.

If we return to the triangle, it works just like rock-paper-scissors. The new asymptotic symmetries explain the soft theorem, as a special case of the Ward theorem; the soft theorem produces memory through the Fourier transform; and memory implies asymptotic symmetries if you consider vacuum-vacuum translations.

I am afraid that you will have to read at least some of those 12 papers (or at least look at the talk) to understand what Strominger-Pasterski et al. actually mean. But the overall message is that there are richer insights and structures hiding (and partially waiting to be understood) in the deeply infrared (long distance) behavior of the quantum gravitational theories. Three more specific conclusions described in Andy's talk:
  1. black holes carry an infinite amount of BMS hair (Burg, Metzner, Sachs 1962 supertranslations i.e. certain diffeomorphisms defined through their simple action on radiative data at scri-plus)
  2. the Hawking radiation is constrained by an infinite tower of related conservation laws
  3. the zero-energy vacuum boasts an infinite degeneracy i.e. it can store an infinite amount of information
The mysteries of quantum gravity don't necessarily hide in the ultrashort phenomena that may only be seen with the impossible Planckian eyeglasses. Using some better tools to look at them, a big part of the information about those mysteries may be projected at infinitely long distances. The temperature of the Hawking radiation isn't the only "parameter" that describes it; even if you only allow long-distance apparatuses, lots of additional hair and information is encoded in the charges under those supertranslations etc.

This summary was distilled from my incomplete technical understanding of what all those 12 papers are doing, but my guess is that Strominger would largely agree, anyway.

by Luboš Motl (noreply@blogger.com) at July 01, 2015 02:56 PM

Symmetrybreaking - Fermilab/SLAC

Higgs factory proposed for Beijing

Scientists in China hope to build a successor to the Large Hadron Collider—and take a new place on the international particle physics stage.

The discovery of the Higgs boson was one of the biggest moments in the history of particle physics. It was also the discovery of one of the strangest particles ever revealed—an elementary, point-like particle with no spin, no electromagnetic charge and the ability to interact with itself.

“In this situation, you just have to put this brand new weird particle under as powerful a microscope as you can,” says Nima Arkani-Hamed, a theoretical physicist at the Institute for Advanced Study in Princeton.

Physicists in China are hoping to build that powerful microscope in the form of an electron-positron collider in a ring up to 100 kilometers long just outside Beijing.

Arkani-Hamed is the first director of Beijing’s new Center for Future High Energy Physics, tasked with investigating the physics capabilities of such a machine and getting physicists around the world on board with the project.

The Large Hadron Collider, where the Higgs was discovered, produces the particle in proton-proton collisions. Experiments at the LHC will continue to study the Higgs over the next decades. But scientists around the world—including the group in China—are also planning ahead for ways to get an even closer look at the bizarre particle.

If constructed in China, the proposed Higgs-factory collider would be the biggest particle physics project ever undertaken there. “It is putting a stamp on the country’s arrival on the international stage in some sense,” says Charlie Young, a physicist at SLAC. “It’s the science, but it’s also more than just the science.”

The hope is that creating a million Higgs will reveal tiny deviations from theoretical predictions about the particle’s nature, which would open up new paths to exploring science beyond the Standard Model of particle physics.

The idea for a circular Higgs factory in China first came from Yifang Wang (pictured above), the director of the Institute of High Energy Physics in Beijing, who proposed it at a September 2012 meeting on the future of particle physics in China.

Rendering of the proposed CEPC

Courtesy of: IHEP

IHEP has several successful particle physics experiments already, including the Daya Bay Neutrino Experiment, which studies neutrino oscillation. It’s also home to the Beijing Electron Positron Collider, a comparatively tiny 200-meter version of the giant collider Wang hopes to build in the next decade.

“The Circular Electron Positron Collider is actually a very natural continuation of our effort in last 30 years,” Wang says. “Without this new Higgs factory, we probably have to think about something else.”

In a proton-proton collider like the LHC, collisions that produce a Higgs also produce dozens of other particles, making it difficult to tease out which collision produced a Higgs. That’s because protons are made of quarks and gluons. But electron-positron collisions are much clearer, producing only a Higgs plus the well-known Z boson. That will allow physicists to study exotic decays.

China is not alone in wanting to host a Higgs factory. The proposed International Linear Collider could be tuned to produce Higgs particles while exploring other new physics at higher energies. The idea of a circular Higgs factory actually first surfaced in 2011 as a possible reuse of the LHC tunnel at CERN.

As part of the European long-range plan, CERN has established an international collaboration to develop a design for a Future Circular Collider in a new tunnel that would accelerate protons around the town of Geneva. The FCC could go through an intermediate phase as an electron-positron collider first.

China’s government is set to award five-year funding to several national-scale projects by the end of this year, and Wang is hoping that a large investment in R&D to advance the case for the CEPC will be one of the winners.

“If they say yes to the development funding, I bet they’re going to go for the whole thing,” Arkani-Hamed says. “That would be an earthquake. That would be an enormous thing.”

Winning the funding this year could allow construction to begin as early as 2020, with the first data starting to come in by the early 2030s.

 

Like what you see? Sign up for a free subscription to symmetry!

by Laura Dattaro at July 01, 2015 01:00 PM

astrobites - astro-ph reader's digest

Write for Astrobites in Spanish!

We are looking for enthusiastic students to join the “Astrobites en Español” team.

Requirements: Preferably master's or PhD students in physics or astronomy, fluent in Spanish and English. We ask you to submit:

  • One “astrobito” with original content in Spanish (for example, something like this). You should choose a paper that appeared on astro-ph in the last three months and summarise it at a level appropriate for undergraduate students. We ask that it not be in your specific area of expertise, and we allow a maximum of 1000 words.
  • A brief (200 word maximum) note, also in Spanish, where you explain your motivation to write for Astrobitos.

Commitment: We will ask you to write a post about once per month, and to edit with a similar frequency. You would also have the opportunity to represent Astrobitos at conferences. Our authors dedicate a couple of hours a month to developing material for Astrobitos.

(There is no monetary compensation for writing for Astrobitos. Our work is ad honorem.)

If you are interested, please send the material to write4astrobites@gmail.com with the subject “Material para Astrobitos”. The deadline is July 31st, 2015, but we will keep the call open until we fill 5 positions. We will let you know whether your application was successful at the end of August. Thanks!

by Elisa Chisari at July 01, 2015 08:14 AM

John Baez - Azimuth

Trends in Reaction Network Theory (Part 2)

Here in Copenhagen we’ll soon be having a bunch of interesting talks on chemical reaction networks:

Workshop on Mathematical Trends in Reaction Network Theory, 1-3 July 2015, Department of Mathematical Sciences, University of Copenhagen. Organized by Elisenda Feliu and Carsten Wiuf.

Looking through the abstracts, here are a couple that strike me.

First of all, Gheorghe Craciun claims to have proved the biggest open conjecture in this field: the Global Attractor Conjecture!

• Gheorghe Craciun, Toric differential inclusions and a proof of the global attractor conjecture.

This famous old conjecture says that for a certain class of chemical reactions, the ones coming from ‘complex balanced reaction networks’, the chemicals will approach equilibrium no matter what their initial concentrations are. Here’s what Craciun says:

Abstract. In a groundbreaking 1972 paper Fritz Horn and Roy Jackson showed that a complex balanced mass-action system must have a unique locally stable equilibrium within any compatibility class. In 1974 Horn conjectured that this equilibrium is a global attractor, i.e., all solutions in the same compatibility class must converge to this equilibrium. Later, this claim was called the Global Attractor Conjecture, and it was shown that it has remarkable implications for the dynamics of large classes of polynomial and power-law dynamical systems, even if they are not derived from mass-action kinetics. Several special cases of this conjecture have been proved during the last decade. We describe a proof of the conjecture in full generality. In particular, it will follow that all detailed balanced mass action systems and all deficiency zero mass-action systems have the global attractor property. We will also discuss some implications for biochemical mechanisms that implement noise filtering and cellular homeostasis.

Manoj Gopalkrishnan wrote a great post explaining the concept of complex balanced reaction network here on Azimuth, so if you want to understand the conjecture you could start there.
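To see what the conjecture asserts in the simplest possible setting, here is a minimal numerical sketch (in no way related to Craciun's proof): the cycle A → B → C → A with mass-action kinetics is a tiny complex balanced network, and trajectories starting from different concentrations in the same compatibility class (same total amount) all relax to the same equilibrium. Rate constants and initial conditions are arbitrary choices.

import numpy as np

# Mass-action kinetics for the cyclic network A -> B -> C -> A, a simple
# complex balanced example. Different initial conditions with the same total
# concentration (the same stoichiometric compatibility class) should all
# converge to the same equilibrium, as the Global Attractor Conjecture asserts.

k1, k2, k3 = 1.0, 2.0, 0.5   # arbitrary rate constants

def rhs(x):
    a, b, c = x
    return np.array([k3*c - k1*a,
                     k1*a - k2*b,
                     k2*b - k3*c])

def evolve(x0, dt=1e-3, steps=20000):
    x = np.array(x0, dtype=float)
    for _ in range(steps):      # forward Euler is adequate for this small system
        x = x + dt * rhs(x)
    return x

for x0 in ([3.0, 0.0, 0.0], [0.0, 3.0, 0.0], [1.0, 1.0, 1.0]):   # total = 3 each time
    print(x0, "->", np.round(evolve(x0), 4))

All three runs end up at the same point (roughly (0.857, 0.429, 1.714) with these rate constants), which is the behaviour the conjecture claims for the whole class of complex balanced networks.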

Even better, Manoj is talking here about a way to do statistical inference with chemistry! His talk is called ‘Statistical inference with a chemical soup':

Abstract. The goal is to design an “intelligent chemical soup” that can do statistical inference. This may have niche technological applications in medicine and biological research, as well as provide fundamental insight into the workings of biochemical reaction pathways. As a first step towards our goal, we describe a scheme that exploits the remarkable mathematical similarity between log-linear models in statistics and chemical reaction networks. We present a simple scheme that encodes the information in a log-linear model as a chemical reaction network. Observed data is encoded as initial concentrations, and the equilibria of the corresponding mass-action system yield the maximum likelihood estimators. The simplicity of our scheme suggests that molecular environments, especially within cells, may be particularly well suited to performing statistical computations.

It’s based on this paper:

• Manoj Gopalkrishnan, A scheme for molecular computation of maximum likelihood estimators for log-linear models.

I’m not sure, but this idea may exploit existing analogies between the approach to equilibrium in chemistry, the approach to equilibrium in evolutionary game theory, and statistical inference. You may have read Marc Harper’s post about that stuff!

David Doty is giving a broader review of ‘Computation by (not about) chemistry':

Abstract. The model of chemical reaction networks (CRNs) is extensively used throughout the natural sciences as a descriptive language for existing chemicals. If we instead think of CRNs as a programming language for describing artificially engineered chemicals, what sorts of computations are possible for these chemicals to achieve? The answer depends crucially on several formal choices:

1) Do we treat matter as infinitely divisible (real-valued concentrations) or atomic (integer-valued counts)?

2) How do we represent the input and output of the computation (e.g., Boolean presence or absence of species, positive numbers directly represented by counts/concentrations, positive and negative numbers represented indirectly by the difference between counts/concentrations of a pair of species)?

3) Do we assume mass-action rate laws (reaction rates proportional to reactant counts/concentrations) or do we insist the system works correctly under a broader class of rate laws?

The talk will survey several recent results and techniques. A primary goal of the talk is to convey the “programming perspective”: rather than asking “What does chemistry do?”, we want to understand “What could chemistry do?” as well as “What can chemistry provably not do?”

I’m really interested in chemical reaction networks that appear in biological systems, and there will be lots of talks about that. For example, Ovidiu Radulescu will talk about ‘Taming the complexity of biochemical networks through model reduction and tropical geometry’. Model reduction is the process of simplifying complicated models while preserving at least some of their good features. Tropical geometry is a cool version of algebraic geometry that uses the real numbers with minimization as addition and addition as multiplication. This number system underlies the principle of least action, or the principle of maximum energy. Here is Radulescu’s abstract:

Abstract. Biochemical networks are used as models of cellular physiology with diverse applications in biology and medicine. In the absence of objective criteria to detect essential features and prune secondary details, networks generated from data are too big and therefore beyond the reach of many mathematical tools for studying their dynamics and behavior under perturbations. However, under circumstances that we can generically denote by multi-scaleness, large biochemical networks can be approximated by smaller and simpler networks. Model reduction is a way to find these simpler models that can be more easily analyzed. We discuss several model reduction methods for biochemical networks with polynomial or rational rate functions and propose as their common denominator the notion of tropical equilibration, meaning finite intersection of tropical varieties in algebraic geometry. Using tropical methods, one can strongly reduce the number of variables and parameters of a biochemical network. For multi-scale networks, these reductions are computed symbolically on orders of magnitude of parameters and variables, and are valid in wide domains of parameter and phase spaces.
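
In case the tropical dictionary is unfamiliar, here it is in one line (a standard definition, not specific to this talk): tropical addition and multiplication of real numbers are x ⊕ y = min(x, y) and x ⊗ y = x + y, so a polynomial such as x² + xy + y² tropicalizes to min(2x, x + y, 2y), and the role of its zero set is played by the locus where the minimum is attained at least twice. The "tropical equilibration" of the abstract asks, roughly, for such ties among the dominant terms of the rate equations.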

I’m talking about the analogy between probabilities and quantum amplitudes, and how this makes chemistry analogous to particle physics. You can see two versions of my talk here, but I’ll be giving the ‘more advanced’ version, which is new:

Probabilities versus amplitudes.

Abstract. Some ideas from quantum theory are just beginning to percolate back to classical probability theory. For example, the master equation for a chemical reaction network describes the interactions of molecules in a stochastic rather than quantum way. If we look at it from the perspective of quantum theory, this formalism turns out to involve creation and annihilation operators, coherent states and other well-known ideas—but with a few big differences.
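
To give a flavor of what that means, here is the simplest example in this formalism (a textbook illustration, not something from the talk): store the probabilities ψ_n of having n molecules in a generating function Ψ(z) = Σ_n ψ_n z^n, and let a = d/dz and a† = multiplication by z act as annihilation and creation operators. For the birth-death process ∅ → X at rate r and X → ∅ at rate d, the master equation takes the Schrödinger-like form dΨ/dt = HΨ with H = r(a† − 1) + d(a − a†a); the big difference from quantum mechanics is that what is conserved is the sum of the probabilities, not the sum of their squares.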

Anyway, there are a lot more talks, but if I don’t have breakfast and walk over to the math department, I’ll miss those talks!

You can learn more about individual talks in the comments here (see below) and also in Matteo Polettini’s blog:

• Matteo Polettini, Mathematical trends in reaction network theory: part 1 and part 2, Out of Equilibrium, 1 July 2015.


by John Baez at July 01, 2015 05:39 AM

June 30, 2015

Jester - Resonaances

Sit down and relaxion
New ideas are rare in particle physics these days. Solutions to the naturalness problem of the Higgs mass are true collector's items. For these reasons, the new mechanism addressing the naturalness problem via cosmological relaxation has stirred a lot of interest in the community. There's already an article explaining the idea in popular terms. Below, I will give you a more technical introduction.

In the Standard Model, the W and Z bosons and fermions get their masses via the Brout-Englert-Higgs mechanism. To this end, the Lagrangian contains a scalar field H with a negative mass squared V = - m^2 |H|^2. We know that the value of the parameter m is around 90 GeV - the Higgs boson mass divided by the square root of 2. In quantum field theory, the mass of a scalar particle is expected to be near the cut-off scale M of the theory, unless there's a symmetry protecting it from quantum corrections. A value of m much smaller than M, without any reason or symmetry principle, constitutes the naturalness problem. Therefore, the dominant paradigm has been that, around the energy scale of 100 GeV, the Standard Model must be replaced by a new theory in which the parameter m is protected from quantum corrections. We know several mechanisms that could potentially protect the Higgs mass: supersymmetry, Higgs compositeness, the Goldstone mechanism, extra-dimensional gauge symmetry, and conformal symmetry. However, according to experimentalists, none seems to be realized at the weak scale; therefore, we need to accept that nature is fine-tuned (e.g. susy is just around the corner), or to seek solace in religion (e.g. anthropics). Or to find a new solution to the naturalness problem: one that is not fine-tuned and is consistent with experimental data.
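
To see the problem in one line (a textbook estimate, not specific to this discussion): a heavy particle of mass M coupled to the Higgs with strength y contributes a quantum correction of order δm^2 ~ y^2 M^2/(16π^2), so if M is anywhere near the Planck or unification scale, keeping m around 90 GeV requires exquisitely delicate cancellations unless a symmetry enforces them.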

Relaxation is a genuinely new solution, even if somewhat contrived. It is based on the following ingredients:
  1.  The Higgs mass term in the potential is V = M^2 |H|^2. That is to say,  the magnitude of the mass term is close to the cut-off of the theory, as suggested by the naturalness arguments. 
  2. The Higgs field is coupled to a new scalar field - the relaxion - whose vacuum expectation value is time-dependent in the early universe, effectively changing the Higgs mass squared during its evolution.
  3. When the mass squared turns negative and electroweak symmetry is broken, a back-reaction mechanism should prevent further time evolution of the relaxion, so that the Higgs mass term is frozen at a seemingly unnatural value.
These 3 ingredients can be realized in a toy model where the Standard Model is coupled to the QCD axion. The crucial interactions are  
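schematically, as in the original paper by Graham, Kaplan and Rajendran (signs and order-one factors suppressed),

V ⊃ (M^2 - gΦ)|H|^2 - g M^2 Φ + Λ^4 cos(Φ/f),

where the first term is the Φ-dependent Higgs mass term, the second is the gentle slope along which the relaxion rolls during inflation, and Λ^4, the height of the periodic barriers, grows with the Higgs expectation value v.
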
Then the story goes as follows. The axion Φ starts at a small value where the M^2 term dominates and there's no electroweak symmetry breaking. During inflation its value slowly increases. Once gΦ > M^2, electroweak symmetry breaking is triggered and the Higgs field acquires a vacuum expectation value. The crucial point is that the height of the axion potential Λ depends on the light quark masses, which in turn depend on the Higgs expectation value v. As the relaxion evolves, v increases, and Λ also increases proportionally, which provides the desired back-reaction. At some point, the slope of the axion potential is neutralized by the rising Λ, and the Higgs expectation value freezes in. The question is now quantitative: is it possible to arrange the freeze-in to happen at a value of v well below the cut-off scale M? It turns out the answer is yes, at the cost of choosing strange (though not technically unnatural) theory parameters. In particular, the dimensionful coupling g between the relaxion and the Higgs has to be less than 10^-20 GeV (for a cut-off scale larger than 10 TeV), inflation has to last for at least 10^40 e-folds, and the Hubble scale during inflation has to be smaller than the QCD scale.

The toy-model above ultimately fails. Normally, the QCD axion is introduced so that its expectation value cancels the CP violating θ-term in the Standard Model Lagrangian. But here it is stabilized at a value determined by its coupling to the Higgs field. Therefore, in the toy-model, the axion effectively generates an order one θ-term, in conflict with the experimental bound θ < 10^-10. Nevertheless, the same mechanism can be implemented in a realistic model. One possibility is to add a new QCD-like interaction with its own axion playing the relaxion role. In addition, one needs new "quarks" charged under the new strong interaction. Their masses have to be sensitive to the electroweak scale v, thus providing a back-reaction on the axion potential that terminates its evolution. In such a model, the quantitative details would be a bit different than in the QCD axion toy-model. However, the "strangeness" of the parameters persists in any model constructed so far. In particular, the very low scale of inflation required by the relaxation mechanism is worrisome. Could it be that the naturalness problem is just swept into the realm of poorly understood physics of inflation? The ultimate verdict thus depends on whether a complete and healthy model incorporating both relaxation and inflation can be constructed.

Certainly TBC.

Thanks to Brian for a great tutorial. 

by Jester (noreply@blogger.com) at June 30, 2015 09:03 PM

astrobites - astro-ph reader's digest

Classifying Holes in the Sun

Machine learning, or teaching computers to teach themselves, is becoming insanely popular throughout science and industry. This is largely driven by the continuous onslaught of new and extensive datasets that often require machine learning to fully understand. A recent Astrobite summarized some of the massive datasets that astronomy will soon face in the high energy, transient sky; however, we already have a wealth of data from a source much closer to home: the Sun.

The data streaming down from our Sun is unique in astronomy for many reasons, including its size and resolution. For example, let's compare it to the average dataset of a supernova you might get from LSST. The supernova would likely be no more than a few pixels wide, from which you would extract the supernova's brightness and color. You would get a new image of the supernova every few days and eventually make a lightcurve with your data. That's not a lot of data for a single object. In contrast, multiwavelength data from our Sun is being uploaded every few minutes with millions of pixels and high resolution. We're literally able to watch a video of the Sun and solar features which are only hundreds of kilometers wide! Astronomers have to sort through all of this data and pull out meaningful observations.

Figure 1. Schematic of a coronal hole, demonstrating that the magnetic field lines all point outward. This allows for the solar wind to escape.


In today's paper, the authors use machine learning to classify two important phenomena on the Sun which both contribute to space weather: coronal holes and filament channels. Coronal holes are regions where the Sun's corona is colder and the magnetic field has a single polarity, which allows the solar wind to escape, as shown in Figure 1. These high-speed solar wind streams help shape the solar wind distribution in the solar system. Filament channels are elongated areas of the Sun where filaments, or arcs of hot plasma, form. Similarly to coronal holes, filament channels are often dark spots in the Sun's corona. Coronal holes and filament channels are both shown in Figure 2 — it's probably easy to spot the differences in this image!

Traditionally, coronal holes and filament channels are identified by eye or by very basic image processing techniques which often confuse the two phenomena. Harnessing the power of machine learning, we can train a computer to understand the subtle differences between the two. To complete this challenge, the first step is to prepare a dataset in which all of the holes and channels have already been labeled. The computer can then use this as a "training set" to see how humans normally classify these phenomena and learn how to classify new examples on its own.

 

Figure 2. Images of the Sun in three different wavelengths, with highlighted areas around coronals holes and filament channels. The holes are outlined in blue while the channels are highlighted in red and orange. Original image from here.


The computer can't magically learn how to classify using just the images, so the astronomer needs to provide features of each phenomenon which the computer can use to differentiate between the two. In addition to features which are typically used in image classification (such as the mean intensity or contrast), the authors use some physical intuition to help them classify. For one, the filament channels are clearly elongated while the holes tend to be symmetric. Additionally, the holes typically have a single polarity (which allows solar wind to flow outwards), while the filament channels are often magnetically neutral (having both negative and positive polarity).

The authors tested a number of common classification algorithms, but they found that one of the best methods for solar classification was a support vector machine (SVM) algorithm. SVM classifies objects by combining features in such a way that the two phenomena become as distinct as possible. For example, if I am trying to classify whether an animal is or is not a house cat, I might ask whether or not that animal lives in a house or if it meows. Many animals can either live in a house as a pet or meow in the wild, but only housecats will have both features. Similarly in our astronomical scenario, SVM combines the various features of channels and holes to best distinguish between the two. You can watch a visualization of SVM in this video.
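
For a feel of what this looks like in practice, here is a quick sketch of such a classifier (the features echo the physically motivated ones above, but the numbers are invented and this is not the pipeline from today's paper):

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical feature vectors: [elongation, |mean magnetic polarity|]
holes = np.column_stack([rng.normal(1.2, 0.3, 200),     # roughly symmetric shapes
                         rng.normal(0.8, 0.1, 200)])    # single dominant polarity
channels = np.column_stack([rng.normal(4.0, 1.0, 200),  # elongated shapes
                            rng.normal(0.1, 0.1, 200)]) # magnetically neutral

X = np.vstack([holes, channels])
y = np.array([0] * 200 + [1] * 200)     # 0 = coronal hole, 1 = filament channel

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))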

In addition to being exceptionally accurate, the classification is fast, taking only a few minutes. These techniques can thus be used for real-time analysis of the Sun. The authors are hopeful that this analysis will be incorporated into publicly available data reduction pipelines in order to help solar physicists make sense of the petabytes of data that the Sun produces. Machine learning will undoubtedly become an essential tool both for solar astronomers, as our solar database continues to grow, and for other astronomers, as missions become more complex and data-intensive.

by Ashley Villar at June 30, 2015 04:32 PM

ZapperZ - Physics and Physicists

4 Common Misconceptions About Quantum Physics
I've been critical of several physics articles that have appeared in Epoch Times, many of them verging on crackpottery. But I have to admit, this one is actually quite good. It details 4 important misconceptions that have come out of QM.

My summary of these misconceptions is:

1. Quantum entanglement transfers information faster than c.

2. Consciousness is necessary to "collapse" the wave function.

3. QM is only valid at the subatomic level.

4. Wave-particle "duality".

You may read the article to get the details, but for an article designed for the general public, it is actually quite accurate and understandable.

Zz.

by ZapperZ (noreply@blogger.com) at June 30, 2015 03:48 PM

arXiv blog

The Social-Network Illusion That Tricks Your Mind

Network scientists have discovered how social networks can create the illusion that something is common when it is actually rare.

June 30, 2015 02:00 PM

Symmetrybreaking - Fermilab/SLAC

How do you solve a puzzle like neutrinos?

When it comes to studying particles that zip through matter as though it weren’t even there, you use every method you can think of.

Sam Zeller sounds borderline embarrassed by scientists’ lack of understanding of neutrinos—particularly how much mass they have.

“I think it’s a pretty sad thing that we don’t know,” she says. “We know the masses of all the particles except for neutrinos.” And that’s true even for the Higgs, which scientists only discovered in 2012.

Ghostly neutrinos, staggeringly abundant and ridiculously aloof, have held onto their secrets long past when they were theorized in the 1930s and detected in the 1950s. Scientists have learned a few things about them:

 

  • They come in three flavors associated with three other fundamental particles (the electron, muon and tau).
  • They change, or oscillate, from one type to another.
  • They rarely interact with anything, and trillions upon trillions stream through us every minute.
  • They have a very small mass.

But right now, there are still more questions than answers.

Zeller, one of thousands of neutrino researchers around the world and co-spokesperson for the neutrino experiment MicroBooNE based at Fermilab, says the questions about neutrinos don’t stop at mass. She writes down a shopping list of things physicists want to find out:

 

  • Is one type of neutrino much heavier than the other two, or much lighter?
  • What is the absolute mass of the neutrino?
  • Are there more than three types of neutrinos?
  • Do neutrinos and antineutrinos behave differently?
  • Is the neutrino its own antiparticle?
  • Is our picture of neutrinos correct?

No single experiment can answer all of these questions. Instead, there are dozens of experiments looking at neutrinos from different sources, each contributing a piece to the puzzle. Some neutrinos stream unimpeded from far away, born in supernovae, the sun, the atmosphere or cosmic sources. Others originate closer to home, in the Earth, nuclear reactors, radioactive decays or particle accelerators. Their different birthplaces imbue them with different flavors and energies—a range so great, it spans at least 16 orders of magnitude. Armed with the knowledge of where and how to look, scientists are entering an exhilarating experimental time.

“That’s why neutrino physics is so exciting right now,” Zeller says. “It’s not as if we’re shooting in the dark or we don’t know what we’re doing. Worldwide, we’re embarking on a program to answer these questions. That path will make use of these many different sources, and in the end you put it all together and hope the story makes sense.”

Neutrinos from nuclear reactors

The first confirmation that neutrinos were more than just a theory came from nuclear reactors, where neutrinos are produced in a process called beta decay. A team of scientists led by Clyde Cowan and Frederick Reines found neutrinos spewing in a steady stream from reactors at the Hanford Site in Washington and the Savannah River Plant in South Carolina between 1953 and 1959.

Reactors have been useful for neutrino physics ever since, particularly because they produce only one kind of neutrino: electron antineutrinos. When studying the way particles change from one type to another, it’s invaluable to know exactly what you’re starting with.

Reactor experiments such as KamLAND, which studied particles from 53 nuclear reactors in Japan, echoed results from projects examining solar and atmospheric neutrinos. All of them found that neutrinos changed flavor over time.

“Once we know that neutrinos are oscillating, that gives us the strongest evidence that neutrinos are massive,” says Dan Dwyer, a scientist at Lawrence Berkeley National Laboratory and researcher on the international Daya Bay Reactor Neutrino Experiment based in China.

Such projects now look for the way neutrinos change and for hints about their relative masses.

Because reactor experiments allow for precision, they’re also ideal to hunt for a fourth type of particle—the yet unobserved sterile neutrino, thought to interact only through gravity.

Neutrinos from accelerators

Reactor neutrinos aren’t the only way to look for additional neutrinos. That’s where the powerhouse of neutrino research—the accelerator—comes in.

Scientists can use a beam of easier-to-control particles such as protons to create a beam of neutrinos.

First, they accelerate the protons and smash them into a target. The energy released in this collision converts to mass in the form of a flood of new massive particles. Those particles decay into less massive particles, including neutrinos.

Before the massive particles decay, scientists use magnets to focus them into a beam. Afterward, they use blocking material to skim off unwanted bits while the neutrinos—which can pass through a light-year of lead without even noticing it’s there—flow freely through.

Neutrino beams from accelerators are typically made of muon neutrinos and antineutrinos, but the experiments that use accelerators split into two main groups: short-baseline experiments, which look at oscillations over smaller distances, and long-baseline experiments, which study neutrinos that have traveled over hundreds of miles.

Both types of experiments look at how neutrinos oscillate. At short distances, neutrinos are less likely to have changed flavors, though the influence of undiscovered new particles or forces might affect that rate. At long distances, neutrinos are more likely to have changed after traveling for a few milliseconds at nearly the speed of light. Oscillation patterns can give scientists clues as to the masses of the different types of neutrinos.
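
In the standard two-flavor approximation (a textbook formula rather than anything specific to these experiments), the probability that a neutrino produced with one flavor is detected as another after traveling a distance L with energy E is roughly P ≈ sin^2(2θ) sin^2(1.27 Δm^2[eV^2] L[km]/E[GeV]). The baseline and beam energy of an experiment therefore select which mass-squared difference Δm^2 it is most sensitive to, which is why short- and long-baseline experiments complement each other.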

Oscillation studies over long distances, like Japan’s T2K experiment or the United States’ NOvA experiment and proposed DUNE experiment, can help researchers find how neutrinos relate to antineutrinos. One method is to search for charge parity violation.

This complicated-sounding term essentially asks whether matter and antimatter can pull off “the old switcheroo”—that is, whether the universe treats matter and antimatter particles identically. If the oscillations of neutrinos are fundamentally different from the oscillations of antineutrinos, then CP is broken.

Scientists already know that CP is violated for one major building block of the universe: the quarks. Does the same happen for the other major family, the leptons? Neutrinos might hold the key.

Studying neutrinos without neutrinos

It’s odd that one of the most important questions regarding neutrinos can be answered only by looking for a process apparently lacking in neutrinos.

In neutrinoless double beta decay, a nucleus would decay by emitting two electrons but no neutrinos; the neutrinos that would ordinarily accompany them effectively annihilate one another within the nucleus.

“If you see it, it tells you that neutrinos are different in a fundamental way,” says Boris Kayser, a theorist at Fermilab.

Neutrinoless double beta decay would occur only if neutrinos and their antiparticles were one and the same. No other fundamental particle of matter has this property.

“Neutrinos are very special,” Kayser says. “It could be that they violate rules that other particles don’t violate.”

Several experiments worldwide are under way to search for this process, with future generations planned.

A different experiment, KATRIN, hopes to find the masses of the neutrinos by looking at particular electrons. As tritium, a radioactive isotope of hydrogen, decays, it spits out an antineutrino and a partner electron. Scientists will use the world's largest spectrometer to measure the energy of these electrons to learn about the neutrino.

Geoneutrinos

Unperturbed by magnetism or mass in their paths, neutrinos are perhaps the ultimate messengers of the universe. Once found, the particles point back to their origins, places scientists can’t otherwise see. Investigating these neutrinos provides insight into the particles themselves and is a useful way to probe the unknown.

Take the Earth as an example. Scientists can use detectors to capture geoneutrinos, typically low-energy electron antineutrinos, to learn about the composition of our planet without trying to drill miles below the surface. Because we’ve learned that neutrinos are born of particle decay, the number of geoneutrinos tells researchers how much potassium, thorium and uranium lurk below, heating our world.

Solar neutrinos

Neutrinos are also created in processes in the sun. But when Ray Davis built a solar neutrino detector filled with dry cleaning fluid, his experiment picked up only a third of the predicted neutrinos.

This solar neutrino problem hinted that we didn’t understand our sun; in reality, we didn’t understand neutrinos. Solar neutrino experiments after Davis’ showed that neutrinos from the sun were changing flavor, and a reactor experiment later confirmed that the flavor change was caused by neutrino oscillation.

Modern solar neutrino experiments such as Italy’s Borexino provide insight into the core of the sun and help put limits on sterile neutrinos.

Others, like Japan’s Super-Kamiokande detector, can look at how solar neutrinos change when traveling through the earth versus neutrinos oscillating primarily in the vacuum of space.

“The reason that’s important is that if the neutrino interacts with matter in new, unknown ways, which is possible, then this effect would be changed,” says Josh Klein, professor of physics at the University of Pennsylvania. “It’s a very sensitive measure of new physics.”

Cosmic neutrinos

Cosmic neutrinos illuminate powerful phenomena occurring within our galaxy and beyond. Massive extragalactic neutrino hunting experiments, such as the IceCube experiment that sprawls across a cubic kilometer of ice in Antarctica, can find neutrinos that have oscillated over much longer distances than we can test with accelerators.

“We see neutrinos [with energies] from below 10 [billion electronvolts] to above a thousand [trillion electronvolts],” says Francis Halzen, physicist at the University of Wisconsin, Madison, and leader of IceCube. “Nobody has ever built something that covers this energy range of particles.”

Giant neutrino detectors like this one can look for sterile neutrinos and gather information on oscillations and mass hierarchy.

They’re also useful for understanding dark matter and supernovae, analyzing atmospheric neutrinos that form when cosmic rays hit our atmosphere and telling other astronomers where to point their telescopes if neutrinos from a supernova burst hit. Physicists learn properties of neutrinos, but the neutrinos in turn unlock secrets of the universe.

“Whenever we have made a picture of the universe in a different wavelength region of light, we have always seen things we didn’t expect,” Halzen says. “We’re doing now what astronomers have been doing for decades: looking at the sky in different ways.”

Neutrinos matter for matter

At the end of the day, why go to all this trouble for such a tiny particle? In addition to helping scientists probe the interior of the Earth or the far-off corners of the cosmos, neutrinos could hold the key to why matter exists today.

Scientists know that antimatter and matter are produced in equal parts and should ultimately have annihilated one another, leaving a dark and empty universe. But here we stand, matter in all its glory.

Sometime early in the universe’s history, an imbalance arose and shifted the scales toward a matter-dominated universe. If physicists find that neutrinos have certain characteristics—including CP violation—it could help explain why the universe turned out the way it did.

“They’re the most abundant massive particle in the universe,” Zeller says. “If you find out something weird about neutrinos, it’s bound to tell you something about how the universe evolved or how it came to be the way we observe today.”

 


by Lauren Biron at June 30, 2015 01:00 PM

Life as a Physicist

Education and “Internet Time”

I saw this link on techcrunch go by discussing the state of venture capital in the education sector. There is a general feeling, at least in the article, that when dealing with universities, things are not moving at internet speed:

“The challenge is, in general, education is a pretty slow to move category, particularly if you’re trying to sell into schools and universities … In many cases they don’t seem to show the sense of urgency that the corporate world does.” says Steve Murray, a partner with Softbank Capital, and investor in the education technology company, EdCast.

I had to laugh a bit. Duh. MOOCs are a classic example. Massive Open Online Courses – a way to educate large numbers of people with a very small staff. The article refers to the problems with this, actually:

The first generation of massively open online courses have had (well-documented) problems with user retention.

So why have universities been so slow to just jump into the latest and greatest education technology? Can you imagine sending your kid to get a degree from the University of Washington, where they are trying out some new way of education that, frankly, fails on university scale? We are a publicly funded university. We'd be shut down! The press, rightly, would eat us alive. No institution is going to jump before it looks and move its core business over to something that hasn't been proven.

Another way to look at this, perhaps, is that each University has a brand to maintain. Ok, I’m not a business person here, so I probably am not using the word in quite the right way. None the less. My department at the University of Washington, the Physics Department, is constantly looking at the undergraduate curricula. We are, in some sense, driven by the question “What does it mean to have a degree from the University of Washington Physics Department?” or “What physics should they know?” or another flavor: “They should be able to explain and calculate X by the time they are awarded the degree.” There is a committee in the department that is responsible for adjusting the courses and material covered, and they are constantly proposing changes.

So far only certain technological solutions have an obvious "value proposition." For example, the online homework websites: these let students practice problems without the department having to spend a huge amount of money on people to do the grading. Learning Management Systems, like Canvas, allow us to quickly set up a website for the course that includes just about everything we need as teachers, saving us a bunch of time.

Those examples make teaching cheaper and more efficient. But that isn't always the case. Research (yes, research!!!) has shown that students learn better when they are actively working on a problem (working in groups of peers is even more powerful) – so we can flip the classroom: have them watch lectures on video and work in groups during traditional lecture time. To do it right, you need to redesign the room… which costs $$… And the professor now has to spend extra time recording the lectures. So there is innovation – and it is helping students learn better.

I think most of us in education will happily admit to the fact that there are inefficiencies in the education system – but really big ones? The problem with the idea that there are really big inefficiencies is that no one has really shown how to educate people on the scale of a university in a dramatically cheaper way. As soon as that happens the inefficiencies will become obvious along with the approach to "fix" them. There are things we need to focus on doing better, and there are places that seem like they are big inefficiencies… and MOOCs will have a second generation to address their problems. And all of us will watch the evolution, and some professors will work with the companies to improve their products… but it isn't going to happen overnight, and it isn't obvious to me that it will happen at all, at least not for the bulk of students.

Education is labor intensive. In order to learn, the student has to put in serious time. And as long as this remains the case, we will be grappling with costs.


by gordonwatts at June 30, 2015 12:56 PM

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

Robert Boyle Summer School 2015

Last weekend, I attended the Robert Boyle Summer School, an annual 3-day science festival in Lismore, Co. Waterford in Ireland. It’s my favourite annual conference by some margin – a small number of talks by highly eminent scholars of the history and philosophy of science, aimed at curious academics and the public alike, with lots of time for questions and discussion after each presentation.

Born in Lismore into a wealthy landowning family, Robert Boyle became one of the most important figures in the Scientific Revolution, well-known for his scientific discoveries, his role in the Royal Society and his influence in promoting the new ‘experimental philosophy’ in science.


The Irish-born scientist and aristocrat Robert Boyle   

As ever, the summer school took place in Lismore, the beautiful town that is the home of Lismore Castle where Boyle was born. This year, the conference commemorated the 350th anniversary of the Philosophical Transactions of the Royal Society by considering the history of the publication of scientific work, from the first issue of  Phil. Trans. to the problem of fraud in scientific publication today.


Lismore Castle in Co. Waterford, the birthplace of Robert Boyle

The summer school opened on Thursday evening with an intriguing warm-up talk on science in modern novels. Jim Malone, Emeritus Robert Boyle Professor of Medicine at Trinity College Dublin, presented a wonderful tour of his favourite novels involving science, with particular emphasis on the novels of C.P. Snow, Ian McEwan and the Irish satirist Flann O'Brien. I must admit I have not read the novels of C.P. Snow (although I am familiar with his famous essay on the two cultures of science and literature). As for Flann O'Brien, we were treated to a superb overview of the science in his novels, not least the wonderful and surreal novel 'The Third Policeman'. Nowadays, there is an annual conference in memory of Flann O'Brien; I hope Jim gives a presentation at that meeting! Finally, I was delighted that the novels of Ian McEwan were included in the discussion. I too enjoyed the novels 'Saturday' and 'Solar' hugely, and was amazed by the author's grasp of science and of the practice of science.

Turning to the core theme of the conference, the first talk on Friday morning was 'Robert Boyle, Philosophical Transactions and Scientific Communication' by Professor Michael Hunter of Birkbeck College. Professor Hunter is one of the world's foremost experts on Boyle, and he gave a thorough overview of Boyle's use of the Phil. Trans. to disseminate his findings. Afterwards, Dr. Aileen Fyfe of the University of St Andrews gave the talk 'Peer Review: A History From 1665', carefully charting how the process of peer review evolved from Boyle's time to today. The main point here was that today's process of a journal sending papers out to be refereed by experts in the field is a relatively new development. In Boyle's day, a submitted paper was evaluated either by the Secretary of the Royal Society or by one of the Fellows. However, it seemed to me that this 'gatekeeper' approach still constituted review by peers and was, if anything, more restrictive than today's peer review.


The renowned Boyle scholar Professor Michael Hunter of Birkbeck College, University of London, in action

On Friday afternoon, we had the wonderful talk 'Lady Ranelagh, the Hartlib Circle and Networks for Scientific Correspondence' in the spectacular setting of St Carthage's Cathedral, given by Dr. Michelle DiMeo of the Chemical Heritage Foundation. I knew nothing of Lady Ranelagh (Robert Boyle's elder sister) or the Hartlib Circle before this. The Circle was clearly an important forerunner of the Philosophical Transactions, and Lady Ranelagh's role in the Circle and in Boyle's scientific life has been greatly overlooked.


St Carthage’s Cathedral in Lismore


Professor DiMeo unveiling a plaque in memory of Lady Ranelagh at the Castle. The new plaque is on the right, to accompany the existing plaque in memory of Robert Boyle on the left 

On Friday evening, we had a barbecue in the Castle courtyard, accompanied by music and dance from local music group Sonas. After this, many of us trooped down to one of the village pubs for an impromptu music session (okay, not entirely impromptu, ahem). The highlight was when Sir John Pethica,  VP of the Royal Society, produced a fiddle and joined in. As did his wife, Pam – talk about Renaissance men and women!


Off to the Castle for a barbecue

On Saturday morning, Professor Dorothy Bishop of the University of Oxford gave the talk 'How persistence of dead tree technology has stifled scientific communication; time for a radical rethink', a presentation that included some striking accounts of recent cases of fraudulent publication in science – not least a case she herself played a major part in exposing! In the next talk, 'The scientific record: archive, intellectual property, communication or filter?', Sir John Pethica of Oxford University and Trinity College Dublin made some similar observations, but noted that the problem may be much more prevalent in some areas of science than others. This made sense to me, as my own experience of the publishing world in physics has been of very conservative editors who err on the side of caution. Indeed, it took a long time for our recent discovery of an unknown theory by Einstein to be accepted by the physics journals.

All in all, a superb conference in a beautiful setting.  Other highlights included a fascinating account of poetry in science by Professor Iggy McGovern, a Professor of Physics at Trinity College Dublin and published poet, including several examples from his own work and that of Patrick Kavanagh, and a guided tour of the Castle Gardens, accompanied by Robert Boyle and his sister. You can find the full conference programme here.


Robert Boyle and his sister Lady Ranelagh picking flowers in the Castle Gardens


by cormac at June 30, 2015 12:32 PM

June 29, 2015

The n-Category Cafe

What is a Reedy Category?

I’ve just posted the following preprint, which has apparently quite little to do with homotopy type theory.

The notion of Reedy category is common and useful in homotopy theory; but from a category-theoretic point of view it is odd-looking. This paper suggests a category-theoretic understanding of Reedy categories, which I find more satisfying than any other I’ve seen.

So what is a Reedy category anyway? The idea of this paper is to start instead with the question “what is a Reedy model structure?” If $M$ is a model category and $C$ is a Reedy category, then $M^C$ has a model structure in which a map $A\to B$ is

  • …a weak equivalence iff $A_x\to B_x$ is a weak equivalence in $M$ for all $x\in C$.
  • …a cofibration iff the induced map $A_x \sqcup_{L_x A} L_x B \to B_x$ is a cofibration in $M$ for all $x\in C$.
  • …a fibration iff the induced map $A_x \to B_x \times_{M_x B} M_x A$ is a fibration in $M$ for all $x\in C$.

Here $L_x$ and $M_x$ are the latching object and matching object functors, which are defined in terms of the Reedy structure of $C$. However, at the moment all we care about is that if $x$ has degree $n$ (part of the structure of a Reedy category is an ordinal-valued degree function on its objects), then $L_x$ and $M_x$ are functors $M^{C_n} \to M$, where $C_n$ is the full subcategory of $C$ on the objects of degree less than $n$. In the prototypical example of $\Delta^{op}$, where $M^{C}$ is the category of simplicial objects in $M$, $L_n A$ is the “object of degenerate $n$-simplices” whereas $M_n A$ is the “object of simplicial $(n-1)$-spheres (potential boundaries for $n$-simplices)”.
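
For the record, here are the standard formulas, which are not spelled out in the post: in a classical Reedy category, with $C_+$ the degree-raising morphisms and $C_-$ the degree-lowering ones,

$$ L_x A = \mathrm{colim}_{(y\to x)\in \partial(C_+/x)} A_y, \qquad M_x A = \lim_{(x\to y)\in \partial(x/C_-)} A_y, $$

where $\partial$ means the identity of $x$ is omitted; every other object appearing has strictly lower degree, which is why $L_x$ and $M_x$ restrict to functors $M^{C_n}\to M$.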

The fundamental observation which makes the Reedy model structure tick is that if we have a diagram $A\in M^{C_n}$, then to extend it to a diagram defined at $x$ as well, it is necessary and sufficient to give an object $A_x$ and a factorization $L_x A \to A_x \to M_x A$ of the canonical map $L_x A \to M_x A$ (and similarly for morphisms of diagrams). For $\Delta^{op}$, this means that if we have a partially defined simplicial object with objects of $k$-simplices for all $k < n$, then to extend it with $n$-simplices we have to give an object $A_n$, a map $L_n A \to A_n$ including the degeneracies, and a map $A_n \to M_n A$ assigning the boundary of every simplex, such that the composite $L_n A \to A_n \to M_n A$ assigns the correct boundary to degenerate simplices.

Categorically speaking, this observation can be reformulated as follows. Given a natural transformation $\alpha : F\to G$ between parallel functors $F,G:M\to N$, let us define the bigluing category $Gl(\alpha)$ to be the category of quadruples $(M,N,\phi,\gamma)$ such that $M\in M$, $N\in N$, and $\phi:F M \to N$ and $\gamma : N \to G M$ are a factorization of $\alpha_M$ through $N$. (I call this “bigluing” because if $F$ is constant at the initial object, then it reduces to the comma category $(Id/G)$, which is sometimes called the gluing construction.) The above observation is then that $M^{C_x}\simeq Gl(\alpha)$, where $\alpha: L_x \to M_x$ is the canonical map between functors $M^{C_n} \to M$ and $C_x$ is the full subcategory of $C$ on $C_n \cup \{x\}$. Moreover, it is an easy exercise to reformulate the usual construction of the Reedy model structure as a theorem that if $M$ and $N$ are model categories and $F$ and $G$ are left and right Quillen respectively, then $Gl(\alpha)$ inherits a model structure.

Therefore, our answer to the question “what is a Reedy model structure?” is that it is one obtained by repeatedly (perhaps transfinitely) bigluing along a certain kind of transformation between functors $M^C \to M$ (where $C$ is a category playing the role of $C_n$ previously). This motivates us to ask, given $C$, how can we find functors $F,G : M^{C}\to M$ and a map $\alpha : F \to G$ such that $Gl(\alpha)$ is of the form $M^{C'}$ for some new category $C'$?

Of course, we expect $C'$ to be obtained from $C$ by adding one new object “$x$”. Thus, it stands to reason that $F$, $G$, and $\alpha$ will have to specify, among other things, the morphisms from $x$ to objects in $C$, and the morphisms to $x$ from objects of $C$. These two collections of morphisms form diagrams $W:C\to \mathrm{Set}$ and $U:C^{op} \to \mathrm{Set}$, respectively; and given such $U$ and $W$ we do have canonical functors $F$ and $G$, namely the $U$-weighted colimit and the $W$-weighted limit. Moreover, a natural transformation from the $U$-weighted colimit to the $W$-weighted limit can naturally be specified by giving a map $W\times U \to C(-,-)$ in $\mathrm{Set}^{C^{op}\times C}$. In $C'$, this map will supply the composition of morphisms through $x$. (A triple consisting of $U$, $W$, and a map $W\times U \to C(-,-)$ is also known as an object of the Isbell envelope of $C$.)

It remains only to specify the hom-set $C'(x,x)$ (and the relevant composition maps), and for this there is a “universal choice”: we take $C'(x,x) = (W \otimes_C U) \sqcup \{\mathrm{id}_x\}$. That is, we throw in composites of morphisms $x\to y \to x$, freely subject to the associative law, and also an identity morphism. This $C'$ has a universal property (it is a “collage” in the bicategory of profunctors) which ensures that the resulting biglued category is indeed equivalent to $M^{C'}$.

A category with degrees assigned to its objects can be obtained by iterating this construction if and only if any nonidentity morphism between objects of the same degree factors uniquely-up-to-zigzags through an object of strictly lesser degree (i.e. the category of such factorizations is connected). What remains is to ensure that the resulting latching and matching object functors are left and right Quillen. It turns out that this is equivalent to requiring that morphisms between objects of different degrees also have connected or empty categories of factorizations through objects of strictly lesser degree.

I call a category satisfying these conditions almost-Reedy. This doesn’t look much like the usual definition of Reedy category, but it turns out to be very close to it. If $C$ is almost-Reedy, let $C_+$ (resp. $C_-$) be the class of morphisms $f:x\to y$ such that $\deg(x)\le \deg(y)$ (resp. $\deg(y)\le \deg(x)$) and that do not factor through any object of strictly lesser degree than $x$ and $y$. Then we can show that just as in a Reedy category, every morphism factors uniquely into a $C_-$-morphism followed by a $C_+$-morphism.

The only thing missing from the usual definition of a Reedy category, therefore, is that $C_-$ and $C_+$ be subcategories, i.e. closed under composition. And indeed, this can fail to be true; but it is all that can go wrong: $C$ is a Reedy category if and only if it is an almost-Reedy category such that $C_-$ and $C_+$ are closed under composition. (In particular, this means that $C_-$ and $C_+$ don’t have to be given as data in the definition of a Reedy category; they are recoverable from the degree function. This was also noticed by Riehl and Verity.)

In other words, the notion of Reedy category (very slightly generalized) is essentially inevitable. Moreover, as often happens, once we understand a definition more conceptually, it is easier to generalize further. The same analysis can be repeated in other contexts, yielding the existing notions of generalized Reedy category and enriched Reedy category, as well as new generalizations such as a combined notion of “enriched generalized Reedy category”.

(I should note that some of the ideas in this paper were noticed independently, and somewhat earlier, by Richard Garner. He also pointed out that the bigluing model structure is a special case of the “Grothendieck construction” for model categories.)

This paper is, I think, slightly unusual, for a paper in category theory, in that one of its main results (unique $C_+$-$C_-$-factorization in an almost-Reedy category) depends on a sequence of technical lemmas, and as far as I know there is no particular reason to expect it to be true. This made me worry that I’d made a mistake somewhere in one of the technical lemmas that might bring the whole theorem crashing down. After I finished writing the paper, I thought this made it a good candidate for an experiment in computer formalization of some non-HoTT mathematics.

Verifying all the results of the paper would have required a substantial library of basic category theory, but fortunately the proof in question (including the technical lemmas) is largely elementary, requiring little more than the definition of a category. However, formalizing it nevertheless turned out to be much more time-consuming than I had hoped, and as a result I’m posting this paper quite some months later than I might otherwise have. But the result I was worried about turned out to be correct (here is the Coq code, which unlike the HoTT Coq library requires only a standard Coq v8.4 install), and now I’m much more confident in it. So was it worth it? Would I choose to do it again if I knew how much work it would turn out to be? I’m not sure.

Having this formalization does provide an opportunity for another interesting experiment. As I said, the theorem turned out to be correct; but the process of formalization did uncover a few minor errors, which I corrected before posting the paper. I wonder, would those errors have been caught by a human referee? And you can help answer that question! I’ve posted a version without these corrections, so you can read it yourself and look for the mistakes. The place to look is Theorem 7.16, its generalization Theorem 8.26, and the sequences of lemmas leading up to them (starting with Lemmas 7.12 and 8.15). The corrected version that I linked to up top mentions all the errors at the end, so you can see how many of them you caught — then post your results in the comments! You do, of course, have the advantage over an ordinary referee that I’m telling you there is at least one error to find.

Of course, you can also try to think of an easier proof, or a conceptual reason why this theorem ought to be true. If you find one (or both), I will be both happy (for obvious reasons) and sad (because of all the time I wasted…).

Let me end by mentioning one other thing I particularly enjoyed about this paper: it uses two bits of very pure category theory in its attempt to explain an apparently ad hoc definition from homotopy theory.

The first of these bits is “tight lax colimits of diagrams of profunctors”. It so happens that an object $(U,W,\alpha)$ of the Isbell envelope can also be regarded as a special sort of lax diagram in $Prof$, and the category $C'$ constructed from it is its lax colimit. Moreover, the universal property of this lax colimit — or more precisely, its stronger universal property as a “tight colimit” in the equipment $Prof$ — is precisely what we need in order to conclude that $M^{C'}$ is the desired bigluing category.

The second of these bits is an absolute coequalizer that is not split. The characterization of non-split absolute coequalizers seemed like a fairly esoteric and very pure bit of category theory when I first learned it. I don’t, of course, mean this in any derogatory way; I just didn’t expect to ever need to use it in an application to, say, homotopy theory. But it turned out to be exactly what I needed at one point in this paper, to “enrich” an argument involving a two-step zigzag (whose unenriched version I learned from Riehl-Verity).

by shulman (viritrilbia@gmail.com) at June 29, 2015 05:48 PM

Jacques Distler - Musings

Asymptotic Safety and the Gribov Ambiguity

Recently, an old post of mine about the Asymptotic Safety program for quantizing gravity received a flurry of new comments. Inadvertently, one of the pseudonymous commenters pointed out yet another problem with the program, which deserves a post all its own.

Before launching in, I should say that

  1. Everything I am about to say was known to Iz Singer in 1978. Though, as with the corresponding result for nonabelian gauge theory, the import seems to be largely unappreciated by physicists working on the subject.
  2. I would like to thank Valentin Zakharevich, a very bright young grad student in our Math Department, for a discussion on this subject, which clarified things greatly for me.

Yang-Mills Theory

Let’s start by reviewing Singer’s explication of the Gribov ambiguity.

Say we want to do the path integral for Yang-Mills Theory, with compact semi-simple gauge group $G$. For definiteness, we’ll talk about the Euclidean path integral, and take $M = S^4$. Fix a principal $G$-bundle, $P \to M$. We would like to integrate over all connections, $A$, on $P$, modulo gauge transformations, with a weight given by $e^{-S_{\text{YM}}(A)}$. Let $\mathcal{A}$ be the space of all connections on $P$, $\mathcal{G}$ the (infinite-dimensional) group of gauge transformations (automorphisms of $P$ which project to the identity on $M$), and $\mathcal{B} = \mathcal{A}/\mathcal{G}$ the space of gauge-equivalence classes of connections.

“Really,” what we would like to do is integrate over $\mathcal{B}$. In practice, what we actually do is fix a gauge and integrate over actual connections (rather than equivalence classes thereof). We could, for instance, choose background field gauge. Pick a fiducial connection, $\overline{A}$, on $P$, and parametrize any other connection as $A = \overline{A} + Q$, with $Q$ a $\mathfrak{g}$-valued 1-form on $M$. Background field gauge is

(1) $D_{\overline{A}} * Q = 0$

which picks out a linear subspace $\mathcal{Q} \subset \mathcal{A}$. The hope is that this subspace is transverse to the orbits of $\mathcal{G}$, and intersects each orbit precisely once. If so, then we can do the path integral by integrating1 over $\mathcal{Q}$. That is, $\mathcal{Q}$ is the image of a global section of the principal $\mathcal{G}$-bundle $\mathcal{A} \to \mathcal{B}$, and integrating over $\mathcal{B}$ is equivalent to integrating over its image, $\mathcal{Q}$.
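As a reminder of how the gauge-fixed integral is then set up (standard textbook material, summarized here in my own notation rather than taken from the post), the Faddeev-Popov trick of footnote 1 inserts a factor of unity into the integral over $\mathcal{A}$:

$$ 1 = \Delta_{\mathrm{FP}}(A) \int_{\mathcal{G}} \mathcal{D}g\; \delta\!\left( D_{\overline{A}} * (A^g - \overline{A}) \right), $$

so that, after using gauge invariance of the action and the measure,

$$ Z = \int_{\mathcal{A}} \mathcal{D}A\; e^{-S_{\text{YM}}(A)} = \mathrm{Vol}(\mathcal{G}) \int_{\mathcal{A}} \mathcal{D}A\; \Delta_{\mathrm{FP}}(A)\, \delta\!\left( D_{\overline{A}} * (A - \overline{A}) \right) e^{-S_{\text{YM}}(A)} . $$

The second equality is only valid if the slice $\mathcal{Q}$ meets each orbit exactly once, which is precisely the assumption at issue below.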

What Gribov found (in a Coulomb-type gauge) is that $\mathcal{Q}$ intersects a given gauge orbit more than once. Singer explained that this is not some accident of Coulomb gauge. The bundle $\mathcal{A} \to \mathcal{B}$ is nontrivial, and no global gauge choice (section) exists.

A small technical point: $\mathcal{G}$ doesn’t act freely on $\mathcal{A}$. Except for the case2 $G = SU(2)$, there are reducible connections, which are fixed by a subgroup of $\mathcal{G}$. Because of the presence of reducible connections, we should interpret $\mathcal{B}$ as a stack. However, to prove the nontriviality, we don’t need to venture into the stacky world; it suffices to consider the irreducible connections, $\mathcal{A}_0 \subset \mathcal{A}$, on which $\mathcal{G}$ acts freely. We then have $\mathcal{A}_0 \to \mathcal{B}_0$, on the fibers of which $\mathcal{G}$ acts freely. If we were able to find a global section of $\mathcal{A}_0 \to \mathcal{B}_0$, then we would have established $\mathcal{A}_0 \cong \mathcal{B}_0 \times \mathcal{G}$. But Singer proves that

  1. $\pi_k(\mathcal{A}_0) = 0$ for all $k > 0$. But
  2. $\pi_k(\mathcal{G}) \neq 0$ for some $k > 0$.

Hence $\mathcal{A}_0 \ncong \mathcal{B}_0 \times \mathcal{G}$, and no global gauge choice is possible.
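(Spelled out, since the post leaves this step implicit: a global section would give an identification $\mathcal{A}_0 \cong \mathcal{B}_0 \times \mathcal{G}$, and hence

$$ 0 = \pi_k(\mathcal{A}_0) \cong \pi_k(\mathcal{B}_0) \times \pi_k(\mathcal{G}) $$

for all $k > 0$, contradicting the fact that $\pi_k(\mathcal{G}) \neq 0$ for some $k > 0$.)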

What does this mean for Yang-Mills Theory?

  • If we’re working on the lattice, then $\mathcal{G} = G^N$, where $N$ is the number of lattice sites. We can choose not to fix a gauge and instead divide our answers by $\mathrm{Vol}(G)^N$, which is finite. That is what is conventionally done.
  • In perturbation theory, of course, you never see any of this, because you are just working locally on $\mathcal{B}$.
  • If we’re working in the continuum, and we’re trying to do something non-perturbative, then we just have to work harder. Locally on $\mathcal{B}$, we can always choose a gauge (any principal $\mathcal{G}$-bundle is locally trivial). On different patches of $\mathcal{B}$, we’ll have to choose different gauges, do the path integral on each patch, and then piece together our answers on patch overlaps using partitions of unity, as sketched below. This sounds like a pain, but it’s really no different from what anyone has to do when doing integration on manifolds.
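Schematically (my notation, not Distler’s): choose an open cover $\{U_i\}$ of $\mathcal{B}$ over which the bundle trivializes, a local section $\sigma_i\colon U_i \to \mathcal{A}$ on each patch, and a partition of unity $\{\rho_i\}$ subordinate to the cover. Then

$$ Z = \int_{\mathcal{B}} d\mu = \sum_i \int_{U_i} \rho_i \, d\mu , $$

where $d\mu$ is the (formal) measure induced on $\mathcal{B}$ by the gauge-invariant path-integral measure, and each summand can be computed in the gauge defined by $\sigma_i$, with its own Faddeev-Popov factors.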

Gravity

The Asymptotic Safety people want to do the path-integral over metrics and search for a UV fixed point. As above, they work in Euclidean signature, with $M = S^4$. Let $\mathcal{Met}$ be the space of all metrics on $M$, $\mathcal{Diff}$ the group of diffeomorphisms, and $\mathcal{B} = \mathcal{Met}/\mathcal{Diff}$ the space of metrics on $M$ modulo diffeomorphisms.

Pick a (fixed, but arbitrary) fiducial metric, $\overline{g}$, on $S^4$. Any metric, $g$, can be written as $g_{\mu\nu} = \overline{g}_{\mu\nu} + h_{\mu\nu}$. They use background field gauge,

(2) $\overline{\nabla}^\mu h_{\mu\nu} - \tfrac{1}{2} \overline{\nabla}_\nu\, h^\mu{}_\mu = 0$

where $\overline{\nabla}$ is the Levi-Civita connection for $\overline{g}$, and indices are raised and lowered using $\overline{g}$. As before, (2) defines a subspace $\mathcal{Q} \subset \mathcal{Met}$. If it happens to be true that $\mathcal{Q}$ is everywhere transverse to the orbits of $\mathcal{Diff}$ and meets every $\mathcal{Diff}$ orbit precisely once, then we can imagine doing the path integral over $\mathcal{Q}$ instead of over $\mathcal{B}$.
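For orientation (standard linearized-gravity material, my addition rather than Distler’s): under an infinitesimal diffeomorphism generated by a vector field $\xi$, the fluctuation shifts as

$$ h_{\mu\nu} \;\to\; h_{\mu\nu} + \overline{\nabla}_\mu \xi_\nu + \overline{\nabla}_\nu \xi_\mu + O(h), $$

so (2) plays exactly the role that (1) played in the gauge-theory discussion: it is a local slice through the $\mathcal{Diff}$ orbits, useful only where it is transverse to them and meets each orbit once.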

In addition to the other problems with the asymptotic safety program (the most grievous of which is that the infrared regulator used to define $\Gamma_k(\overline{g})$ is not BRST-invariant, which means that their prescription doesn’t even give the right path-integral measure locally on $\mathcal{Q}$), the program is saddled with the same Gribov problem that we just discussed for gauge theory, namely that there is no global section of $\mathcal{Met} \to \mathcal{B}$, and hence no global choice of gauge along the lines of (2).

As in the gauge theory case, let $\mathcal{Met}_0$ be the metrics with no isometries3. $\mathcal{Diff}$ acts freely on the fibers of $\mathcal{Met}_0 \to \mathcal{B}_0$. Back in his 1978 paper, Singer already noted that

  1. $\pi_k(\mathcal{Met}_0) = 0$ for all $k > 0$, but
  2. $\mathcal{Diff}$ has quite complicated homotopy-type.

Of course, none of this matters perturbatively. When $h$ is small, i.e. for $g$ close to $\overline{g}$, (2) is a perfectly good gauge choice. But the claim of the Asymptotic Safety people is that they are doing a non-perturbative computation of the $\beta$-functional, and that $h$ is not assumed to be small. Just as in gauge theory, there is no global gauge choice (whether (2) or otherwise). And that should matter to their analysis.


Note: Since someone will surely ask, let me explain the situation in the Polyakov string. There, the gauge group isn’t $\mathcal{Diff}$, but rather the larger group $\mathcal{G} = \mathcal{Diff} \ltimes \text{Weyl}$. And we only do a partial gauge-fixing: we don’t demand a metric, but rather only a Weyl equivalence class of metrics. That is, we demand a section of $\mathcal{Met}/\text{Weyl} \to \mathcal{Met}/\mathcal{G}$. And that can be done: in $d = 2$, every metric is diffeomorphic to a Weyl rescaling of a constant-curvature metric.


1 To get the right measure on $\mathcal{Q}$, we need to use the Faddeev-Popov trick. But, as long as $\mathcal{Q}$ is transverse to the gauge orbits, that’s all fine, and the prescription can be found in any textbook.

2 For a more general choice of $M$, we would also have to require $H^2(M,\mathbb{Z}) = 0$.

3 When $\dim(M) > 1$, $\mathcal{Met}_0(M)$ is dense in $\mathcal{Met}(M)$. But for $\dim(M) = 1$, $\mathcal{Met}_0 = \emptyset$. In that case, we actually can choose a global section of $\mathcal{Met}(S^1) \to \mathcal{Met}(S^1)/\mathcal{Diff}(S^1)$.

by distler (distler@golem.ph.utexas.edu) at June 29, 2015 05:47 PM

The n-Category Cafe

Feynman's Fabulous Formula

Guest post by Bruce Bartlett.

There is a beautiful formula at the heart of the Ising model; a formula emblematic of all of quantum field theory. Feynman, the king of diagrammatic expansions, recognized its importance, and distilled it down to the following combinatorial-geometric statement. He didn’t prove it though — but Sherman did.

Feynman’s formula. Let $G$ be a planar finite graph, with each edge $e$ regarded as a formal variable denoted $x_e$. Then the following two polynomials are equal:

$$ \sum_{H \subseteq_{\text{even}} G} x(H) \;=\; \prod_{[\vec{\gamma}] \in P(G)} \left( 1 - (-1)^{w[\vec{\gamma}]}\, x[\vec{\gamma}] \right) $$


I will explain this formula and its history below. Then I’ll explain a beautiful generalization of it to arbitrary finite graphs, expressed in a form given by Cimasoni.

What the formula says

The left hand side of Feynman’s formula is a sum over all even subgraphs $H$ of $G$, including the empty subgraph. An even subgraph $H$ is one which has an even number of half-edges emanating from each vertex. For each even subgraph $H$, we multiply the variables $x_e$ of all the edges $e \in H$ together to form $x(H)$. So, the left hand side is a polynomial with integer coefficients in the variables $x_{e_i}$.

The right hand side is a product over all $[\vec{\gamma}] \in P(G)$, where $P(G)$ is the set of all prime, reduced, unoriented, closed paths in $G$. That’s a bit subtle, so let me define it carefully. Firstly, our graph is not oriented. But, by an oriented edge $\mathbf{e}$, I mean an unoriented edge $e$ equipped with an orientation. An oriented closed path $\vec{\gamma}$ is a word of composable oriented edges $\mathbf{e_1} \cdots \mathbf{e_n}$; we consider $\vec{\gamma}$ up to cyclic ordering of the edges. The oriented closed path $\vec{\gamma}$ is called reduced if it never backtracks, that is, if no oriented edge $\mathbf{e}$ is immediately followed by the oriented edge $\mathbf{e}^{-1}$. The oriented closed path $\vec{\gamma}$ is called prime if, when viewed as a cyclic word, it cannot be expressed as the $r$-th power $\vec{\delta}^r$ of some oriented closed path $\vec{\delta}$ for any $r \geq 2$. Note that the oriented closed path $\vec{\gamma}$ is reduced (resp. prime) if and only if $\vec{\gamma}^{-1}$ is. It therefore makes sense to talk about prime reduced unoriented closed paths $[\vec{\gamma}]$, by which we mean simply an equivalence class $[\vec{\gamma}] = [\vec{\gamma}^{-1}]$.
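To make the bookkeeping concrete, here is a small Python sketch (my own illustration, not code from the post) that checks the two conditions for a cyclic word of oriented edges, with an oriented edge represented as a pair (edge name, direction):

    def inverse(oriented_edge):
        # An oriented edge is a pair (name, direction) with direction +1 or -1.
        name, direction = oriented_edge
        return (name, -direction)

    def is_reduced(word):
        # Reduced: no oriented edge is immediately followed by its inverse,
        # checked cyclically since the word represents a closed path.
        n = len(word)
        return all(word[(i + 1) % n] != inverse(word[i]) for i in range(n))

    def is_prime(word):
        # Prime: the cyclic word is not a proper power delta^r (r >= 2) of a
        # shorter word delta.  A rotation of a periodic word is still periodic,
        # so checking one linear representative suffices.
        n = len(word)
        return not any(n % d == 0 and word == word[:d] * (n // d)
                       for d in range(1, n))

    # An edge followed immediately by its reverse backtracks, so it is not reduced.
    print(is_reduced([("e1", +1), ("e1", -1)]))   # False
    # A loop traversed twice in the same direction is reduced but not prime.
    print(is_reduced([("e1", +1), ("e1", +1)]))   # True
    print(is_prime([("e1", +1), ("e1", +1)]))     # False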

Suppose $G$ is embedded in the plane, so that each edge forms a smooth curve. Then given an oriented closed path $\vec{\gamma}$, we can compute the winding number $w(\vec{\gamma})$ of the tangent vector along the curve. We need to fix a convention about what happens at vertices, where we pass from the tangent vector $v$ at the target of $\mathbf{e_i}$ to the tangent vector $v'$ at the source of $\mathbf{e_{i+1}}$. We choose to rotate $v$ into $v'$ by the angle less than $\pi$ in absolute value.

Note that $w(-\vec{\gamma}) = -w(\vec{\gamma})$, so that its parity $(-1)^{w[\vec{\gamma}]}$ makes sense for unoriented paths. Finally, by $x[\vec{\gamma}]$ we simply mean the product of all the variables $x_{e_i}$ for $e_i$ along $\vec{\gamma}$.

The product on the right hand side is infinite, since $P(G)$ is infinite in general (we will shortly do some examples). But, we regard the product as a formal power series in the terms $x_{e_1} x_{e_2} \cdots x_{e_n}$, each of which only receives finitely many contributions (there are only finitely many paths of a given length), so the right hand side is a well-defined formal power series.

Examples

Let’s do some examples, taken from Sherman. Suppose $G$ is a graph with one vertex $v$ and one edge $e$:

[figure: the graph with one vertex and one loop]

Write $x = x(e)$. There are two even subgraphs — the empty one, and $G$ itself. So the sum over even subgraphs gives $1 + x$. There is only a single closed path in $P(G)$, namely $[\mathbf{e}]$, with odd winding number, so the product over paths also gives $1 + x$. Hooray!

Now let’s consider a graph with two loops:

[figure: a graph with two loops, $e_1$ and $e_2$]

There are 4 even subgraphs, and the left hand side of the formula is $1 + x_1 + x_2 + x_1 x_2$. Now let’s count closed paths $[\vec{\gamma}] \in P(G)$. There are infinitely many; here is a table. Let $\mathbf{e_1}$ and $\mathbf{e_2}$ be the counterclockwise oriented versions of $e_1$ and $e_2$.

    [γ]                   1 - (-1)^{w[γ]} x[γ]
    ------                ------
    [e1]                  1 + x_1
    [e2]                  1 + x_2
    [e1 e2]               1 + x_1 x_2
    [e1 e2^{-1}]          1 - x_1 x_2
    [e1^2 e2]             1 - x_1^2 x_2
    [e1^2 e2^{-1}]        1 + x_1^2 x_2
    [e1 e2^2]             1 - x_1 x_2^2
    [e1^{-1} e2^2]        1 + x_1 x_2^2
    ...                   ...

If we multiply out the terms, the right hand side gives

$$ (1 + x_1 + x_2 + x_1 x_2)(1 - x_1^2 x_2^2)(1 - x_1^4 x_2^2)(1 - x_1^2 x_2^4) \cdots $$

In order for this to equal $1 + x_1 + x_2 + x_1 x_2$ we will need some miraculous cancellation in the higher powers to occur! And indeed this is what happens. The minus signs from the winding numbers conspire to cancel the remaining terms. Even in this simple example, the mechanism is not obvious — but it does happen.
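The left hand side, at least, is easy to check by brute force. Here is a minimal Python sketch (my own, purely illustrative) that enumerates the even subgraphs of a small graph and prints the corresponding monomials; a loop contributes two half-edges to its vertex, so in the two-loop example every subset of edges is even:

    from itertools import combinations

    def even_subgraph_monomials(vertices, edges):
        # edges is a list of (variable name, endpoint u, endpoint v); a loop
        # has u == v and contributes two half-edges to that vertex.
        monomials = []
        for r in range(len(edges) + 1):
            for subset in combinations(edges, r):
                degree = {v: 0 for v in vertices}
                for name, u, v in subset:
                    degree[u] += 1
                    degree[v] += 1
                if all(d % 2 == 0 for d in degree.values()):
                    monomials.append(" ".join(name for name, _, _ in subset) or "1")
        return monomials

    # The two-loop example: one vertex with two loops x1 and x2.
    print(even_subgraph_monomials(["v"], [("x1", "v", "v"), ("x2", "v", "v")]))
    # prints ['1', 'x1', 'x2', 'x1 x2'], i.e. the polynomial 1 + x1 + x2 + x1 x2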

Pondering the meaning of the formula

Let’s ponder the formula. Why do I say it is so beautiful?

Well, the left hand side is combinatorial — it has only to do with the abstract graph $G$, having the property that it is embeddable in the plane (this property can be abstractly encoded via Kuratowski’s theorem). The right hand side is geometric — we fix some embedding of $G$ in the plane, and then compute winding numbers of tangent vectors! So, the formula expresses a combinatorial (or topological) property of the graph in terms of geometry.

Ok… but why is this formula emblematic of all of quantum field theory? Well, summing over all loops is what the path integral in quantum mechanics is all about. (See Witten’s IAS lectures on the Dirac index on manifolds, for example.) Note that the quantum mechanics path integral has recently been made rigorous in the work of Baer and Pfaffle, as well as Fine and Sawin.

Also, I think the formula has overtones of the linked-cluster theorem in perturbative quantum field theory, which relates the generating function for all Feynman diagrams (similar to the even subgraphs) to the generating function for connected Feynman diagrams (similar to the closed paths). You can see why Feynman was interested!

History of the formula

One beautiful way of computing the partition function in the Ising model, due to Kac and Ward, is to express it as a square root of a certain determinant. (I hope to explain this next time.) To do this though, they needed a “topological theorem” about planar graphs. Their theorem was actually false in general, as shown by Sherman. It was Feynman who reformulated it in the above form. From Mark Kac’s autobiography (clip):

The two-dimensional case for so-called nearest neighbour interactions was solved by Lars Onsager in 1944. Onsager’s solution, a veritable tour de force of mathematical ingenuity and inventiveness, uncovered a number of surprising features and started a series of investigations which continue to this day. The solution was difficult to understand and George Uhlenbeck urged me to try to simplify it. “Make it human” was the way he put it …. At the Institute [for Advanced Studies at Princeton] I met John C. Ward … we succeeded in rederiving Onsager’s result. Our success was in large measure due to knowing the answer; we were, in fact, guided by this knowledge. But our solution turned out to be incomplete… it took several years and the effort of several people before the gap in the derivation was filled. Even Feynman got into the act. He attended two lectures I gave in 1952 at Caltech and came up with the clearest and sharpest formulation of what was needed to fill the gap. The only time I have ever seen Feynman take notes was during the two lectures. Usually, he is miles ahead of the speaker but following combinatorial arguments is difficult for all mortals.

Feynman’s formula for general graphs

Every finite graph can be embedded in some closed oriented surface of high enough genus. So there should be a generalization of the formula to all finite graphs, not just planar ones. But on the right hand side of the formula, how do we compute the winding number of a closed path on a general surface? The answer, in the formulation of Cimasoni, is beautiful: we should sum over spin structures on the surface, each weighted by its Arf invariant!

Generalized Feynman formula. Let $G$ be a finite graph of genus $g$, embedded in a closed oriented surface $\Sigma$ of genus $g$. Then the following two polynomials are equal:

$$ \sum_{H \subseteq_{\text{even}} G} x(H) \;=\; \frac{1}{2^g} \sum_{\lambda \in \mathrm{Spin}(\Sigma)} (-1)^{\mathrm{Arf}(\lambda)} \prod_{[\vec{\gamma}] \in P(G)} \left( 1 - (-1)^{w_\lambda[\vec{\gamma}]}\, x[\vec{\gamma}] \right) $$

The point is that a spin structure on $\Sigma$ can be represented as a nonzero vector field $\lambda$ on $\Sigma$ minus a finite set of points, with even index around these points. (Of course, a nonzero vector field on the whole of $\Sigma$ won’t exist, except on the torus. That is why we need these points.) So, we can measure the winding number $w_\lambda(\vec{\gamma})$ of a closed path $\vec{\gamma}$ with respect to this background vector field $\lambda$.

The first version of this generalized Feynman formula was obtained by Loebl, in the case where all vertices have degree 2 or 4, and using the notion of Sherman rotation numbers instead of spin structures (see also Loebl and Somberg). In independent work, Cimasoni formulated it differently using the language of spin structures and Arf invariants, and proved it in the slightly more general case of general graphs, though his proof is not a direct one. Also, in unpublished work, Masbaum and Loebl found a direct combinatorial argument (in the style of Sherman’s proof of the planar case) to prove this general, spin-structures version.

Last thoughts

I find the generalized Feynman formula to be very beautiful. The left hand side is completely combinatorial / topological, manifestly only depending on $G$. The right hand side picks some embedding of the graph in a surface, and is very geometric, referring to high-brow things such as spin structures and Arf invariants! Who knew that there was such an elegant geometric theorem lurking behind arbitrary finite graphs?

Moreover, it is all part of a beautiful collection of ideas relating the Ising model to the free fermion conformal field theory. (Indeed, the appearance of spin structures and winding numbers is telling us we are dealing with fermions.) Of course, physicists knew this for ages, but it hasn’t been clear to mathematicians exactly what they meant :-)

But in recent times, mathematicians are making this all precise, and beautiful geometry is emerging, like the above formula. There’s even a Fields medal in the mix. It’s all about discrete complex analysis, spinors on Riemann surfaces, the discrete Dirac equation, isomonodromic deformation of flat connections, heat kernels, conformal invariance, Pfaffians, and other amazing things (here is a recent expository talk of mine). I hope to explain some of this story next time.

by willerton (S.Willerton@sheffield.ac.uk) at June 29, 2015 01:18 PM

Clifford V. Johnson - Asymptotia

Warm…
So far, the Summer has not been as brutal in the garden as it was last year. Let's hope that continues. I think that late rain we had last month (or earlier this month?) helped my later planting get a good start too. This snap of a sunflower was taken on a lovely warm evening in the garden the other day, after a (only slightly too) hot day... -cvj

by Clifford at June 29, 2015 01:17 PM

June 28, 2015

Tommaso Dorigo - Scientificblogging

In Memory Of David Cline
I was saddened today to hear of the death of David Cline. I do not have much to say here - I am not good with obituaries - but I do remember meeting him at a conference in Albuquerque in 2008, where we chatted on several topics, among them the history of the CDF experiment, a topic on which I had just started to write a book. 

Perhaps the best I can do here as a way to remember Cline, whose contributions to particle physics can and will certainly be better described by many others, is to report a quote from a chapter of the book, which describes a funny episode on the very early days of CDF. I think he did have a sense of humor, so he might not dislike it if I do.

---


by Tommaso Dorigo at June 28, 2015 03:59 PM

June 26, 2015

Clifford V. Johnson - Asymptotia

Naddy
Yesterday, an interesting thing happened while I was out in my neighbourhood walking my son for a good hour or more (covered, in a stroller - I was hoping he'd get some sleep), visiting various shops, running errands. Before describing it, I offer two bits of background information as (possibly relevant?) context. (1) I am black. (2) I live in a neighbourhood where there are very few people of my skin colour as residents. Ok, here's the thing: * * * I'm approaching two young (mid-to-late teens?) African-American guys, sitting at a bus stop, chatting and laughing good-naturedly. As I begin to pass them, nodding a hello as I push the stroller along, one of them stops me. [...]

by Clifford at June 26, 2015 11:33 PM

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

Robert Boyle Summer School 2015

This weekend, I’m at the Robert Boyle Summer School, an annual 3-day science festival in Lismore, Co. Waterford in Ireland. It’s my favourite annual conference by some margin – a small number of talks by some highly eminent scholars of the history and philosophy of science, aimed at curious academics and the public alike, with lots of time for questions and discussion after each presentation.

Born in Lismore Castle into a wealthy landowning family, Boyle  became one of the most important figures in the Scientific Revolution,  well-known for his scientific discoveries, his role in the Royal Society and his influence in promoting the new ‘experimental philosophy’ in science.


The Irish-born scientist and aristocrat Robert Boyle   


Lismore Castle in Co. Waterford , the birthplace of Robert Boyle

As ever, the summer school takes place in Lismore, the beautiful town that is the home of Lismore Castle where Boyle was born. This year, the conference commemorates the 350th anniversary of the Philosophical Transactions of the Royal Society by considering the history of the publication of scientific work, from the first issue of  Phil. Trans. to the problem of fraud in scientific publication today.

The first talk this morning was ‘Robert Boyle, Philosophical Transactions and Scientific Communication’ by Professor Michael Hunter of Birkbeck College. Professor Hunter is one of the world’s foremost experts on Boyle, and he gave a thorough overview of Boyle’s use of the Phil. Trans. to disseminate his findings. Afterwards, Dr. Aileen Fyfe of the University of St Andrews gave the talk ‘Peer Review: A History From 1665’, carefully charting how the process of peer review evolved from Boyle’s time to today.


The renowned Boyle scholar Professor Michael Hunter of Birkbeck College, University of London, in action

This afternoon, we had the wonderful talk ‘Lady Ranelagh, the Hartlib Circle and Networks for Scientific Correspondence’ in the spectacular setting of St Carthage’s Cathedral, given by Dr. Michelle DiMeo of the Chemical Heritage Foundation. I knew nothing of Lady Ranelagh or the notion of a Republic of Letters before this. The Hartlib Circle was clearly an important forerunner of the Philosophical Transactions, and Lady Ranelagh’s role in the Circle and in Boyle’s scientific life has been greatly overlooked.


St Carthage’s Cathedral in Lismore


Professor DiMeo unveiling a plaque in memory of Lady Ranelagh at the Castle. The new plaque is on the right, to accompany the existing plaque in memory of Robert Boyle on the left 

Tomorrow will see talks by Professor Dorothy Bishop (Oxford) and Sir John Pethica (Trinity College Dublin), but in the meanwhile I need to catch some sleep before tonight’s barbecue in Lismore Castle!


Off to the Castle for dinner

Update

We had some great music up at the Castle last night, followed by an impromptu session in one of the village pubs. The highlight for many was when Sir John Pethica,  VP of the Royal Society, produced a fiddle from somewhere and joined in. As did his wife, Pam – talk about Renaissance men and women!

Turning to more serious topics, this morning Professor Bishop gave a frightening account of some recent cases of fraudulent publication in science – including a case she herself played a major part in exposing! However, not to despair, as Sir John noted in his presentation that the problem may be much more prevalent in some areas of science than others. This made sense to me, as my own experience of the publishing world in physics is that of very conservative editors that err on the side of caution. Indeed, it took a long time for our recent discovery of an unknown theory by Einstein to be accepted as such by the physics journals.

All in all, a superb conference in a beautiful setting.  On the last day, we were treated to a tour of the castle gardens, accompanied by Robert Boyle and his sister.


Robert Boyle and his sister Lady Ranelagh picking flowers at the Castle on the last day of the conference

You can find the full conference programme here. The meeting was sponsored by Science Foundation Ireland, the Royal Society of Chemistry, the Institute of Chemistry (Ireland), the Institute of Physics (Ireland), the Robert Boyle Foundation, i-scan, Abbott, Lismore Castle Arts and the Lismore House Hotel.


by cormac at June 26, 2015 04:27 PM

John Baez - Azimuth

Higher-Dimensional Rewriting in Warsaw (Part 2)

Today I’m going to this workshop:

Higher-Dimensional Rewriting and Applications, 28-29 June 2015, Warsaw, Poland.

Many of the talks will be interesting to people who are trying to use category theory as a tool for modelling networks!

For example, though they can’t actually attend, Lucius Meredith and my student Mike Stay hope to use Google Hangouts to present their work on Higher category models of the π-calculus. The π-calculus is a way of modelling networks where messages get sent here and there, e.g. the internet. Check out Mike’s blog post about this:

• Mike Stay, A 2-categorical approach to the pi calculus, The n-Category Café, 26 May 2015.

Krzysztof Bar, Aleks Kissinger and Jamie Vicary will be speaking about Globular, a proof assistant for computations in n-categories:

This talk is a progress report on Globular, an online proof assistant for semistrict higher-dimensional rewriting. We aim to produce a tool which can visualize higher-dimensional categorical diagrams, assist in their construction with a point-and-click interface, perform type checking to prevent incorrect composites, and automatically handle the interchanger data at each dimension. Hosted on the web, it will have a low barrier to use, and allow hyperlinking of formalized proofs directly from research papers. We outline the theoretical basis for the tool, and describe the challenges we have overcome in its design.

Eric Finster will be talking about another computer system for dealing with n-categories, based on the ‘opetopic’ formalism that James Dolan and I invented. And Jason Morton is working on a computer system for computation in compact closed categories! I’ve seen it, and it’s cool, but he can’t attend the workshop, so David Spivak will be speaking on his work with Jason on the theoretical foundations of this software:

We consider the linked problems of (1) finding a normal form for morphism expressions in a closed compact category and (2) the word problem, that is deciding if two morphism expressions are equal up to the axioms of a closed compact category. These are important ingredients for a practical monoidal category computer algebra system. Previous approaches to these problems include rewriting and graph-based methods. Our approach is to re-interpret a morphism expression in terms of an operad, and thereby obtain a single composition which is strictly associative and applied according to the abstract syntax tree. This yields the same final operad morphism regardless of the tree representation of the expression or order of execution, and solves the normal form problem up to automorphism.

Recently Eugenia Cheng has been popularizing category theory, touring to promote her book Cakes, Custard and Category Theory. But she’ll be giving two talks in Warsaw, I believe on distributive laws for Lawvere theories.

As for me, I’ll be promoting my dream of using category theory to understand networks in electrical engineering. I’ll be giving a talk on control theory and a talk on electrical circuits: two sides of the same coin, actually.

• John Baez, Jason Erbele and Nick Woods, Categories in control.

If you’ve seen a previous talk of mine with the same title, don’t despair—this one has new stuff! In particular, it talks about a new paper by Nick Woods and Simon Wadsley.

Abstract. Control theory is the branch of engineering that studies dynamical systems with inputs and outputs, and seeks to stabilize these using feedback. Control theory uses “signal-flow diagrams” to describe processes where real-valued functions of time are added, multiplied by scalars, differentiated and integrated, duplicated and deleted. In fact, these are string diagrams for the symmetric monoidal category of finite-dimensional vector spaces, but where the monoidal structure is direct sum rather than the usual tensor product. Jason Erbele has given a presentation for this symmetric monoidal category, which amounts to saying that it is the PROP for bicommutative bimonoids with some extra structure.

A broader class of signal-flow diagrams also includes “caps” and “cups” to model feedback. This amounts to working with a larger symmetric monoidal category where objects are still finite-dimensional vector spaces but the morphisms are linear relations. Erbele also found a presentation for this larger symmetric monoidal category. It is the PROP for a remarkable thing: roughly speaking, an object with two special commutative dagger-Frobenius structures, such that the multiplication and unit of either one and the comultiplication and counit of the other fit together to form a bimonoid.

• John Baez and Brendan Fong, Circuits, categories and rewrite rules.

Abstract. We describe a category where a morphism is an electrical circuit made of resistors, inductors and capacitors, with marked input and output terminals. In this category we compose morphisms by attaching the outputs of one circuit to the inputs of another. There is a functor called the ‘black box functor’ that takes a circuit, forgets its internal structure, and remembers only its external behavior. Two circuits have the same external behavior if and only if they impose the same relation between currents and potentials at their terminals. This is a linear relation, so the black box functor goes from the category of circuits to the category of finite-dimensional vector spaces and linear relations. Constructing this functor makes use of Brendan Fong’s theory of ‘decorated cospans’—and the question of whether two ‘planar’ circuits map to the same relation has an interesting answer in terms of rewrite rules.

The answer to the last question, in the form of a single picture, is this:

How can you change an electrical circuit made out of resistors without changing what it does? 5 ways are shown here:

  1. You can remove a loop of wire with a resistor on it. It doesn’t do anything.
  2. You can remove a wire with a resistor on it if one end is unattached. Again, it doesn’t do anything.

  3. You can take two resistors in series—one after the other—and replace them with a single resistor. But this new resistor must have a resistance that’s the sum of the old two.

  4. You can take two resistors in parallel and replace them with a single resistor. But this resistor must have a conductivity that’s the sum of the old two. (Conductivity is the reciprocal of resistance.)

  5. Finally, the really cool part: the Y-Δ transform. You can replace a Y made of 3 resistors by a triangle of resistors. But their resistances must be related by the equations shown here (one standard form of these relations is spelled out in the sketch after this list).
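Since the picture with the actual equations is not reproduced in this text version, here is a small Python sketch (my own summary of the standard series, parallel and Y-Δ rules, not code from Baez's talk) spelling out rules 3-5 numerically:

    def series(r1, r2):
        # Rule 3: resistors in series add their resistances.
        return r1 + r2

    def parallel(r1, r2):
        # Rule 4: resistors in parallel add their conductances (reciprocals of resistance).
        return 1.0 / (1.0 / r1 + 1.0 / r2)

    def y_to_delta(ra, rb, rc):
        # Rule 5, the standard Y-Delta transform: ra, rb, rc run from terminals
        # A, B, C to the central node of the Y; the returned triple gives the
        # resistances of the triangle edges (R_AB, R_BC, R_CA).
        s = ra * rb + rb * rc + rc * ra
        return (s / rc, s / ra, s / rb)

    print(series(2.0, 3.0))           # 5.0
    print(parallel(2.0, 2.0))         # 1.0
    print(y_to_delta(1.0, 1.0, 1.0))  # (3.0, 3.0, 3.0): a symmetric Y becomes a symmetric triangle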

For circuits drawn on the plane, these are all the rules you need! This was proved here:

• Yves Colin de Verdière, Isidoro Gitler and Dirk Vertigan, Réseaux électriques planaires II.

It’s just the beginning of a cool story, which I haven’t completely blended with the categorical approach to circuits. Doing so clearly calls for 2-categories: those double arrows are 2-morphisms! For more, see:

• Joshua Alman, Carl Lian and Brandon Tran, Circular planar electrical networks I: The electrical poset EPn.


by John Baez at June 26, 2015 04:17 AM

June 25, 2015

Tommaso Dorigo - Scientificblogging

Early-Stage Researcher Positions To Open Soon
The Marie-Curie network I am coordinating, AMVA4NewPhysics, is going to start very soon, and with its start several things are going to happen. One you should not be concerned with is the arrival of the first tranche of the 2.4M euros that the European Research Council has granted us. Something more interesting to you, if you have a degree in Physics or Statistics, is the fact that the network will soon start hiring ten skilled post-lauream researchers across Europe, with the aim of providing them with an exceptional plan of advanced training in particle physics, data analysis, statistics, machine learning, and more.


by Tommaso Dorigo at June 25, 2015 06:57 PM

Symmetrybreaking - Fermilab/SLAC

Exploring dark energy with robots

The Dark Energy Spectroscopic Instrument will produce a 3-D space map using a ‘hive’ of robots. 

Five thousand pencil-shaped robots, densely nested in a metal hive, whir to life with a precise, dizzying choreography. Small U-shaped heads swivel into a new arrangement in a matter of seconds.

This preprogrammed routine will play out about four times per hour every night at the Dark Energy Spectroscopic Instrument. The robots of DESI will be used to produce a 3-D map of one-third of the sky. This will help DESI fulfill its primary mission of investigating dark energy, a mysterious force thought to be causing the acceleration of the expansion of the universe.

The tiny robots will be arranged in 10 wedge-shaped metal “petals” that together form a cylinder about 2.6 feet across. They will maneuver the ends of fiber-optic cables to point at sets of galaxies and other bright objects in the universe. DESI will determine their distance from Earth based on the light they emit.

DESI’s robots are in development at Lawrence Berkeley National Laboratory, the lead in the DESI collaboration, and at the University of Michigan.

Courtesy of: DESI collaboration

The robots—each about 8 millimeters wide in their main section and 8 inches long—will be custom-built around commercially available motors measuring just 4 millimeters in diameter. This type of precision motor, at this size, became commercially available in 2013 and is now manufactured by three companies. The motors have found use in medical devices such as insulin pumps, surgical robots and diagnostic tools.

At DESI, the robots will automate what was formerly a painstaking manual process used at previous experiments. At the Baryon Oscillation Spectroscopic Survey, or BOSS, which began in 2009, technicians must plug 1000 fibers by hand several times each day into drilled metal plates, like operators plugging cables into old-fashioned telephone switchboards.

“DESI is exciting because all of that work will be done robotically,” says Risa Wechsler, a co-spokesperson for DESI and an associate professor of the Kavli Institute for Particle Astrophysics and Cosmology, a joint institute of Stanford University and SLAC National Accelerator Laboratory. Using the robots, DESI will be able to redirect all of its 5000 fibers in an elaborate dance in less than 30 seconds (see video).

“DESI definitely represents a new era,” Wechsler says.

In addition to precisely measuring the color of light emitted by space objects, DESI will also measure how the clustering of galaxies and quasars, which are very distant and bright objects, has evolved over time. It will calculate the distance for up to 25 million space objects, compared to the fewer than 2 million objects examined by BOSS.

The robots are designed to both collect and transmit light. After each repositioning of fibers, a special camera measures the alignment of each robot’s fiber-optic cable within thousandths of a millimeter. If the robots are misaligned, they are automatically individually repositioned to correct the error.

Each robot has its own electronics board and can shut off and turn on independently, says Joe Silber, an engineer at Berkeley Lab who manages the system that includes the robotic array.

In seven successive generations of prototype designs, Silber has worked to streamline and simplify the robots, trimming down their design from 60 parts to just 18. “It took a long time to really understand how to make these things as cheap and simple as possible,” he says. “We were trying not to get too clever with them.”

The plan is for DESI to begin a 5-year run at Kitt Peak National Observatory near Tucson, Arizona, in 2019. Berkeley and Michigan scientists plan to build a test batch of 500 robots early next year, and to build the rest in 2017 and 2018.


by Glenn Roberts Jr. at June 25, 2015 01:00 PM

Clifford V. Johnson - Asymptotia

Speed Dating for Science!
Last night was amusing. I was at the YouTubeLA space with 6 other scientists from various fields, engaging with an audience of writers and other creators for YouTube, TV, film, etc. It was an event hosted by the Science and Entertainment Exchange and Youtube/Google, and the idea was that we each had seven minutes to present in seven successive rooms with different audiences in each, so changing rooms each seven minutes. Of course, early on during the planning conference call for the event, one of the scientists asked why it was not more efficient to simply have one large [...]

by Clifford at June 25, 2015 04:45 AM

June 24, 2015

Sean Carroll - Preposterous Universe

Algebra of the Infrared

In my senior year of college, when I was beginning to think seriously about graduate school, a magical article appeared in the New York Times magazine. Called “A Theory of Everything,” by KC Cole, it conveyed the immense excitement that had built in the theoretical physics community behind an idea that had suddenly exploded in popularity after burbling beneath the surface for a number of years: a little thing called “superstring theory.” The human-interest hook for the story was simple — work on string theory was being led by a brilliant 36-year-old genius, a guy named Ed Witten. It was enough to cement Princeton as the place I most wanted to go to for graduate school. (In the end, they didn’t let me in.)

Nearly thirty years later, Witten is still going strong. As evidence, check out this paper that recently appeared on the arxiv, with co-authors Davide Gaiotto and Greg Moore:

Algebra of the Infrared: String Field Theoretic Structures in Massive N=(2,2) Field Theory In Two Dimensions
Davide Gaiotto, Gregory W. Moore, Edward Witten

We introduce a “web-based formalism” for describing the category of half-supersymmetric boundary conditions in 1+1 dimensional massive field theories with N=(2,2) supersymmetry and unbroken U(1)R symmetry. We show that the category can be completely constructed from data available in the far infrared, namely, the vacua, the central charges of soliton sectors, and the spaces of soliton states on ℝ, together with certain “interaction and boundary emission amplitudes”. These amplitudes are shown to satisfy a system of algebraic constraints related to the theory of A∞ and L∞ algebras. The web-based formalism also gives a method of finding the BPS states for the theory on a half-line and on an interval. We investigate half-supersymmetric interfaces between theories and show that they have, in a certain sense, an associative “operator product.” We derive a categorification of wall-crossing formulae. The example of Landau-Ginzburg theories is described in depth drawing on ideas from Morse theory, and its interpretation in terms of supersymmetric quantum mechanics. In this context we show that the web-based category is equivalent to a version of the Fukaya-Seidel A∞-category associated to a holomorphic Lefschetz fibration, and we describe unusual local operators that appear in massive Landau-Ginzburg theories. We indicate potential applications to the theory of surface defects in theories of class S and to the gauge-theoretic approach to knot homology.

I cannot, in good conscience, say that I understand very much about this new paper. It’s the kind of mathematical/formal field theory that is pretty far outside my bailiwick. (This is why scientists roll their eyes when a movie “physicist” is able to invent a unified field theory, build a time machine, and construct nanobots that can cure cancer. Specialization is real, folks!)

But there are two things about the paper that I nevertheless can’t help remarking on. One is that it’s 429 pages long. I mean, damn. That’s a book, not a paper. Scuttlebutt informs me that the authors had to negotiate specially with the arxiv administrators just to upload the beast. Most amusingly, they knew perfectly well that a 400+ page work might come across as a little intimidating, so they wrote a summary paper!

An Introduction To The Web-Based Formalism
Davide Gaiotto, Gregory W. Moore, Edward Witten

This paper summarizes our rather lengthy paper, “Algebra of the Infrared: String Field Theoretic Structures in Massive N=(2,2) Field Theory In Two Dimensions,” and is meant to be an informal, yet detailed, introduction and summary of that larger work.

This short, user-friendly introduction is a mere 45 pages — still longer than 95% of the papers in this field. After a one-paragraph introduction, the first words of the lighthearted summary paper are “Let X be a Kähler manifold, and W : X → C a holomorphic Morse function.” So maybe it’s not that informal.

The second remarkable thing is — hey look, there’s my name! Both of the papers cite one of my old works from when I was a grad student, with Simeon Hellerman and Mark Trodden. (A related paper was written near the same time by Gary Gibbons and Paul Townsend.)

Domain Wall Junctions are 1/4-BPS States
Sean M. Carroll, Simeon Hellerman, Mark Trodden

We study N=1 SUSY theories in four dimensions with multiple discrete vacua, which admit solitonic solutions describing segments of domain walls meeting at one-dimensional junctions. We show that there exist solutions preserving one quarter of the underlying supersymmetry — a single Hermitian supercharge. We derive a BPS bound for the masses of these solutions and construct a solution explicitly in a special case. The relevance to the confining phase of N=1 SUSY Yang-Mills and the M-theory/SYM relationship is discussed.

Simeon, who was a graduate student at UCSB at the time and is now faculty at the Kavli IPMU in Japan, was the driving force behind this paper. Mark and I had recently written a paper on different ways that topological defects could intersect and join together. Simeon, who is an expert in supersymmetry, noticed that there was a natural way to make something like that happen in supersymmetric theories: in particular, domain walls (sheets that stretch through space, separating different possible vacuum states) could intersect at “junctions.” Even better, domain-wall junction configurations would break some of the supersymmetry but not all of it. Setups like that are known as BPS states, and are highly valued and useful to supersymmetry aficionados. In general, solutions to quantum field theories are very difficult to find and characterize with any precision, but the BPS property lets you invoke some of the magic of supersymmetry to prove results that would otherwise be intractable.

Admittedly, the above paragraph is likely to be just as opaque to the person on the street as the Gaiotto/Moore/Witten paper is to me. The point is that we were able to study the behavior of domain walls and how they come together using some simple but elegant techniques in field theory. Think of drawing some configuration of walls as a network of lines in a plane. (All of the configurations we studied were invariant along some “vertical” direction in space, as well as static in time, so all the action happens in a two-dimensional plane.) Then we were able to investigate the set of all possible ways such walls could come together to form allowed solutions. Here’s an example, using walls that separate four different possible vacuum states:

[Figure: a network of domain walls in the plane, separating four different vacuum states (wall-moduli-3)]

As far as I understand it (remember — not that far!), this is a very baby version of what Gaiotto, Moore, and Witten have done. Like us, they look at a large-distance limit, worrying about how defects come together rather than the detailed profiles of the individual configurations. That’s the “infrared” in their title. Unlike us, they go way farther, down a road known as “categorification” of the solutions. In particular, they use a famous property of BPS states: you can multiply them together to get other BPS states. That’s the “algebra” of their title. To mathematicians, algebras aren’t just ways of “solving for x” in equations that tortured you in high school; they are mathematical structures describing sets of vectors that can be multiplied by each other to produce other vectors. (Complex numbers are an algebra; so are ordinary three-dimensional vectors, using the cross product operation.)
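As a concrete illustration of that closing example, here is a tiny NumPy sketch (mine, not from the post): the cross product of two 3-vectors is again a 3-vector, which is exactly the “vectors multiply to give other vectors” property that makes ordinary three-dimensional space an algebra.

    # A minimal sketch: 3-vectors close under the cross product,
    # so they form an algebra in the sense described above.
    import numpy as np

    a = np.array([1.0, 0.0, 0.0])
    b = np.array([0.0, 1.0, 0.0])

    c = np.cross(a, b)   # [0, 0, 1] -- still a 3-vector
    d = np.cross(c, a)   # [0, 1, 0] -- and so is any further product
    print(c, d)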

At this point you’re allowed to ask: Why should I care? At least, why should I imagine putting in the work to read a 429-page opus about this stuff? For that matter, why did these smart guys put in the work to write such an opus?

It’s a reasonable question, but there’s also a reasonable answer. In theoretical physics there are a number of puzzles and unanswered questions that we are faced with, from “Why is the mass of the Higgs 125 GeV?” to “How does information escape from black holes?” Really these are all different sub-questions of the big one, “How does Nature work?” By construction, we don’t know the answer to these questions — if we did, we’d move onto other ones. But we don’t even know the right way to go about getting the answers. When Einstein started thinking about fitting gravity into the framework of special relativity, Riemannian geometry was absolutely the last thing on his mind. It’s hard to tell what paths you’ll have to walk down to get to the final answer.

So there are different techniques. Some people will try a direct approach: if you want to know how information comes out of a black hole, think as hard as you can about what happens when black holes radiate. If you want to know why the Higgs mass is what it is, think as hard as you can about the Higgs field and other possible fields we haven’t yet found.

But there’s also a more patient, foundational approach. Quantum field theory is hard; to be honest, we don’t understand it all that well. There’s little question that there’s a lot to be learned by studying the fundamental behavior of quantum field theories in highly idealized contexts, if only to better understand the space of things that can possibly happen with an eye to eventually applying them to the real world. That, I suspect, is the kind of motivation behind a massive undertaking like this. I don’t want to speak for the authors; maybe they just thought the math was cool and had fun learning about these highly unrealistic (but still extremely rich) toy models. But the ultimate goal is to learn some basic wisdom that we will someday put to use in answering that underlying question: How does Nature work?

As I said, it’s not really my bag. I don’t have nearly the patience nor the mathematical aptitude required to make real progress in this kind of way. I’d rather try to work out on general principles what could have happened near the Big Bang, or how our classical world emerges out of the quantum wave function.

But, let a thousand flowers bloom! Gaiotto, Moore, and Witten certainly know what they’re doing, and hardly need to look for my approval. It’s one strategy among many, and as a community we’re smart enough to probe in a number of different directions. Hopefully this approach will revolutionize our understanding of quantum field theory — and at my retirement party everyone will be asking me why I didn’t stick to working on domain-wall junctions.

by Sean Carroll at June 24, 2015 09:48 PM

ZapperZ - Physics and Physicists

Gravitational Lensing
Here's a simple intro to gravitational lensing, if you are not familiar with it.



Zz.

by ZapperZ (noreply@blogger.com) at June 24, 2015 06:29 PM

Symmetrybreaking - Fermilab/SLAC

Seeing in gamma rays

The latest sky maps produced by the Fermi Gamma-ray Space Telescope combine nearly seven years of observations.

Maps from the Fermi Gamma-ray Space Telescope literally show the universe in a different light.

Today Fermi’s Large Area Telescope (LAT) collaboration released the latest data from nearly seven years of watching the universe at a broad range of gamma-ray energies.

Gamma rays are the highest-energy form of light in the cosmos. They come from jets of high-energy particles accelerated near supermassive black holes at the centers of galaxies, shock waves around exploded stars, and the intense magnetic fields of fast-spinning collapsed stars. On Earth, gamma rays are produced by nuclear reactors, lightning and the decay of radioactive elements.

From low-Earth orbit, the Fermi Gamma-ray Space Telescope scans the entire sky for gamma rays every three hours. It captures new and recurring sources of gamma rays at different energies, and it can be diverted from its usual course to fix on explosive events known as gamma-ray bursts.

Combining data collected over years, the LAT collaboration periodically creates gamma-ray maps of the universe. These colored maps plot the universe’s most extreme events and high-energy objects.

The all-sky maps typically portray the universe as an ellipse that shows the entire sky at once, as viewed from Earth. On the maps, the brightest gamma-ray light is shown in yellow and progressively dimmer gamma-ray light is shown in red, blue, and black. These are false colors, though; gamma rays are invisible to the human eye.

The maps are oriented with the center of the Milky Way at their center and the plane of our galaxy oriented horizontally across the middle.  The plane of the Milky Way is bright in gamma rays. Above and below the bright band, much of the gamma-ray light comes from outside of our galaxy.
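For the curious, here is a minimal matplotlib sketch of how such a picture is assembled. To be clear, this is not the Fermi-LAT pipeline: the photon counts below are fake, and the bright strip standing in for the galactic plane is painted in by hand. It only illustrates the elliptical (Mollweide) all-sky projection and a black-to-red-to-yellow false-color scale of the kind described above.

    # Illustrative only: a fake all-sky "counts" map shown in a Mollweide
    # projection with a false-color scale (black = faint, yellow = bright).
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(42)

    lon = np.linspace(-np.pi, np.pi, 360)          # galactic longitude
    lat = np.linspace(-np.pi / 2, np.pi / 2, 180)  # galactic latitude
    LON, LAT = np.meshgrid(lon, lat)

    # Diffuse background plus an artificial bright band along the plane (lat ~ 0).
    counts = rng.poisson(2.0 + 80.0 * np.exp(-(LAT / 0.05) ** 2))

    fig = plt.figure(figsize=(8, 4))
    ax = fig.add_subplot(111, projection="mollweide")  # the ellipse-shaped sky view
    im = ax.pcolormesh(LON, LAT, counts, cmap="hot", shading="auto")
    ax.set_xticklabels([])
    ax.set_yticklabels([])
    fig.colorbar(im, ax=ax, label="photon counts (arbitrary)")
    plt.show()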

“What you see in gamma rays is not so predictable,” says Elliott Bloom, a SLAC National Accelerator Laboratory professor and member of the Kavli Institute for Particle Astrophysics and Cosmology (KIPAC) who is part of a scientific collaboration supporting Fermi’s principal instrument, the Large Area Telescope.

Teams of researchers have identified mysterious, massive “bubbles” blooming 30,000 light-years outward from our galaxy’s center, for example, with most features appearing only at gamma-ray wavelengths.

Scientists create several versions of the Fermi sky maps. Some of them focus only on a specific energy range, says Eric Charles, another member of the Fermi collaboration who is also a KIPAC scientist.

“You learn a lot by correlating things in different energy ‘bins,’” he says. “If you look at another map and see completely different things, then there may be these different processes. What becomes useful is at different wavelengths you can make comparisons and correlate things.”

But sometimes what you need is the big picture, says Seth Digel, a SLAC senior staff scientist and a member of KIPAC and the Fermi team. “There are some aspects you can only study with maps, such as looking at the extended gamma-ray emissions—not just the point sources, but regions of the sky that are glowing in gamma rays for different reasons.”

 

Like what you see? Sign up for a free subscription to symmetry!

by Glenn Roberts Jr. at June 24, 2015 04:13 PM

arXiv blog

How Machine Vision Solved One of the Great Mysteries of 20th-Century Surrealist Art

The great Belgian surrealist Magritte painted two versions of one of his masterpieces, and nobody has been able to distinguish the original from the copy. Until now.


In 1983, a painting by the Belgian surrealist René Magritte came up for auction in New York. The artwork was painted in 1948 and depicts a bird of prey morphing into a leaf which is being eaten by a caterpillar–perhaps an expression of sorrow for the Second World War, which Magritte spent in occupied Belgium.

June 24, 2015 04:00 PM

ATLAS Experiment

Faster and Faster!

Simon Ammann from Switzerland starts from the hill during the training jump of the second station of the four hills ski jumping tournament in Garmisch-Partenkirchen, southern Germany, on Thursday, Dec. 31, 2009. (AP Photo/ Matthias Schrader)

Faster and Faster! This is how it gets as soon as LS1 ends and the first collisions of LHC Run 2 approach. As you might have noticed, at particle physics experiments we LOVE acronyms! LS1 stands for the first Long Shutdown of the Large Hadron Collider.

After the end of Run 1 collisions in March 2013 we had two full years of repairs, consolidations and upgrades of the ATLAS detector. Elevators at P1 (that is Point 1, one of the 8 zones where we can get access to the LHC tunnel located 100 m underground) were once again as crowded as elevator shafts in a coal mine. Although all the activities were well programmed, during the last days the activity was frenetic and we had the impression that the number of things on our to-do lists was increasing rather than decreasing.

Finally, last week I was sitting in the ACR (Atlas Control Room) with experts, shifters, run coordinators, and the ATLAS spokesperson for the first fills of the LHC that produced “low luminosity collisions”. You might think that, for a collider that is designed to reach a record instantaneous luminosity (that is the rate of collisions in a given amount of time), last week’s collisions were just a warm up.

Well, this is not entirely true.


Racks in USA15 (100m underground) hosting trigger electronics for the selection of minimum bias collisions (rack in foreground with brown cables). In background (with thick black cables), electronics for the calorimetric trigger. (Picture by the author.)

Last week we had the unique opportunity to collect data with very particular beam conditions that we call “low pile-up”. That means that every time the bunches of protons cross through each other, the protons have a very small probability of actually colliding. What is important is that the probability of having two or more collisions in the same bunch crossing is negligible, since we are only interested in crossings that produce exactly one collision. These data are fundamental for performing a variety of physics measurements.
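To put a rough number on “negligible”, here is a back-of-the-envelope Python sketch. The number of interactions per bunch crossing is, to a good approximation, Poisson-distributed; the mean value mu used below is an assumption chosen for illustration, not an official ATLAS figure.

    # Toy illustration of "low pile-up": with a small mean number of
    # interactions per bunch crossing, crossings with two or more
    # simultaneous collisions are very rare.
    from math import exp, factorial

    def poisson(k, mu):
        """Probability of exactly k interactions in one bunch crossing."""
        return mu**k * exp(-mu) / factorial(k)

    mu = 0.05  # assumed average number of interactions per crossing

    p0 = poisson(0, mu)
    p1 = poisson(1, mu)
    p_ge2 = 1.0 - p0 - p1

    print(f"P(no interaction)     = {p0:.4f}")
    print(f"P(exactly one)        = {p1:.4f}")
    print(f"P(two or more)        = {p_ge2:.5f}")
    print(f"P(>=2 | at least one) = {p_ge2 / (1.0 - p0):.4f}")  # ~2.5% here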

Just to cite a few of them:

  • the measurement of the proton-proton cross section (“how large are the protons?”) at the new center of mass energy of 13 TeV;
  • the study of diffractive processes in proton-proton collisions (YES, protons are waves also!); and
  • the characterization of “minimum bias” collisions (these represent the overwhelming majority of collisions and are just the “opposite” of collisions that produce top quarks, Higgs bosons and possibly exotic or supersymmetric particles), which are key ingredients for tuning the Monte Carlo simulations that will be used for all physics analyses in ATLAS (including Higgs physics and Beyond Standard Model searches).

Over the past few months, I’ve been coordinating a working group with people around the world (Italy, Poland, China, UK, and US) – none of them resident full time at CERN – who are responsible for the on-line selection of these events (we call this the trigger). Although we meet weekly (not trivial given the different time zones) and regularly exchange e-mails, I had never met these people face to face. It was strange to finally see their faces in a meeting room at CERN, although I could recognize their voices.

Clint Eastwood in "Per Qualche Dollaro in piu`" movie (Director: Sergio Leone)

Clint Eastwood in “Per Qualche Dollaro in più” (Director: Sergio Leone) (Produzioni Europee Associate and United Artists)

We worked very hard for the last week of data-taking, trying to be prepared for all possible scenarios and problems we might encounter. There was no room for mistakes that could spoil the quality of the data.

We cannot press the “replay” button.

It was like “one shot, one kill”.

Luckily everything ran smoothly; there weren’t too many issues, and none of them were severe.

This is only one of the activities in which my institution, the Istituto Nazionale di Fisica Nucleare (Sezione di Bologna), the University of Bologna, and the other 12 Italian ATLAS groups were involved during the Run 2 start-up of the LHC.


Antonio Sidoti is a physicist in Bologna (Italy) at the Istituto Nazionale di Fisica Nucleare. His research includes searches for Higgs boson production in association with top quarks, upgrade studies for the new inner tracker, and trigger software development using Graphical Processing Units. He is coordinating the ATLAS Minimum Bias and Forward Detector Trigger Signature group and is now deputy coordinator of physics analysis for the Italian groups in ATLAS. When he is not working he plays piano, runs marathons, skis or windsurfs.

by Antonio Sidoti at June 24, 2015 01:47 PM

Lubos Motl - string vacua and pheno

CMS cooling magnet glitch not serious, detector will run


Two weeks ago, Adam Falkowski propagated the following Twitter rumor:
LHC rumor: serious problems with the CMS magnet. Possibly, little to none useful data from CMS this year.
Fortunately, this proposition seems to be heavily exaggerated fearmongering at this point.




On June 14th, CMS published this news:
CMS is preparing for high-luminosity run at 13 TeV
Jester's rumor was based on a true fact but all of its "important" claims were wrong.




The CMS detector has had a problem with its magnet cooling system. After some time, the fault was traced to the machinery that feeds helium to the superconducting magnet system: oil had reached the cold box, a component in the initial compression stage.

Repairs have shown that the CMS magnet itself has not been contaminated by oil, which means that the problem was superficial and has hopefully been fixed by now. Between June 15th and 19th, the LHC went into a "technical stop" – and the LHC schedule also says that from last Saturday to this Sunday the LHC is scrubbing for the 50 ns operation – but once it is over, the CMS should be doing all of its work again.

Starting from next Monday, the LHC should be ramping up the intensity with the 50 ns beam for 3 weeks. July 20th-24th will be "machine development". The following 14 days will be "scrubbing for 25 ns operation". Intensity ramp-up with the 25 ns beam will begin on August 8th.

by Luboš Motl (noreply@blogger.com) at June 24, 2015 08:48 AM

June 23, 2015

Symmetrybreaking - Fermilab/SLAC

Bringing neutrino research back to India

The India-based Neutrino Observatory will provide a home base for Indian particle physicists.

Pottipuram, a village in southern India, is mostly known for its farming. Goats graze on the mountains and fields yield modest harvests of millets and pulses.

Earlier this year, Pottipuram became known for something else: The government announced that, nearby, scientists will construct a new research facility that will advance particle physics in India.

A legacy of discovery

From 1951 to 1992, Indian scientists studied neutrinos and muons in a facility located deep within what was then one of the largest active gold mines in the world, the Kolar Gold Fields.

The lab hosted international collaborations, including one that discovered atmospheric neutrinos—elusive particles that shoot out of collisions between cosmic rays and our atmosphere. The underground facility also served as a training ground for young and aspiring particle physicists.

But when the gold reserves dwindled, the mining operations pulled out. And the lab, unable to maintain a vast network of tunnels on its own, shut down, too. Indian particle physicists who wanted to do science in their country had to switch to a related field, such as nuclear physics or materials science.

Almost immediately after the closure of the Kolar lab, plans began to take shape to build a new place to study fundamental particles and forces. Physicist Naba Mondal of the Tata Institute of Fundamental Research in Mumbai, who had researched at Kolar, worked with other scientists to build a collaboration—informally at first, and then officially in 2002. They now count as partners scientists from 21 universities and research institutions across India.

The facility they plan to build is called the India-based Neutrino Observatory.

Mondal, who leads the INO collaboration, has high hopes the facility will give Indian particle physics students the chance to do first-class research at home.

“They can't all go to CERN or Fermilab,” he says. “If we want to attract them to science, we have to have experimental facilities right here in the country.”

INO collaboration meeting 2014, at IICHEP, Madurai.

Courtesy of: India-based Neutrino Observatory

Finding a place

INO will house large detectors that will catch particles called neutrinos.

Neutrinos are produced by a variety of processes in nature and hardly ever interact with other matter; they are constantly streaming through us. But they’re not the only particles raining down on us from space. There are also protons and atomic nuclei coming from cosmic rays.

To study neutrinos, scientists need a way to pick them out from the crowd. INO scientists want to do this by building their detectors inside a mountain, shielded by layers of rock that can stop cosmic ray particles but not the slippery neutrinos.

Rock is especially dense in the remote, monolithic hills near Pottipuram. So, the scientists set about asking the village for their blessing to build there.

This posed a challenge to Mondal. India is a large country with 22 recognized regional languages. Mondal grew up in West Bengal, near Kolkata, more than 1200 miles away from Pottipuram and speaks Bengali, Hindi and English. The residents of Pottipuram speak Tamil.

Luckily, some of Mondal’s colleagues speak Tamil, too.

One such colleague is D. Indumathi of the Institute of Mathematical Sciences in Chennai. Indumathi spent more than 5 years coordinating a physics subgroup working on designing INO’s proposed main detector, a 50,000-ton, magnetized stack of iron plates and scintillator. But her abilities and interests extend beyond the pure physics of the project.

“I like talking about science to people,” she says. “I get very involved, and I am very passionate about it. So in that sense [outreach] was also a role that I could naturally take up.”

She spent about one year talking with residents of Pottipuram, fielding questions about whether the experiment would produce a radiation hazard (it won’t) and whether the goats would continue to have access to the mountain (they will). In the end, the village consented to the construction.

Courtesy of: India-based Neutrino Observatory

Neutrino physics for a new generation

Young people have shown the most interest in INO, Indumathi says. Students in both college and high school are tantalized by these particles that might shed light on as-yet unanswered questions about the evolution of the universe. They enjoy discussing research ideas that haven’t even found their way into their textbooks.

“[There] is a tremendous feeling of wanting to participate—to be a part of this lab that is going to come up in their midst,” Indumathi says.

Student S. Pethuraj, from another village in Tamil Nadu, first heard about INO when he attended a series of lectures by Mondal and other scientists in his second year of what was supposed to be a terminal master’s degree at Madurai Kamaraj University.

Pethuraj connected with the professors and arranged to take a winter course from them on particle physics.

“After their lectures my mind was fully trapped in particle physics,” he says.

Pethuraj applied and was accepted to a PhD program at the Tata Institute of Fundamental Research expressly designed as preparation for INO studies. He is now completing coursework.

“INO is giving me cutting-edge research experience in experimental physics and instrumentation,” he says. “This experience creates in me a lot of confidence in handling and understanding the experiments.”

Other young people are getting involved with engineering at INO. The collaboration has already hired recent graduates to help design the many intricate detector systems involved in such a massive undertaking.

The impact of the INO will only increase after its construction, especially for those who will have the lab in their backyard, Mondal says.

“The students from the area—they will visit and talk to the scientists there and get an idea about how science is being done,” he says. “That will change even the culture of doing science.”

 

Like what you see? Sign up for a free subscription to symmetry!

by Troy Rummler at June 23, 2015 01:00 PM

June 22, 2015

Lubos Motl - string vacua and pheno

Strings 2015: India


I think that the cost-and-benefit analysis implies that it's not a good idea for me to describe most of the talks at the annual string theorists' conference. If there are volunteers, especially among the participants, I will be happy to publish their observations, however.




The conference is taking place this week, from Monday through Friday, at the Tata Institute in Bengalúru, India. I usually prefer the traditional "colonial" European names of major non-European cities – Peking, New Amsterdam etc. – but the Indian-Czech name Bengalúru simply sounds better than its old English parody (up to 2006), Bangalore. ;-)




Here are the basic URLs:
Strings 2015: main web page
Strings 2015: talk titles and links to PDF files (and links to separate pages of the talks, with coordinates etc.)
I am sure the readers and clickers who know how to read and click may find the other pages once they read and click. ;-) I have looked at several of the PDF files that have already appeared. They are very interesting. It is not yet clear to me whether videos will be posted somewhere, too.

There are rumors that a well-known author is just completing the book How the Conservatives [Not Hippies] Saved Physics but I am afraid that you shouldn't trust everything on the Internet. ;-)

by Luboš Motl (noreply@blogger.com) at June 22, 2015 05:08 PM

arXiv blog

Data Mining Reveals the Surprising Factors Behind Successful Movies

The secret to making profitable movies will amaze you. (Spoiler: it’s not hiring top box office stars.)

June 22, 2015 04:00 PM

Quantum Diaries

LARP completes first successful test of High-Luminosity LHC coil

This article appeared in Fermilab Today on June 22, 2015.


Steve Gould of the Fermilab Technical Division prepares a cold test of a short quadrupole coil. The coil is of the type that would go into the High-Luminosity LHC. Photo: Reidar Hahn

Last month, a group collaborating across four national laboratories completed the first successful tests of a superconducting coil in preparation for the future high-luminosity upgrade of the Large Hadron Collider, or HL-LHC. These tests indicate that the magnet design may be adequate for its intended use.

Physicists, engineers and technicians of the U.S. LHC Accelerator Research Program (LARP) are working to produce the powerful magnets that will become part of the HL-LHC, scheduled to start up around 2025. The plan for this upgrade is to increase the particle collision rate, or luminosity, by approximately a factor of 10, thereby expanding the collider’s physics reach by delivering 10 times more data.

“The upgrade will help us get closer to new physics. If we see something with the current run, we’ll need more data to get a clear picture. If we don’t find anything, more data may help us to see something new,” said Technical Division’s Giorgio Ambrosio, leader of the LARP magnet effort.

LARP is developing more advanced quadrupole magnets, which are used to focus particle beams. These magnets will have larger beam apertures and the ability to produce higher magnetic fields than those at the current LHC.

The Department of Energy established LARP in 2003 to contribute to LHC commissioning and prepare for upgrades. LARP includes Brookhaven National Laboratory, Fermilab, Lawrence Berkeley National Laboratory and SLAC. Its members began developing the technology for advanced large-aperture quadrupole magnets around 2004.

The superconducting magnets currently in use at the LHC are made from niobium titanium, which has proven to be a very effective material to date. However, they will not be able to support the higher magnetic fields and larger apertures the collider needs to achieve higher luminosities. To push these limits, LARP scientists and engineers turned to a different material, niobium tin.

Niobium tin was discovered before niobium titanium. However, it has not yet been used in accelerators because, unlike niobium titanium, niobium tin is very brittle, making it susceptible to mechanical damage. To be used in high-energy accelerators, these magnets need to withstand large amounts of force, making them difficult to engineer.

LARP worked on this challenge for almost 10 years and went through a number of model magnets before it successfully started the fabrication of coils for 150-millimeter-aperture quadrupoles. Four coils are required for each quadrupole.

LARP and CERN collaborated closely on the design of the coils. After the first coil was built in the United States earlier this year, the LARP team successfully tested it in a magnetic mirror structure. The mirror structure makes possible tests of individual coils under magnetic field conditions similar to those of a quadrupole magnet. At 1.9 Kelvin, the coil exceeded 19 kiloamps, 15 percent above the operating current.
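A quick sanity check of those numbers (my own arithmetic, not from the article): if 19 kiloamps is 15 percent above the operating current, the implied nominal current is about 16.5 kiloamps.

    # Back out the nominal operating current implied by the quoted figures.
    test_current_kA = 19.0   # current reached in the mirror-structure test
    margin = 0.15            # "15 percent above the operating current"
    nominal_kA = test_current_kA / (1.0 + margin)
    print(f"implied nominal operating current ~ {nominal_kA:.1f} kA")  # ~16.5 kA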

The team also demonstrated that the coil was protected from the stresses and heat generated during a quench, the rapid transition from superconducting to normal state.

“The fact that the very first test of the magnet was successful was based on the experience of many years,” said TD’s Guram Chlachidze, test coordinator for the magnets. “This knowledge and experience is well recognized by the magnet world.”

Over the next few months, LARP members plan to test the completed quadrupole magnet.

“This was a success for both the people building the magnets and the people testing the magnets,” said Fermilab scientist Giorgio Apollinari, head of LARP. “We still have a mountain to climb, but now we know we have all the right equipment at our disposal and that the first step was in the right direction.”

Diana Kwon

by Fermilab at June 22, 2015 03:56 PM

June 21, 2015

Tommaso Dorigo - Scientificblogging

Seeing Jupiter In Daylight
Have you ever seen Venus in full daylight? It's a fun experience. Of course we are accustomed to seeing even a small crescent Moon in daylight - it is large and, although of the same colour as the clouds, it cannot be missed in a clear sky. But Venus is a small dot, and although it can be quite bright after sunset or before dawn, during the day it is just an inconspicuous, tiny white dot which you never see, unless you look exactly in its direction.

read more

by Tommaso Dorigo at June 21, 2015 09:07 PM

The n-Category Cafe

What's so HoTT about Formalization?

In my last post I promised to follow up by explaining something about the relationship between homotopy type theory (HoTT) and computer formalization. (I’m getting tired of writing “publicity”, so this will probably be my last post for a while in this vein — for which I expect that some readers will be as grateful as I).

As a potential foundation for mathematics, HoTT/UF is a formal system existing at the same level as set theory (ZFC) and first-order logic: it’s a collection of rules for manipulating syntax, into which we can encode most or all of mathematics. No such formal system requires computer formalization, and conversely any such system can be used for computer formalization. For example, the HoTT Book was intentionally written to make the point that HoTT can be done without a computer, while the Mizar project has formalized huge amounts of mathematics in a ZFC-like system.

Why, then, does HoTT/UF seem so closely connected to computer formalization? Why do the overwhelming majority of publications in HoTT/UF come with computer formalizations, when such is still the exception rather than the rule in mathematics as a whole? And why are so many of the people working on HoTT/UF computer scientists or advocates of computer formalization?

To start with, note that the premise of the third question partially answers the first two. If we take it as a given that many homotopy type theorists care about computer formalization, then it’s only natural that they would be formalizing most of their papers, creating a close connection between the two subjects in people’s minds.

Of course, that forces us to ask why so many homotopy type theorists are into computer formalization. I don’t have a complete answer to that question, but here are a few partial ones.

  1. HoTT/UF is built on type theory, and type theory is closely connected to computers, because it is the foundation of typed functional programming languages like Haskell, ML, and Scala (and, to a lesser extent, less-functional typed programming languages like Java, C++, and so on). Thus, computer proof assistants built on type theory are well-suited to formal proofs of the correctness of software, and thus have received a lot of work from the computer science end. Naturally, therefore, when a new kind of type theory like HoTT comes along, the existing type theorists will be interested in it, and will bring along their predilection for formalization.

  2. HoTT/UF is by default constructive, meaning that we don’t need to assert the law of excluded middle or the axiom of choice unless we want to. Of course, most or all formal systems have a constructive version, but with type theories the constructive version is the “most natural one” due to the Curry-Howard correspondence. Moreover, one of the intriguing things about HoTT/UF is that it allows us to prove certain things constructively that in other systems require LEM or AC. Thus, it naturally attracts attention from constructive mathematicians, many of whom are interested in computable mathematics (i.e. when something exists, can we give an algorithm to find it?), which is only a short step away from computer formalization of proofs.

  3. One could, however, try to make similar arguments from the other side. For instance, HoTT/UF is (at least conjecturally) an internal language for higher topos theory and homotopy theory. Thus, one might expect it to attract an equal influx of higher topos theorists and homotopy theorists, who don’t care about computer formalization. Why hasn’t this happened? My best guess is that at present the traditional 1-topos theorists seem to be largely disjoint from the higher topos theorists. The former care about internal languages, but not so much about higher categories, while for the latter it is reversed; thus, there aren’t many of us in the intersection who care about both and appreciate this aspect of HoTT. But I hope that over time this will change.

  4. Another possible reason why the influx from type theory has been greater is that HoTT/UF is less strange-looking to type theorists (it’s just another type theory) than to the average mathematician. In the HoTT Book we tried to make it as accessible as possible, but there are still a lot of tricky things about type theory that one seemingly has to get used to before being able to appreciate the homotopical version.

  5. Another sociological effect is that Vladimir Voevodsky, who introduced the univalence axiom and is a Fields medalist with “charisma”, is also a very vocal and visible advocate of computer formalization. Indeed, his personal programme that he calls “Univalent Foundations” is to formalize all of mathematics using a HoTT-like type theory.

  6. Finally, many of us believe that HoTT is actually the best formal system extant for computer formalization of mathematics. It shares most of the advantages of type theory, such as the above-mentioned close connection to programming, the avoidance of complicated ZF-encodings for even basic concepts like natural numbers, and the production of small easily-verifiable “certificates” of proof correctness. (The advantages of some type theories that HoTT doesn’t yet share, like a computational interpretation, are work in progress.) But it also rectifies certain infelicitous features of previously existing type theories, by specifying what equality of types means (univalence), including extensionality for functions and truth values, providing well-behaved quotient types (HITs), and so on, making it more comfortable for ordinary mathematicians. (I believe that historically, this was what led Voevodsky to type theory and univalence in the first place.)

There are probably additional reasons why HoTT/UF attracts more people interested in computer formalization. (If you can think of others, please share them in the comments.) However, there is more to it than this, as one can guess from the fact that even people like me, coming from a background of homotopy theory and higher category theory, tend to formalize a lot of our work on HoTT. Of course there is a bit of a “peer pressure” effect: if all the other homotopy type theorists formalize their papers, then it starts to seem expected in the subject. But that’s far from the only reason; here are some “real” ones.

  1. Computer formalization of synthetic homotopy theory (the “uniquely HoTT” part of HoTT/UF) is “easier”, in certain respects, than most computer formalization of mathematics. In particular, it requires less infrastructure and library support, because it is “closer to the metal” of the underlying formal system than is usual for actually “interesting” mathematics. Thus, formalizing it still feels more like “doing mathematics” than like programming, making it more attractive to a mathematician. You really can open up a proof assistant, load up no pre-written libraries at all, and in fairly short order be doing interesting HoTT. (Of course, this doesn’t mean that there is no value in having libraries and in thinking hard about how best to design those libraries, just that the barrier to entry is lower.)

  2. Precisely because, as mentioned above, type theory is hard to grok for a mathematician, there is a significant benefit to using a proof assistant that will automatically tell you when you make a mistake. In fact, messing around with a proof assistant is one of the best ways to learn type theory! I posted about this almost exactly four years ago.

  3. I think the previous point goes double for homotopy type theory, because it is an unfamiliar new world for almost everyone. The types of HoTT/UF behave kind of like spaces in homotopy theory, but they have their own idiosyncrasies that it takes time to develop an intuition for. Playing around with a proof assistant is a great way to develop that intuition. It’s how I did it.

  4. Moreover, because that intuition is unique and recently developed for all of us, we may be less confident in the correctness of our informal arguments than we would be in classical mathematics. Thus, even an established “homotopy type theorist” may be more likely to want the comfort of a formalization.

  5. Finally, there is an additional benefit to doing mathematics with a proof assistant (as opposed to formalizing mathematics that you’ve already done on paper), which I think is particularly pronounced for type theory and homotopy type theory. Namely, the computer always tells you what you need to do next: you don’t need to work it out for yourself. A central part of type theory is inductive types, and a central part of HoTT is higher inductive types; both of which are characterized by an induction principle (or “eliminator”) which says that in order to prove a statement of the form “for all x : W, P(x)”, it suffices to prove some number of other statements involving the predicate P. The most familiar example is induction on the natural numbers, which says that in order to prove “for all n ∈ ℕ, P(n)” it suffices to prove P(0) and “for all n ∈ ℕ, if P(n) then P(n+1)”. When using proof by induction, you need to isolate P as a predicate on n, specialize to n = 0 to check the base case, write down P(n) as the inductive hypothesis, then replace n by n+1 to find what you have to prove in the induction step. The students in an intro to proofs class have trouble with all of these steps, but professional mathematicians have learned to do them automatically. However, for a general inductive or higher inductive type, there might instead be four, six, ten, or more separate statements to prove when applying the induction principle, many of which involve more complicated transformations of P, and it’s common to have to apply several such inductions in a nested way. Thus, when doing HoTT on paper, a substantial amount of time is sometimes spent simply figuring out what has to be proven. But a proof assistant equipped with a unification algorithm can do that for you automatically: you simply say “apply induction for the type W” and it immediately decides what P is and presents you with a list of the remaining goals that have to be proven.
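To make that last point concrete, here is a minimal sketch in Lean 4 (my own toy example; Lean is a type-theoretic proof assistant, though not the one most HoTT work was being done in at the time). The `induction` tactic applies the induction principle for the natural numbers, works out the predicate P by unification, and hands back the base case and the inductive step as the remaining goals.

    -- A toy example: prove 0 + n = n by induction on n.
    -- The tactic infers the motive P(n) := (0 + n = n) automatically and
    -- leaves exactly two goals: P(0) and P(k) -> P(k+1).
    theorem my_zero_add (n : Nat) : 0 + n = n := by
      induction n with
      | zero      => rfl                      -- base case: P(0)
      | succ k ih => rw [Nat.add_succ, ih]    -- inductive step: P(k) -> P(k+1)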

To summarize this second list, then, I think it’s fair to say that compared to formalizing traditional mathematics, formalizing HoTT tends to give more benefit at lower cost. However, that cost is still high, especially when you take into account the time spent learning to use a proof assistant, which is often not the most user-friendly of software. This is why I always emphasize that HoTT can perfectly well be done without a computer, and why we wrote the book the way we did.

by shulman (viritrilbia@gmail.com) at June 21, 2015 06:01 AM

June 20, 2015

ZapperZ - Physics and Physicists

Quantum Superposition Destroyed By Gravitational Time Dilation?
This is another interesting take on why we see our world classically and not quantum mechanically. Gravitational time dilation is enough to destroy coherent states that maintain superposition.

With this premise, the team worked out that even the Earth's gravitational field is strong enough to cause decoherence in quite small objects across measurable timescales. The researchers calculated that an object that weighs a gram and exists in two quantum states, separated vertically by a thousandth of a millimetre, should decohere in around a millisecond. 

I think this is similar to Penrose's claim that gravity is responsible for decoherence of quantum states. It will be interesting if anyone can experimentally verify this latest theoretical finding.
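For a rough sense of where a number like “a millisecond” can come from, here is a back-of-the-envelope Python estimate. To be clear, this is my own order-of-magnitude reasoning, not the authors' formula: it assumes the which-path dephasing is driven by thermal energy fluctuations of order sqrt(3N)·k_B·T and by the gravitational redshift difference g·Δx/c² between the two branches, with decoherence setting in when the accumulated phase spread reaches about one radian.

    # Order-of-magnitude estimate only; all inputs are assumptions chosen to
    # mimic the gram-scale example quoted above (room temperature, ~1 micron
    # vertical separation, ~1 gram of a light element).
    import math

    hbar = 1.054e-34   # J s
    k_B  = 1.381e-23   # J / K
    c    = 2.998e8     # m / s
    g    = 9.81        # m / s^2

    T  = 300.0         # K, assumed internal temperature
    dx = 1e-6          # m, vertical separation of the two branches
    N  = 2.2e22        # atoms in roughly one gram of a light element

    dE = math.sqrt(3 * N) * k_B * T          # typical thermal energy fluctuation
    dephasing_rate = dE * g * dx / (hbar * c**2)
    t_dec = 1.0 / dephasing_rate

    print(f"estimated decoherence time ~ {t_dec:.1e} s")  # comes out near 1e-3 s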

Zz.

by ZapperZ (noreply@blogger.com) at June 20, 2015 02:26 AM

June 19, 2015

Tommaso Dorigo - Scientificblogging

ATLAS Pictures Colour Flow Between Quarks
In 1992 I started working on my undergraduate thesis, the search for all-hadronic top quark pairs in CDF data. The CDF experiment was just starting to collect proton-antiproton collision data with the brand-new silicon vertex detector in what was called Run 1a, which ended in 1993 and produced the data on which the first evidence claim of top quarks was based. But I was still working on the Run 0 data: 4 inverse picobarns of collisions - the very first collisions at the unprecedented energy of 1.8 TeV. And I was not alone: many analyses of those data were still in full swing.

read more

by Tommaso Dorigo at June 19, 2015 06:46 PM