Particle Physics Planet


March 05, 2015

Emily Lakdawalla - The Planetary Society Blog

Space Advocates Descend on Capitol Hill
The Space Exploration Alliance wrapped up its most recent 'legislative blitz' last week. Nearly 70 individuals participated in the democratic process, speaking to nearly 168 different offices in Congress. Nearly half of those individuals were Planetary Society members.

March 05, 2015 12:52 AM

March 04, 2015

Christian P. Robert - xi'an's og

accelerating Metropolis-Hastings algorithms by delayed acceptance

Marco Banterle, Clara Grazian, Anthony Lee, and myself just arXived our paper “Accelerating Metropolis-Hastings algorithms by delayed acceptance”, which is a major revision and upgrade of our “Delayed acceptance with prefetching” paper of last June. We had submitted that earlier paper at the last minute to NIPS, but it did not get accepted. The difference with this earlier version is the inclusion of convergence results, in particular that, while the original Metropolis-Hastings algorithm dominates the delayed version in Peskun ordering, the latter can improve upon the original for an appropriate choice of the early stage acceptance step. We thus included a new section on optimising the design of the delayed step, by picking the optimal scaling à la Roberts, Gelman and Gilks (1997) in the first step and by proposing a ranking of the factors in the Metropolis-Hastings acceptance ratio that speeds up the algorithm. The algorithm thus becomes adaptive. Compared with the earlier version, we have not pursued the second thread of prefetching as much, simply mentioning that prefetching and delayed acceptance could be merged. We have also included a section on the alternative suggested by Philip Nutzman on the ‘Og of using a growing ratio rather than individual terms, the advantage being that the probability of acceptance stabilises as the number of terms grows, with the drawback being that expensive terms are not always computed last. In addition to our logistic and mixture examples, we also study in this version the MALA algorithm, since we can postpone computing the ratio of the proposals till the second step. The gain observed in one experiment is of the order of a ten-fold higher efficiency. By comparison, and in answer to one comment on Andrew’s blog, we did not cover the HMC algorithm, since the preliminary acceptance step would require the construction of a proxy to the acceptance ratio, in order to avoid computing a costly number of derivatives in the discretised Hamiltonian integration.
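
To make the delayed acceptance idea concrete, here is a minimal sketch (my own toy illustration, not the implementation in the paper): the target factorises as π(x) ∝ π1(x)π2(x) with π1 cheap and π2 expensive, and a proposal only pays for the expensive factor if it survives a first accept/reject step on the cheap one; the product of the two stage-wise acceptance probabilities is what preserves the correct stationary distribution.

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy target pi(x) = pi1(x) * pi2(x): pi1 is a cheap factor, pi2 stands in
    # for an expensive likelihood term (here both are just Gaussians).
    def log_pi1(x):            # cheap factor, screened first
        return -0.5 * x**2

    def log_pi2(x):            # "expensive" factor, evaluated only if stage 1 accepts
        return -0.5 * (x - 1.0)**2 / 4.0

    def delayed_acceptance_mh(n_iter, x0=0.0, step=1.0):
        x = x0
        chain = np.empty(n_iter)
        n_expensive = 0
        for t in range(n_iter):
            y = x + step * rng.standard_normal()   # symmetric random-walk proposal
            # stage 1: accept/reject on the cheap factor only
            if np.log(rng.random()) < log_pi1(y) - log_pi1(x):
                n_expensive += 1                   # stage 2: only now pay for the expensive factor
                if np.log(rng.random()) < log_pi2(y) - log_pi2(x):
                    x = y
            chain[t] = x
        return chain, n_expensive

    chain, n_exp = delayed_acceptance_mh(50_000)
    print("expensive evaluations per iteration:", n_exp / 50_000)
    print("sample mean (exact value is 0.2 for this toy target):", chain.mean())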


Filed under: Books, Statistics, University life Tagged: Andrew Gelman, Hamiltonian Monte Carlo, MALA, Metropolis-Hastings algorithm, Montréal, NIPS, Peskun ordering, prefetching, University of Warwick

by xi'an at March 04, 2015 11:14 PM

ZapperZ - Physics and Physicists

Physics of Crashes
This type of class is an excellent opportunity to teach students physics and traffic safety at the same time.

Teacher Sheryl Cordivari said teaching students about Newton’s laws, inertia and the science behind how seat belts and airbags save lives helps prepare them for problem solving in real life.
“I go over the equations in class … I think this helps show them that what I’m teaching them is not just to make their lives difficult and drive them crazy, but to show them this is real and has real-life applications,” she said.
.
.
.
Although high speeds can be a factor in many accidents, even crashes at 40 mph are extremely dangerous.

“Driving at 40 miles per hour, does it feel dangerous?” he asked the students. “But would you jump off a five-story building?”
The speeds at impact are similar, he said.
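
Indeed, the comparison survives a one-line free-fall estimate (taking a story as roughly 3 metres, which is my assumption, and ignoring air resistance):

    import math

    g = 9.8            # m/s^2
    story = 3.0        # metres per story (rough assumption)
    h = 5 * story      # a five-story building

    v = math.sqrt(2 * g * h)                       # free-fall impact speed
    print(f"{v:.1f} m/s  =  {v * 2.237:.0f} mph")  # about 17 m/s, i.e. roughly 38 mph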
There are a lot of things here that the students can relate to, and that is the best way to teach new things if you want them to sink in. In fact, I know of many adults who could stand to have such lessons as well.

Zz.

by ZapperZ (noreply@blogger.com) at March 04, 2015 08:57 PM

John Baez - Azimuth

Melting Permafrost (Part 4)

 

Russian scientists have recently found more new craters in Siberia, apparently formed by explosions of methane. Three were found last summer. They looked for more using satellite photos… and found more!

“What I think is happening here is, the permafrost has been acting as a cap or seal on the ground, through which gas can’t permeate,” says Paul Overduin, a permafrost expert at the Alfred Wegener Institute in Germany. “And it reaches a particular temperature where there’s not enough ice in it to act that way anymore. And then gas can rush out.”

It’s rather dramatic. Some Russian villagers have even claimed to see flashes in the distance when these explosions occur. But how bad is it?

The Siberian Times

An English-language newspaper called The Siberian Times has a good article about these craters, which I’ll quote extensively:

• Anna Liesowska, Dozens of new craters suspected in northern Russia, The Siberian Times, 23 February 2015.


B1 – the famous Yamal hole, 30 kilometers from Bovanenkovo, spotted in 2014 by helicopter pilots. Picture: Marya Zulinova, Yamal regional government press service.

Respected Moscow scientist Professor Vasily Bogoyavlensky has called for ‘urgent’ investigation of the new phenomenon amid safety fears.

Until now, only three large craters were known about in northern Russia with several scientific sources speculating last year that heating from above the surface due to unusually warm climatic conditions, and from below, due to geological fault lines, led to a huge release of gas hydrates, so causing the formation of these craters in Arctic regions.

Two of the newly-discovered large craters—also known as funnels to scientists—have turned into lakes, revealed Professor Bogoyavlensky, deputy director of the Moscow-based Oil and Gas Research Institute, part of the Russian Academy of Sciences.

Examination using satellite images has helped Russian experts understand that the craters are more widespread than was first realised, with one large hole surrounded by as many as 20 mini-craters, The Siberian Times can reveal.


Four Arctic craters: B1 – the famous Yamal hole 30 kilometers from Bovanenkovo, B2 – a recently detected crater 10 kilometers to the south of Bovanenkovo, B3 – a crater located 90 kilometers from Antipayuta village, B4 – a crater located near Nosok village, in the north of the Krasnoyarsk region, near the Taimyr Peninsula. Picture: Vasily Bogoyavlensky.

‘We know now of seven craters in the Arctic area,’ he said. ‘Five are directly on the Yamal peninsula, one in Yamal Autonomous district, and one is on the north of the Krasnoyarsk region, near the Taimyr peninsula.

‘We have exact locations for only four of them. The other three were spotted by reindeer herders. But I am sure that there are more craters on Yamal, we just need to search for them.

‘I would compare this with mushrooms: when you find one mushroom, be sure there are few more around. I suppose there could be 20 to 30 craters more.’

He is anxious to investigate the craters further because of serious concerns for safety in these regions.

The study of satellite images showed that near the famous hole, located 30 kilometres from Bovanenkovo, there are two potentially dangerous objects, where gas emission can occur at any moment.


Satellite image of the site before the forming of the Yamal hole (B1). K1 and the red outline show the hillock (pingo) formed before the gas emission. Yellow outlines show the potentially dangerous objects. Picture: Vasily Bogoyavlensky.

He warned: ‘These objects need to be studied, but it is rather dangerous for the researchers. We know that there can occur a series of gas emissions over an extended period of time, but we do not know exactly when they might happen.

‘For example, you all remember the magnificent shots of the Yamal crater in winter, made during the latest expedition in November 2014. But do you know that Vladimir Pushkarev, director of the Russian Centre of Arctic Exploration, was the first man in the world who went down the crater of gas emission?

‘More than this, it was very risky, because no one could guarantee there would not be new emissions.’

Professor Bogoyavlensky told The Siberian Times: ‘One of the most interesting objects here is the crater that we mark as B2, located 10 kilometres to the south of Bovanenkovo. On the satellite image you can see that it is one big lake surrounded by more than 20 small craters filled with water.

‘Studying the satellite images we found out that initially there were no craters nor a lake. Some craters appeared, then more. Then, I suppose that the craters filled with water and turned to several lakes, then merged into one large lake, 50 by 100 metres in diameter.

‘This big lake is surrounded by the network of more than 20 ‘baby’ craters now filled with water and I suppose that new ones could appear last summer or even now. We now counting them and making a catalogue. Some of them are very small, no more than 2 metres in diameter.’

‘We have not been at the spot yet,’ he said. ‘Probably some local reindeer herders were there, but so far no scientists.’

He explained: ‘After studying this object I am pretty sure that there was a series of gas emissions over an extended period of time. Sadly, we do not know, when exactly these emissions occur, i.e. mostly in summer, or in winter too. We see only the results of this emissions.’

The object B2 is now attracting special attention from the researchers as they seek to understand and explain the phenomenon. This is only 10km from Bovanenkovo, a major gas field, developed by Gazprom, in the Yamalo-Nenets Autonomous Okrug. Yet older satellite images do not show the existence of a lake, nor any craters, in this location.

The new craters constantly forming on Yamal are not the only sign that the process of gas emission is actively ongoing.

Professor Bogoyavlensky shows a picture of one of the Yamal lakes, taken by him from a helicopter, and points to the whitish haze on its surface.


Yamal lake with traces of gas emissions. Picture: Vasily Bogoyavlensky.

He commented: ‘This haze that you see on the surface shows that gas seeps that go from the bottom of the lake to the surface. We call this process ‘degassing’.

‘We do not know, if there was a crater previously and then turned to lake, or the lake formed during some other process. More important is that the gases from within are actively seeping through this lake.

‘Degassing was revealed on the territory of Yamal Autonomous District about 45 years ago, but now we think that it can give us some clues about the formation of the craters and gas emissions. Anyway, we must research this phenomenon urgently, to prevent possible disasters.’

Professor Bogoyavlensky stressed: ‘For now, we can speak only about the results of our work in the laboratory, using the images from space.

‘No one knows what is happening in these craters at the moment. We plan a new expedition. Also we want to put not less than four seismic stations in Yamal district, so they can fix small earthquakes, that occur when the crater appears.

‘In two cases locals told us that they felt earth tremors. The nearest seismic station was yet too far to register these tremors.

‘I think that at the moment we know enough about the crater B1. There were several expeditions, we took probes and made measurements. I believe that we need to visit the other craters, namely B2, B3 and B4, and then visit the rest three craters, when we will know their exact location. It will give us more information and will bring us closer to understanding the phenomenon.’

He urged: ‘It is important not to scare people, but to understand that it is a very serious problem and we must research this.’

In an article for Drilling and Oil magazine, Professor Bogoyavlensky said the parapet of these craters suggests an underground explosion.

‘The absence of charred rock and traces of significant erosion due to possible water leaks speaks in favour of mighty eruption (pneumatic exhaust) of gas from a shallow underground reservoir, which left no traces on soil which contained a high percentage of ice,’ he wrote.

‘In other words, it was a gas-explosive mechanism that worked there. A concentration of 5-to-16% of methane is explosive. The most explosive concentration is 9.5%.’

Gas probably concentrated underground in a cavity ‘which formed due to the gradual melting of buried ice’. Then ‘gas was replacing ice and water’.

‘Years of experience has shown that gas emissions can cause serious damage to drilling rigs, oil and gas fields and offshore pipelines,’ he said. ‘Yamal craters are inherently similar to pockmarks.

‘We cannot rule out new gas emissions in the Arctic and in some cases they can ignite.’

This was possible in the case of the crater found at Antipayuta, on the Yamal peninsula.

‘The Antipayuta residents told how they saw some flash. Probably the gas ignited when appeared the crater B4, near Taimyr peninsula. This shows us, that such explosion could be rather dangerous and destructive.

‘We need to answer now the basic questions: what areas and under what conditions are the most dangerous? These questions are important for safe operation of the northern cities and infrastructure of oil and gas complexes.’


Crater B3, located 90 kilometres from Antipayuta village, Yamal district. Picture: local residents.


Crater B4, located near Nosok village, in the north of the Krasnoyarsk region, near the Taimyr Peninsula. Picture: local residents.

How bad is it?

Since methane is a powerful greenhouse gas, some people are getting nervous. If global warming releases the huge amounts of methane trapped under permafrost, will that create more global warming? Could we be getting into a runaway feedback loop?

The Washington Post has a good article telling us to pay attention, but not panic:

• Chris Mooney, Why you shouldn’t freak out about those mysterious Siberian craters, The Washington Post, 2 March 2015.

David Archer of the University of Chicago, a famous expert on climate change and the carbon cycle, took a look at these craters and did some quick calculations. He estimated that “it would take about 20,000,000 such eruptions within a few years to generate the standard Arctic Methane Apocalypse that people have been talking about.”

More importantly, people are measuring the amount of methane in the air. We know how it’s doing. For example, you can make graphs of methane concentration here:

• Earth System Research Laboratory, Global Monitoring Division, Data visualization.

Click on a northern station like Alert (a suitably scary name for a military base and research station) in Nunavut—the huge northern territory in Canada that’s so inhospitable they let the Inuit run it.

(Alert is on the very top, near Greenland.)

Choose Carbon cycle gases from the menu at right, and click on Time series. You’ll go to another page, and then choose Methane—the default choice is carbon dioxide. Go to the bottom of the page and click Submit and you’ll get a graph like this:

Between 1985 and 2015, methane has gone up from about 1750 to about 1900 nanomoles per mole (parts per billion). That’s a big increase—but not a sign of incipient disaster.

A larger perspective might help. Apparently from 1750 to 2007 the atmospheric CO2 concentration increased by about 40%, while the methane concentration increased by about 160%. The amount of additional radiative forcing due to CO2 is about 1.6 watts per square meter, while for methane it’s about 0.5:

Greenhouse gas: natural and anthropogenic sources, Wikipedia.
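
As a rough cross-check of the CO2 number, the standard simplified expression ΔF = 5.35 ln(C/C₀) W/m² (Myhre et al. 1998) gives roughly the same answer for a 40% rise in concentration; the snippet below is just that arithmetic, not a replacement for the more careful figures cited above. (Methane's simplified expression involves a square root and an overlap term with N₂O, so I won't reproduce it here.)

    import math

    # Simplified radiative-forcing expression for CO2 (Myhre et al. 1998):
    #   delta_F = 5.35 * ln(C / C0)   [W/m^2]
    C0 = 280.0              # roughly pre-industrial CO2, in ppm
    C = 1.40 * C0           # "about 40%" higher, as quoted above

    delta_F = 5.35 * math.log(C / C0)
    print(f"CO2 forcing for a 40% increase: {delta_F:.2f} W/m^2")  # ~1.8, same ballpark as 1.6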

So, methane is significant, and increasing fast. So far CO2 is the 800-pound gorilla in the living room. But I’m glad Russian scientists are doing things like this:





The latest expedition to Yamal crater was initiated by the Russian Center of Arctic Exploration in early November 2014. The researchers were first in the world to enter this crater. Pictures: Vladimir Pushkarev/Russian Center of Arctic Exploration

Previous posts

For previous posts in this series, see:

Melting Permafrost (Part 1).

Melting Permafrost (Part 2).

Melting Permafrost (Part 3).


by John Baez at March 04, 2015 08:43 PM

Georg von Hippel - Life on the lattice

Perspectives and Challenges in Lattice Gauge Theory, Day Four
Today was dedicated to topics and issues related to finite temperature and density. The first speaker of the morning was Prasad Hegde, who talked about the QCD phase diagram. While the general shape of the Columbia plot seems to be fairly well-established, there is now a lot of controversy over the details. For example, while the two-flavour chiral limit seems to be well described by either the O(4) or O(2) universality class, it isn't currently possible to exclude that it might be Z(2); and while the three-flavour transition appears to be known to be Z(2), simulations with staggered and Wilson quarks give disagreeing results for its features. Another topic that gets a lot of attention is the question of U(1)A restoration; of course, U(1)A is broken by the axial anomaly, which arises from the path integral measure and is present at all temperatures, so it cannot be expected to be restored in the same sense that chiral symmetry is, but it might be that as the temperature gets larger, the influence of the anomaly on the Dirac eigenvalue spectrum gets outvoted by the temporal boundary conditions, so that the symmetry violation might disappear from the correlation functions of interest. However, numerical studies using domain-wall fermions suggest that this is not the case. Finally, the equation of state can be obtained from stout or HISQ smearing with very similar results; it appears to be well described by a hadron resonance gas at low T and to match perturbation theory reasonably well at high T.

The next speaker was Saumen Datta speaking on studies of the QCD plasma using lattice correlators. While the short time extent of finite-temperature lattices makes it hard to say much about the spectrum without the use of techniques such as the Maximum Entropy Method, correlators in the spatial directions can be readily used to obtain screening masses. Studies of the spectral function of bottomonium in the Fermilab formalism suggest that the Y(1S) survives up to at least twice the critical temperature.

Sourendu Gupta spoke next about the equation of state in dense QCD. Using the Taylor-expansion method (the Taylor expansion was apparently first invented in the 14th-15th century by the Indian mathematician Madhava) together with Padé approximants to reconstruct the function from the truncated series, it is found that the statistical errors on the reconstruction blow up as one nears the suspected critical point. This can be understood as a specific instance of the "no-free-lunch theorem", because a direct simulation (were it possible) would suffer from critical slowing down as the critical point is approached, which would likewise lead to large statistical errors from a fixed number of configurations.
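
To see the Taylor-plus-Padé idea in isolation (a generic toy of mine, nothing to do with the actual lattice data): given a handful of Taylor coefficients of a function with a nearby singularity, a diagonal Padé approximant typically extrapolates much further than the truncated series itself.

    import numpy as np

    def pade(c, L, M):
        """[L/M] Pade approximant from Taylor coefficients c[0..L+M].
        Returns numerator coeffs a[0..L] and denominator coeffs b[0..M] (b[0] = 1)."""
        c = np.asarray(c, dtype=float)
        # denominator: for k = L+1..L+M require sum_{j=0..M} b_j c_{k-j} = 0
        A = np.array([[c[k - j] if k - j >= 0 else 0.0 for j in range(1, M + 1)]
                      for k in range(L + 1, L + M + 1)])
        b = np.concatenate(([1.0], np.linalg.solve(A, -c[L + 1:L + M + 1])))
        # numerator: match the low-order terms of Q(x) * f(x)
        a = np.array([sum(b[j] * c[i - j] for j in range(min(i, M) + 1))
                      for i in range(L + 1)])
        return a, b

    # Toy function log(1+x): its Taylor series converges only for |x| < 1,
    # but the [3/3] Pade built from the same seven coefficients works well beyond.
    c = [0.0] + [(-1.0) ** (k + 1) / k for k in range(1, 7)]
    a, b = pade(c, 3, 3)

    x = 2.0
    taylor = sum(ck * x**k for k, ck in enumerate(c))
    approx = np.polyval(a[::-1], x) / np.polyval(b[::-1], x)
    print("truncated Taylor at x = 2:", taylor)   # about -5.6, badly divergent
    print("[3/3] Pade at x = 2:      ", approx)   # about 1.098, vs log(3) = 1.0986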

The last talk before lunch was Bastian Brandt with an investigation of an alternative formulation of pure gauge theory using auxiliary bosonic fields, in an attempt to render the QCD action amenable to a dual description that might make it possible to avoid the sign problem at finite baryon chemical potential. The alternative formulation appears to describe exactly the same physics as the standard Wilson gauge action at least for SU(2) in 3D, and in 2D and/or in certain limits, its continuum limit is in fact known to be Yang-Mills theory. However, when fermions are introduced, the dual formulation still suffers from a sign problem, but it is hoped that any trick that might avoid this sign problem would then also avoid the finite-μ one.

After lunch, there were two non-lattice talks. The first one was given by Gautam Mandal, who spoke about thermalisation in integrable models and conformal field theories. In CFTs, it can be shown that for certain initial states, the expectation value of an operator equilibrates to a certain "thermal" expectation value, and a generalisation to integrable models, where the "thermal" density operator includes chemical potentials for all (infinitely many) conserved charges, can also be given.

The last talk of the day was a very lively presentation of the fluid-gravity correspondence by Shiraz Minwalla, who described how gravity in Anti-deSitter space asymptotically goes over to Navier-Stokes hydrodynamics in some sense.

In the evening, the conference banquet took place on the roof terrace of a very nice restaurant serving very good European-inspired cuisine and Indian red wine (also rather nice -- apparently the art of winemaking has recently been adapted to the Indian climate, e.g. the growing season is during the cool season, and this seems to work quite well).

by Georg v. Hippel (noreply@blogger.com) at March 04, 2015 05:30 PM

Emily Lakdawalla - The Planetary Society Blog

Mars Orbiter Mission Methane Sensor for Mars is at work
After several months of near-silence, ISRO's Mars Orbiter Mission has released on Facebook the first data product from its Methane Sensor For Mars. Don't get too excited about methane yet: there is no positive or negative detection. The news here is that the Methane Sensor for Mars is working, systematically gathering data. They also released several new photos of Mars.

March 04, 2015 04:50 PM

Tommaso Dorigo - Scientificblogging

The Borexino Detector And Its Physics Results

At the XVI Neutrino Telescopes conference going on this week in Venice there was a nice presentation on the results of the Borexino experiment. The text below is a writeup of the highlights from the talk, given by Cristiano Galbiati from Princeton University.

read more

by Tommaso Dorigo at March 04, 2015 03:55 PM

astrobites - astro-ph reader's digest

Hot Jupiters Are Very Bad Neighbors

Title: The destruction of inner planetary systems during high-eccentricity migration of gas giants
Authors: Alexander J. Mustill, Melvyn B. Davies, Anders Johansen
First author’s institution: Lund Observatory, Department of Astronomy & Theoretical Physics, Lund University
Status: Submitted to ApJ

Artist's conception of a hot Jupiter. (NASA)

Artist’s conception of a hot Jupiter with no other planets to keep it company. (NASA)

Hot Jupiters are weird and lonely little planets. Well, they’re huge. But otherwise: They’re weird in that they surprised astronomers when we started finding them, giant planets orbiting improbably close in to their stars, as close as 0.015 AU. (Earth, remember, orbits at 1 AU; Mercury’s at about 0.39.) Their presence there clashed with all our ideas about planet formation, modeled on our own solar system.

According to current theories of planet formation, there’s not nearly enough material close to a star to form a planet of Jupiter-ish size. But current theories of planet formation also allow for migration, so there you go—hot Jupiters could have formed farther out from their stars, where raw materials are plentiful, and then later they can migrate in.

Complicating the picture, though, is the fact that hot Jupiters are usually found alone. Small, low-mass planets are common enough in the close orbits that hot Jupiters frequent, except when a hot Jupiter is there.

On the migration end of things, there are, broadly speaking, two possible explanations for hot Jupiters’ positions: Type II migration during planet-formation, moving inward through the gas-rich protoplanetary disk; or, later on, once planets have formed, through gravitational scattering as a giant planet in an eccentric orbit interacts with small planets closer in to the star. But Type II migration doesn’t explain hot Jupiters’ lonely neighborhoods—that migration would happen early enough to leave plenty of planet-forming material undisturbed in its wake. Hot Jupiters’ lack of nearby companions could, of course, be explained by other means, but today’s paper tests the idea that hot Jupiters could be killing two birds with one migrational stone: could late migration via gravitational interactions bring hot Jupiters into their tight orbits while also getting rid of any small, close-in companion planets?

The authors of this paper tested a series of scenarios that are a mix of known circumstance and hypothetical orchestration. They chose four real-world Kepler candidate systems of three low-mass, close-in planets, and added to the mix an imaginary Jupiter on an eccentric orbit with a small pericenter (meaning that on its closest approach to the star, it came very close). For each system-plus-Jupiter, the authors calculated a suite of simulations, each time testing slightly different properties for the giant planet (pericenters ranging from 0.01 AU to 0.25 AU, prograde and retrograde orbits) as well as variations in the small planets’ orbital inclinations.
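
For a concrete feel for what a single run of this kind involves, here is a bare-bones sketch using the open-source REBOUND N-body package (my choice of tool, and entirely made-up masses, orbits and integration time; this is not the authors' setup or code): a star, three close-in low-mass planets, and a Jupiter-mass intruder whose pericentre dips into their region.

    import numpy as np
    import rebound

    sim = rebound.Simulation()              # G = 1 code units
    sim.add(m=1.0)                          # the star (1 solar mass)

    # three low-mass, close-in planets (placeholder values)
    for a in (0.05, 0.08, 0.12):
        sim.add(m=3e-6, a=a, e=0.01)

    # a Jupiter-mass planet on an eccentric orbit with a small pericentre
    a_giant, q_giant = 1.0, 0.03            # semi-major axis and pericentre in AU
    sim.add(m=1e-3, a=a_giant, e=1.0 - q_giant / a_giant)

    sim.move_to_com()
    sim.integrate(2 * np.pi * 1000.0)       # ~1000 yr; the real runs are far longer

    star = sim.particles[0]
    for i in range(1, sim.N):
        p = sim.particles[i]
        r = np.sqrt((p.x - star.x)**2 + (p.y - star.y)**2 + (p.z - star.z)**2)
        print(f"body {i}: current distance from star = {r:.3f} AU")
    # Bodies scattered to very large distances (or driven onto the star) show up
    # immediately in this kind of bookkeeping.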

Even with all of those variations, most of the simulations ended in one of two scenarios as the giant planet migrated in from an eccentric orbit toward a nearly circular one very close to the star: either the low-mass planets were destroyed and the hot Jupiter was left alone, or the giant planet was ejected and one to three low-mass planets were left behind. (In both cases, low-mass planets could be lost by collisions with the star, each other, or the giant planet itself. It’s worth noting that the giant planet’s eccentric orbit often crossed the paths of the orbits of the other planets.) Basically, if a giant planet migrates in to a tight enough orbit to be called a hot Jupiter, it will, in the process, get rid of any other tight-orbit planets. If a giant planet doesn’t eject or destroy those other planets, it won’t become a hot Jupiter.

Most giant planets in the simulation don’t end up in narrow enough orbits to become true “hot Jupiters.” If a giant planet accreted one or more of the inner planets in its migration, it was more likely to become a hot Jupiter, although those interactions were relatively rare.


The mysteries of hot Jupiters are by no means all settled. This is just one proposition, and even at that, it starts with the giant planet in an eccentric orbit, not testing the plausibility of the starting conditions (although they are implied by earlier research). The more we learn about exoplanet systems, the more the diversity of the cosmos and perhaps our own solar system’s weirdness become apparent.

by Jaime Green at March 04, 2015 02:56 PM

Peter Coles - In the Dark

The Law of Averages

Just a couple of weeks ago I found myself bemoaning my bad luck in the following terms

A few months have passed since I last won a dictionary as a prize in the Independent Crossword competition. That’s nothing remarkable in itself, but since my average rate of dictionary accumulation has been about one a month over the last few years, it seems a bit of a lull.  Have I forgotten how to do crosswords and keep sending in wrong solutions? Is the Royal Mail intercepting my post? Has the number of correct entries per week suddenly increased, reducing my odds of winning? Have the competition organizers turned against me?

In fact, statistically speaking, there’s nothing significant in this gap. Even if my grids are all correct, the number of correct grids has remained constant, and the winner is pulled at random from those submitted (i.e. in such a way that all correct entries are equally likely to be drawn), a relatively long unsuccessful period such as I am experiencing at the moment is not at all improbable. The point is that such runs are far more likely in a truly random process than most people imagine, as indeed are runs of successes. Chance coincidences happen more often than you think.
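
To put a rough number on that (with made-up parameters of my own: suppose each weekly entry wins with probability 0.23, which matches roughly one dictionary a month), the chance that a five-year stretch of entries contains at least one winless run of three months or more turns out to be high, as a quick simulation shows:

    import numpy as np

    rng = np.random.default_rng(42)

    p_win = 0.23      # assumed chance of winning in any given week (~1 win per month)
    weeks = 260       # five years of weekly competitions
    gap = 13          # a "few months" of drought, measured in weeks
    n_sims = 20_000

    def longest_gap(wins):
        """Length of the longest run of consecutive losing weeks."""
        longest = run = 0
        for w in wins:
            run = 0 if w else run + 1
            longest = max(longest, run)
        return longest

    hits = sum(longest_gap(rng.random(weeks) < p_win) >= gap for _ in range(n_sims))
    print(f"P(at least one {gap}-week drought in {weeks} weeks) ~ {hits / n_sims:.2f}")
    # comes out in the region of 0.85: long lulls are entirely expected in a fair draw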

Well, as I suspected would happen soon my run of ill fortune came to an end today with the arrival of this splendid item in the mail:

dictionary_beel

It’s the prize for winning Beelzebub 1303, the rather devilish prize cryptic in the Independent on Sunday Magazine. It’s nice to get back to winning ways. Now what’s the betting I’ll get a run of successes?

P.S. I used the title “Law of Averages” just so I could point out in a footnote that there’s actually no such thing.


by telescoper at March 04, 2015 01:25 PM

Georg von Hippel - Life on the lattice

QNP 2015, Day Two
Hello again from Valparaíso. Today's first speaker was Johan Bijnens with a review of recent results from chiral perturbation theory in the mesonic sector, including recent results for charged pion polarisabilities and for finite-volume corrections to lattice measurements. To allow others to perform their own calculations for their own specific needs (which might include technicolor-like theories, which will generally have different patterns of chiral symmetry breaking, but otherwise work just the same way), Bijnens & Co. have recently published CHIRON, a general two-loop mesonic χPT package. The leading logarithms have been determined to high orders, and it has been found that the speed of convergence depends both on the observable and on whether the leading-order or physical pion decay constant is used.

Next was Boris Grube, who presented some recent results from light-meson spectroscopy. The light mesons are generally expected to be some kind of superpositions of quark-model states, hybrids, glueballs, tetraquark and molecular states, as may be compatible with their quantum numbers in each case. The most complex sector is the 0++ sector of f0 mesons, in which the lightest glueball state should lie. The γγ width of the f0(1500) appears to be compatible with zero, which would agree with the expectations for a glueball, whereas the f0(1710) has a photonic width more in agreement with being an s-sbar state. On the other hand, in J/ψ -> γ (ηη), which as a gluon-rich process should couple strongly to glueball resonances, little or no f0(1500) is seen, whereas a glueball nature for the f0(1710) would be supported by these results. New data to come from GlueX, and later from PANDA, should help to clarify things.

The next speaker was Paul Sorensen with a talk on the search for the critical point in the QCD phase diagram. The quark-gluon plasma at RHIC is not only a man-made system that is hundreds of thousands of times hotter than the centre of the Sun, it is also the most perfect fluid known, as it comes close to saturating the conjectured viscosity bound η/s ≥ 1/(4π). Studying it experimentally is quite difficult, however, since one must extrapolate back to a small initial fireball, or "little bang", from correlations between thousands of particle tracks in a detector, not entirely dissimilar from the situation in cosmology, where the properties of the hot big bang (and previous stages) are inferred from angular correlations in the cosmic microwave background. Beam energy scans find indications that the phase transition becomes first-order at higher densities, which would indicate the existence of a critical endpoint, but more statistics and more intermediate energies are needed.

After the coffee break, François-Xavier Girod spoke about Generalised Parton Distributions (GPDs) and deep exclusive processes. GPDs, which reduce to form factors and to parton distributions upon integrating out the unneeded variables in each case, correspond to a three-dimensional image of the nucleon in terms of the longitudinal momentum fraction and the transverse impact parameter, and their moments are related to matrix elements of the energy-momentum tensor. Experimentally, they are probed using deeply virtual Compton scattering (DVCS); the 12 GeV upgrade at Jefferson Lab will increase the coverage in both Bjorken-x and Q2, and the planned electron-ion collider is expected to allow probing the sea and gluon GPDs as well.

After the lunch break, there were parallel sessions. I chaired the parallel session on lattice and other perturbative methods, with presentations of lattice results by Eigo Shintani and Tereza Mendes, as well as a number of AdS/QCD-related results by various others.

by Georg v. Hippel (noreply@blogger.com) at March 04, 2015 12:06 PM

Peter Coles - In the Dark

Sonny Rollins’ letter to Coleman Hawkins

telescoper:

I couldn’t resist reblogging this wonderful letter from one great saxophonist, Sonny Rollins, to another, Coleman Hawkins.

The letter was written in 1962. You can find here on Youtube a recording of the two of them playing the great Jerome Kern tune All The Things You Are at the Newport Jazz Festival just a few months later in summer 1963. The title seems to match the sentiments of the letter rather nicely!

Originally posted on Simon Purcell Online:

Do read this, a touching letter from Sonny Rollins to Coleman Hawkins in 1962 (from the website www.jazzclef.com). The greatest players possess not only self-discipline and powers of concentration, but generally, great humility.



by telescoper at March 04, 2015 10:47 AM

Lubos Motl - string vacua and pheno

Abolished Big Bang is all around us
A rant on the unlimited stupidity of the masses

Three weeks ago, I discussed a media storm following the publication of a lousy article by Ali and Das that was interpreted as a disproof of the big bang theory. Lots of the Czech media joined at that time.



However, what I didn't expect was that this insanity was going to continue for weeks. On Monday, when I opened the novinky.cz [translation of the name of the server: news.cz] app on my smartphone, it immediately impressed me by the article called There Has Never Been Any Big Bang, a New Study Claims. Holy cow. There was an easy way to save my life: to drag my finger from the left side of the screen to the right one. The terrible thing disappeared.




But it seems to me that while novinky.cz is copying similar junk from some Western "MSM" sources, there are many smaller news outlets in the food chain that similarly depend on novinky.cz.




So just minutes after I turned on the radio, Rádio Impuls, today in the morning, I *heard* the new gospel in the audio form, too. After half a minute, I had to turn it off and switch to another station.

Among the countless Czech commercial radio stations, Radio Impuls has the largest audience. To mention one example of their power, they have their own traveler, George Kolbaba, who has been to every country on the globe about 5 times on average. You may win lots of money and houses if you reply "Hello, this is the Impuls family" when they call your number.

And they also need to show how intellectual they can be, it seems, so their 1 million listeners (who listen to the radio at least once a day) were offered this great news that the big bang was wrong, too.

I don't want to flood you with irrelevant data about a radio station in a foreign country. These details are not important, you are not interested in them, and you live in slightly different environments where the essence of the situation is similar.

My broader point is that at least when it comes to science, the public is ready to buy an arbitrarily stinky and random piece of šit, devour it, and smack their lips, and the longer the food chain that improves the šittiness of the "products" is, the better for the public.

The average people are staggering idiots. While they love to celebrate junk, the most valuable science goes completely unnoticed or is even deliberately attacked. And of course the masses don't really distinguish science from religion or from the latest superstitions about "which food is healthy for you".

You know, the paper by Ali and Das is a bad paper and the authors would have virtually no chance to get a postdoc job at a good place these days. But yet another bad paper of theirs – because of the "ambitious" interpretation it has been linked to – is able to convince almost all the mass media working for the average people that it is a game-changer in science.

In the hierarchy of the mass media, there are some influential ones, like Reuters or AP etc. What they wrote about that paper was already atrocious. (Novinky.cz seems to cite "Science Alert" as their source – that's also a bizarre source of information for the journalists.) But what's worse is that this stuff is getting even worse as it travels through the food chain, when the "story" is being copied from one place to another with additional mutations which are mostly making the story even "more popular" (i.e. more šitty).



97% of the people simply have no capability, no intent, or no desire to distinguish gold from cr*p, not even in the most obvious situation. This problem isn't restricted to physics or cosmology. I can't resist mentioning another thing I received by e-mail days ago – an e-mail about an April 2007 experiment by the Washington Post.

Joshua Bell, one of the top U.S. violinists, was placed in a subway station in Washington D.C. for 43 minutes during the rush hour, and played 6 classical pieces that he routinely plays in concert halls. He used an instrument made by Antonio Stradivari himself, one whose price is about $3.5 million.

A ticket to his concert costs about $100. How much did he earn from those 10,000 or so passers-by in the U.S. capital? 27 people – virtually none of whom spent much time with the virtuoso – gave him money, and in total he collected slightly over $32. Thirty-two damn dollars. Couldn't they hear that this one wasn't just another homeless guy begging for money?

Some of these people must be the same ones who pay the $100 tickets in Boston's Symphony Hall and elsewhere. I think that the average people's evaluation of the reality – and of music and science, among other things – depends on group think and hype to an overwhelming extent.

I think that most people go to classical music concerts in order to improve their image. They don't give a damn about classical music itself. In the same way, most people talk about the big bang in order to look smart. They don't give a damn about science or the Universe and they have no clue what is likely and what is almost certainly rubbish. And they don't need to care about the actual science because the only arbiters of their "science" are uneducated morons similar to themselves.

Almost universally, cr*p is gold and gold is cr*p for these people. A little bit of P.R. work would probably be enough to sell the Czech "Annie Dido" ["Dajdou", to be more precise] as one of the greatest musicians of all ages.

Another similar story that I found in the mailbox today – one combining human stupidity and corruption in a proportion that is hard to measure – is about the Czech public spending. We may often ask why the roads in Czechia are more expensive than those in Germany and Austria – even though the salaries in Austria and Germany are significantly higher than ours.

The hamster (Cricetus cricetus) is to blame.

Last September, they were going to build a new speed highway in Moravia – but it was found out that there were hamsters over there. So an "environmental" company called Ekoteam (which is really one person, Mr Vladimír Ludvík) was hired to count the burrows of hamsters between two villages (named Třebětice and Alexovice). His result was that there were 73 burrows including 45 active ones.

The research was summarized in this 9-page preprint. The document is really composed of 3 or so pages of text combined with lots of easy-to-create images – pieces of maps with a few symbols added. It's in Czech but you may have a look and see what's roughly inside and how hard it is to produce such a preprint.

Now, the question is how much "Ekoteam" was paid for this "research" by the Director Bureau of the Roads and Highways in the Czech Republic, a state-controlled company attached to the Ministry of Transportation. You may leave a comment with your guess of how much it was, and you may also say how much it should cost. If you were asked to count 73 holes and write a 9-page Word file with a few paragraphs and several images with maps, how much would you want as compensation?

The answer is that Ekoteam received $100,000. One hundred thousand damn dollars (well, 2.4 million Czech crowns which is the same thing). Look at the invoice. Can you imagine that? Think how much hard work and very complicated pages of research a HEP physics postdoc has to do per year to remain in the business.

Instead, if he or she were connected to some easier sources of money, he or she could have counted 73 holes near a village, written a simple crackpot-like Word document, and guaranteed an income that would be enough for two or more years.

I think that such things would probably not happen if the Director Bureau of the Roads and Highways were a private company. Individual people and, to a slightly lesser extent, companies may understand the value of their money. But if one is deciding about other people's money, it's not difficult to be generous – especially if the recipient is a friend or if he has something else to offer elsewhere.

by Luboš Motl (noreply@blogger.com) at March 04, 2015 08:01 AM

The n-Category Cafe

Lebesgue's Universal Covering Problem

Lebesgue’s universal covering problem is famously difficult, and a century old. So I’m happy to report some progress:

• John Baez, Karine Bagdasaryan and Philip Gibbs, Lebesgue’s universal covering problem.

But we’d like you to check our work! It will help if you’re good at programming. As far as the math goes, it’s just high-school geometry… carried to a fanatical level of intensity.

Here’s the story:

A subset of the plane has diameter 1 if the distance between any two points in this set is ≤ 1. You know what a circle of diameter 1 looks like. But an equilateral triangle with edges of length 1 also has diameter 1:

After all, two points in this triangle are farthest apart when they’re at two corners.

Note that this triangle doesn’t fit inside a circle of diameter 1:

There are lots of sets of diameter 1, so it’s interesting to look for a set that can contain them all.

In 1914, the famous mathematician Henri Lebesgue sent a letter to a pal named Pál. And in this letter he challenged Pál to find the convex set with smallest possible area such that every set of diameter 1 fits inside.

More precisely, he defined a universal covering to be a convex subset of the plane that can cover a translated, reflected and/or rotated version of every subset of the plane with diameter 1. And his challenge was to find the universal covering with the least area.

Pál worked on this problem, and 6 years later he published a paper on it. He found a very nice universal covering: a regular hexagon in which one can inscribe a circle of diameter 1. This has area

\frac{\sqrt{3}}{2} = 0.86602540\dots

But he also found a universal covering with less area, by removing two triangles from this hexagon—for example, the triangles C1C2C3 and E1E2E3 here:

Our paper explains why you can remove these triangles, assuming the hexagon was a universal covering in the first place. The resulting universal covering has area

2 - \frac{2}{\sqrt{3}} = 0.84529946\dots
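
If you'd like to check those two closed-form areas numerically before diving into the paper, here's a quick sketch (mine, not part of the paper's verification): build Pál's hexagon of width 1 from its vertices, compute its area with the shoelace formula, and compare with the expressions above.

    import numpy as np

    # Pal's hexagon: a regular hexagon whose inscribed circle has diameter 1,
    # so the apothem is 1/2 and the circumradius is 1/sqrt(3).
    R = 1 / np.sqrt(3)
    angles = np.arange(6) * np.pi / 3
    verts = np.column_stack((R * np.cos(angles), R * np.sin(angles)))

    def shoelace(v):
        """Polygon area from its vertices (shoelace formula)."""
        x, y = v[:, 0], v[:, 1]
        return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

    print(shoelace(verts))         # 0.8660254... = sqrt(3)/2, the hexagon's area
    print(2 - 2 / np.sqrt(3))      # 0.8452994... , after removing the two triangles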

In 1936, Sprague went on to prove that more area could be removed from another corner of Pál’s original hexagon, giving a universal covering of area

0.844137708435\dots

In 1992, Hansen took these reductions even further by removing two more pieces from Pál’s hexagon. Each piece is a thin sliver bounded by two straight lines and an arc. The first piece is tiny. The second is downright microscopic!

Hansen claimed the areas of these regions were 4 \cdot 10^{-11} and 6 \cdot 10^{-18}. However, our paper redoes his calculation and shows that the second number is seriously wrong. The actual areas are 3.7507 \cdot 10^{-11} and 8.4460 \cdot 10^{-21}.

Philip Gibbs has created a Java applet illustrating Hansen’s universal cover. I urge you to take a look! You can zoom in and see the regions he removed:

• Philip Gibbs, Lebesgue’s universal covering problem.

I find that my laptop, a Windows machine, makes it hard to view Java applets because they’re a security risk. I promise this one is safe! To be able to view it, I had to go to the “Search programs and files” window, find the “Configure Java” program, go to “Security”, and add

http://gcsealgebra.uk/lebesgue/hansen

to the “Exception Site List”. It’s easy once you know what to do.

And it’s worth it, because only the ability to zoom lets you get a sense of the puny slivers that Hansen removed! One is the region XE2T here, and the other is T′C3V:

You can use this picture to help you find these regions in Philip Gibbs’ applet. But this picture is not to scale! In fact the smaller region, T′C3V, has length 3.7 \cdot 10^{-7} and maximum width 1.4 \cdot 10^{-14}, tapering down to a very sharp point.

That’s about a few atoms wide if you draw the whole hexagon on paper! And it’s about 30 million times longer than it is wide. This is the sort of thing you can only draw with the help of a computer.

Anyway, Hansen’s best universal covering had an area of

0.844137708416\dots

This tiny improvement over Sprague’s work led Klee and Wagon to write:

it does seem safe to guess that progress on [this problem], which has been painfully slow in the past, may be even more painfully slow in the future.

However, our new universal covering removes about a million times more area than Hansen’s larger region: a whopping 2.233 \cdot 10^{-5}. So, we get a universal covering with area

0.844115376859\dots

The key is to slightly rotate the dodecagon shown in the above pictures, and then use the ideas of Pál and Sprague.

There’s a lot of room between our number and the best lower bound on this problem, due to Brass and Sharifi:

0.832

So, one way or another, we can expect a lot of progress now that computers are being brought to bear. Philip Gibbs has a heuristic computer calculation pointing toward a value of

0.84408

so perhaps that’s what we should shoot for.

Read our paper for the details! If you want to check our work, we’ll be glad to answer lots of detailed questions. We want to rotate the dodecagon by an amount that minimizes the area of the universal covering we get, so we use a program to compute the area for many choices of rotation angle:

• Philip Gibbs, Java program.

The program is not very long—please study it or write your own, in your own favorite language! The output is here:

• Philip Gibbs, Java program output.

and as explained at the end of our paper, the best rotation angle is about 1.3°.

by john (baez@math.ucr.edu) at March 04, 2015 01:34 AM

March 03, 2015

Christian P. Robert - xi'an's og

Overfitting Bayesian mixture models with an unknown number of components

During my Czech vacations, Zoé van Havre, Nicole White, Judith Rousseau, and Kerrie Mengersen posted on arXiv a paper on overfitting mixture models to estimate the number of components. This is directly related to Judith and Kerrie’s 2011 paper and to Zoé’s PhD topic. The paper also returns to the vexing (?) issue of label switching! I very much like the paper, and not only because the authors are good friends, but also because it brings a solution to an approach I briefly attempted with Marie-Anne Gruet in the early 1990’s, just before finding out about the reversible jump MCMC algorithm of Peter Green at a workshop in Luminy and considering we were not going to “beat the competition”! Hence not publishing the output of our over-fitted Gibbs samplers that were nicely emptying extra components… It also brings a rebuke about a later assertion of mine at an ICMS workshop on mixtures, where I defended the notion that over-fitted mixtures could not be detected, a notion that was severely disputed by David MacKay…

What is so fantastic in Rousseau and Mengersen (2011) is that a simple constraint on the Dirichlet prior on the mixture weights suffices to guarantee that asymptotically superfluous components will empty out and signal they are truly superfluous! The authors here combine the over-fitted mixture with a tempering strategy, which seems somewhat redundant, the number of extra components being a sort of temperature, but which eliminates the need for fragile RJMCMC steps. Label switching is obviously even more of an issue with a larger number of components, and identifying empty components seems to require a lack of label switching for some components to remain empty!
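
To watch the emptying-out happen on a toy example, here is a bare-bones Gibbs sampler for a deliberately over-fitted univariate Gaussian mixture (my own illustration, not the paper's tempered sampler or its Zswitch post-processing; all priors and hyperparameters are arbitrary choices, with the Dirichlet concentration set small in the spirit of the Rousseau–Mengersen condition of keeping it below d/2, d being the dimension of the component-specific parameters):

    import numpy as np

    rng = np.random.default_rng(0)

    # data from a 2-component mixture, fitted with K = 6 components
    n = 400
    comp = rng.random(n) < 0.4
    x = np.where(comp, rng.normal(-2.0, 0.7, n), rng.normal(1.5, 1.0, n))

    K = 6
    alpha = 0.1                # small Dirichlet concentration: spare components may empty
    m0, s0 = 0.0, 10.0         # N(m0, s0^2) prior on the component means
    a0, b0 = 2.0, 2.0          # inverse-gamma(a0, b0) prior on the component variances

    w = np.full(K, 1.0 / K)
    mu = rng.normal(0.0, 2.0, K)
    sig2 = np.ones(K)

    n_iter = 3000
    occupancy = np.zeros((n_iter, K), dtype=int)

    for t in range(n_iter):
        # 1. allocations z_i | w, mu, sig2
        logp = (np.log(w) - 0.5 * np.log(2 * np.pi * sig2)
                - 0.5 * (x[:, None] - mu) ** 2 / sig2)
        logp -= logp.max(axis=1, keepdims=True)
        prob = np.exp(logp)
        prob /= prob.sum(axis=1, keepdims=True)
        z = (prob.cumsum(axis=1) < rng.random((n, 1))).sum(axis=1)

        counts = np.bincount(z, minlength=K)
        occupancy[t] = counts

        # 2. weights | z ~ Dirichlet(alpha + counts)
        w = rng.dirichlet(alpha + counts)

        # 3. means and variances | z (semi-conjugate updates; prior draws if empty)
        for k in range(K):
            xk = x[z == k]
            nk = len(xk)
            v_post = 1.0 / (1.0 / s0**2 + nk / sig2[k])
            m_post = v_post * (m0 / s0**2 + xk.sum() / sig2[k])
            mu[k] = rng.normal(m_post, np.sqrt(v_post))
            sig2[k] = 1.0 / rng.gamma(a0 + 0.5 * nk,
                                      1.0 / (b0 + 0.5 * ((xk - mu[k]) ** 2).sum()))

    # average occupancy per component after burn-in: most of the six should sit near zero
    print(np.sort(occupancy[1000:].mean(axis=0))[::-1].round(1))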

When reading through the paper, I came upon the condition that only the priors of the weights are allowed to vary between temperatures. Distinguishing the weights from the other parameters does make perfect sense, as some representations of a mixture work without those weights. Still I feel a bit uncertain about the fixed prior constraint, even though I can see the rationale in not allowing for complete freedom in picking those priors. More fundamentally, I am less and less happy with independent identical or exchangeable priors on the components.

Our own recent experience with almost zero weights mixtures (and with Judith, Kaniav, and Kerrie) suggests not using solely a Gibbs sampler there as it shows poor mixing. And even poorer label switching. The current paper does not seem to meet the same difficulties, maybe thanks to (prior) tempering.

The paper proposes a strategy called Zswitch to resolve label switching, which amounts to identifying a MAP for each possible number of components followed by a relabelling, even though I do not entirely understand the way the permutation is constructed. I wonder in particular about the cost of the relabelling.


Filed under: Statistics Tagged: component of a mixture, Czech Republic, Gibbs sampling, label switching, Luminy, mixture estimation, Peter Green, reversible jump, unknown number of components

by xi'an at March 03, 2015 11:15 PM

Emily Lakdawalla - The Planetary Society Blog

Watch Ceres rotate: A guide to interpreting Dawn's images
NASA held a press briefing on the Dawn mission yesterday, sharing some new images and early interpretations of them. I see lots of things that intrigue me, and I'm looking forward to Dawn investigating them in more detail. I invite you to check out these photos yourself, and offer you some guidance on things to look for.

March 03, 2015 08:55 PM

astrobites - astro-ph reader's digest

A new tool for hunting exoplanetary rings
  • A Novel Method for identifying Exoplanetary Rings
  • Authors: J. I. Zuluaga, D. M. Kipping, M. Sucerquia and J. A. Alvarado
  • First Author’s Institutions: 1) Harvard-Smithsonian Center for Astrophysics, 2) FACom – Instituto de Fisica – FCEN, Universidad de Antioquia, Colombia,  3) Fulbright Visitor Scholar

Today’s question: Do you like rings?

Let’s start this Astrobite a bit differently than usual. Before you read on, please click here and tell us about your favorite planet… Are you done? Good! The reason why I’m asking has to do with the ring structure around Saturn. Assuming you like Saturn’s rings, you are probably also curious whether exoplanets reveal ring structures, too, and how those can be detected. The answer to the first question is ‘Yes!’ and if you like Saturn, you probably fell in love with this planet. As Ruth told you, the authors put a lot of work into finding an explanation for the observed profile by comparing different transit profiles before they concluded that the planet hosts circumplanetary rings. That’s the point where today’s authors come into play. They present a model that simplifies the detection of exorings.

exoring_transit_sketch

Figure 1: Sketch of a transit of a planet with rings (top) and the corresponding schematic illustration of the observed flux (bottom). T_{14} corresponds to the entire transiting interval of the planet with/without rings, while T_{23} corresponds to the time interval of the full transit of the planet with/without rings. The figure corresponds to Fig. 1 in the letter.

Characteristic features of planets hosting circumplanetary rings

Figure 1 illustrates the underlying idea. The yellow area represents the host star and I guess it is easy to spot the planet with its circumplanetary rings moving from left to right. When the planet moves together with the rings to the right of position x_3 (when it starts its egress), one side of the rings will not hide the light of the host star anymore and thus the observed flux from the host star slowly increases. The planet itself leaves the transiting area a bit later, which then leads to a steeper increase in flux until the planet does not hide any of the light anymore. At that point the increase in flux is shallower again due to the other side of the ring structure, which still covers part of the light from the host star before this side also stops transiting at position x_4. When the planet and its rings start transiting (the ingress), the process is reversed: more of the star’s disc gets hidden and the flux decreases. In principle, there are two different time intervals for a planet’s transit:

  • The time from when the planet first enters the transiting region until it no longer covers any of the host’s light (corresponding to T_{14} or just “transit”).
  • The time during which the entire planet is in front of the host’s disc (corresponding to T_{23} or “full transit”).

Considering now also the rings, four time intervals exist, namely T_{14,ring}, T_{14,planet}, T_{23,planet} and T_{23,ring} (as shown in the graph of Fig. 1). The relative flux difference during the transit of the planet with and without the rings compared to the unperturbed flux of the star is called the transit depth (\delta=(F-F_0)/F_0). In practice, the slopes corresponding to the ingress/egress of the ring and the ingress/egress of the planet are difficult to distinguish from each other. The authors stress that exoplanets with rings could be mistakenly interpreted as ringless planets. Assuming the star and the planet are spherical and the rings have a uniform shape, the transit depth simply becomes the ratio of the area hidden by the planet including the rings to the projected surface area of the star (\delta=A_{rp}/A_{\ast}). If the rings’ plane is perpendicular to the orbiting direction and the observed transit depth of the ring is interpreted as the transit depth of a ringless planet, the overestimated radius of the planet leads to an underestimation of the planetary density (as shown in Fig. 2).
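
As a toy illustration of how large that bias can be (my own numbers, not the letter's: perfectly opaque rings, the face-on limiting case, Saturn-ish proportions):

    import numpy as np

    # Opaque, face-on ring from R_in to R_out around a planet of radius R_p = 1.
    R_p = 1.0
    R_in, R_out = 1.5, 2.3        # made-up, roughly Saturn-like proportions

    A_blocked = np.pi * R_p**2 + np.pi * (R_out**2 - R_in**2)
    R_obs = np.sqrt(A_blocked / np.pi)     # radius a ringless fit would report

    print(f"apparent radius: {R_obs:.2f} R_p")
    print(f"transit depth too large by a factor of {(R_obs / R_p)**2:.1f}")
    print(f"planetary density too small by a factor of {(R_obs / R_p)**3:.1f}")
    # Tilting the rings shrinks the projected annulus (roughly with the cosine of the
    # projected inclination), interpolating between this case and no bias at all.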

Figure 2: Illustration of the effect of the projected inclination of the rings on the ratio of observed to true planetary radius. The degree of inclination is illustrated by the black dots and their surrounding rings. The different colors represent different transit depths. The figure corresponds to the upper panel in Fig. 2 of the letter.


Additionally, the authors repeat the derivation showing that the stellar density \rho_{\ast} is proportional to \delta^{0.75} (T_{14}^2-T_{23}^2)^{-1.5}, which reveals another potential misinterpretation. Figure 3 illustrates the effect on stellar density of different ring inclinations. In the case of a ring plane perpendicular to the orbital direction, the stellar density would be overestimated. However, in the more common case of alignment of the rings’ plane with the orbiting plane, the increased difference T_{14}-T_{23} leads to an underestimation of the stellar density.

A publicly available code allows for hunting exoring candidates

Taking into account the described phenomena of anomalous transit depth and the photo-ring effect, and estimating probability distribution functions for their occurrence, the authors developed a computer code which you can use to go out hunting for exoring candidates! They suggest that you focus on planets (and candidates) with low densities and use their publicly available code (http://github.org/facom/exorings) to do so. But here’s a disclaimer: the code can only find candidates. To confirm their existence you still need to do a complex fit of the light curve. That’s something the code cannot do for you.

Figure 3: Illustration of the photo-ring effect. The color scale displays the relative difference between the observed and the true stellar density. The two axes represent the tilt of the rings with respect to two reference planes; note that the y-axis corresponds to the actual angle, while the projected inclination on the x-axis is given as a cosine. The black crosses represent a sub-sample of observed transits with low obliquity. The figure corresponds to the upper panel of Fig. 3 in the letter.

by Michael Küffmeier at March 03, 2015 07:44 PM

ZapperZ - Physics and Physicists

Two Quantum Properties Teleported Simultaneously
People all over the net are going ga-ga over the report on imaging the wave and particle behavior of light at the same time. I, on the other hand, am more fascinated by the report that two different quantum properties have been teleported simultaneously for the very first time.

The values of two inherent properties of one photon – its spin and its orbital angular momentum – have been transferred via quantum teleportation onto another photon for the first time by physicists in China. Previous experiments have managed to teleport a single property, but scaling that up to two properties proved to be a difficult task, which has only now been achieved. The team's work is a crucial step forward in improving our understanding of the fundamentals of quantum mechanics and the result could also play an important role in the development of quantum communications and quantum computers. 

 See if you can view the actual Nature paper here. I'm not sure how long the free access will last.

Zz.

by ZapperZ (noreply@blogger.com) at March 03, 2015 04:48 PM

Quantum Diaries

Detecting something with nothing

This article appeared in Fermilab Today on March 3, 2015.

From left: Jason Bono (Rice University), Dan Ambrose (University of Minnesota) and Richie Bonventre (Lawrence Berkeley National Laboratory) work on the Mu2e straw chamber tracker unit at Lab 3. Photo: Reidar Hahn

Researchers are one step closer to finding new physics with the completion of a harp-shaped prototype detector element for the Mu2e experiment.

Mu2e will look for the conversion of a muon to only an electron (with no other particles emitted) — something predicted but never before seen. This experiment will help scientists better understand how these heavy cousins of the electron decay. A successful sighting would bring us nearer to a unifying theory of the four forces of nature.

The experiment will be 10,000 times as sensitive as other experiments looking for this conversion, and a crucial part is the detector that will track the whizzing electrons. Researchers want to find one whose sole signature is its energy of 105 MeV, indicating that it is the product of the elusive muon decay.

In order to measure the electron, scientists track the helical path it takes through the detector. But there’s a catch. Every interaction with detector material skews the path of the electron slightly, disturbing the measurement. The challenge for Mu2e designers is thus to make a detector with as little material as possible, says Mu2e scientist Vadim Rusu.

“You want to detect the electron with nothing — and this is as close to nothing as we can get,” he said.

So how to detect the invisible using as little as possible? That’s where the Mu2e tracker design comes in. Panels made of thin straws of metalized Mylar, each only 15 microns thick, will sit inside a cylindrical magnet. Rusu says that these are the thinnest straws that people have ever used in a particle physics experiment.

These straws, filled with a combination of argon and carbon dioxide gas and threaded with a thin wire, will wait in vacuum for the electrons. Circuit boards placed on both ends of the straws will gather the electrical signal produced when electrons hit the gas inside the straw. Scientists will measure the arrival times at each end of the wire to help accurately plot the electron’s overall trajectory.
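
As a rough illustration of the idea (this is not the Mu2e reconstruction software, and the propagation speed below is an assumed, typical number), the hit position along a straw follows from the difference of the two arrival times:

    # Locate a hit along the wire from the two arrival times; z is measured from the
    # straw centre, positive towards the right-hand readout.
    def hit_position(t_left_ns, t_right_ns, v_prop_m_per_ns=0.2, length_m=1.0):
        # Assumed signal speed of ~0.2 m/ns (a typical fraction of the speed of light).
        z = 0.5 * v_prop_m_per_ns * (t_left_ns - t_right_ns)
        # Clamp to the physical straw length as a simple sanity check.
        return max(-length_m / 2, min(length_m / 2, z))

    # A pulse arriving 1.5 ns earlier at the left end started roughly 15 cm left of centre.
    print(hit_position(t_left_ns=4.0, t_right_ns=5.5))   # -0.15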

“This is another tricky thing that very few have attempted in the past,” Rusu said.

The group working on the Mu2e tracker electronics has also created the tiny, low-power circuit boards that will sit at the end of each straw. With limited space to run cooling lines, which are needed to whisk away heat that would otherwise linger in the vacuum, the electronics needed to be as cool and small as possible.

“We actually spent a lot of time designing very low-power electronics,” Rusu said.

This first prototype, which researchers began putting together in October, gives scientists a chance to work out kinks, improve design and assembly procedures, and develop the necessary components.

One lesson already learned? Machining curved metal with elongated holes that can properly hold the straws is difficult and expensive. The solution? Using 3-D printing to make a high-tech, transparent plastic version instead.

Researchers also came up with a system to properly stretch the straws into place. While running a current through the straw, they use a magnet to pluck the straw — just like strumming a guitar string — and measure the vibration. This lets them set the proper tension that will keep the straw straight throughout the lifetime of the experiment.
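
The tension follows from the measured fundamental frequency through the standard string relation f = (1/2L)\sqrt{T/\mu}; here is a back-of-the-envelope sketch with invented numbers that are not Mu2e specifications:

    # For a string fixed at both ends, f = (1/2L) * sqrt(T/mu), so T = mu * (2*L*f)**2.
    def straw_tension(freq_hz, length_m, mass_per_length_kg_per_m):
        return mass_per_length_kg_per_m * (2.0 * length_m * freq_hz) ** 2

    # Purely illustrative values, assumed for the sake of the example.
    mu = 2.0e-4     # kg/m, linear mass density of a metalized Mylar straw plus wire
    L = 1.0         # m, straw length
    f = 55.0        # Hz, measured fundamental frequency from the magnetic "pluck"
    print(f"tension ~ {straw_tension(f, L, mu):.2f} N")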

Although the first prototype of the tracker is complete, scientists are already hard at work on a second version (using the 3D-printed plastic), which should be ready in June or July. The prototype will then be tested for leaks and to see if the electronics pick up and transmit signals properly.

A recent review of Mu2e went well, and Rusu expects work on the tracker construction to begin in 2016.

Lauren Biron

by Fermilab at March 03, 2015 03:22 PM

Georg von Hippel - Life on the lattice

QNP 2015, Day One
Hello from Valparaíso, where I continue this year's hectic conference circuit at the 7th International Conference on Quarks and Nuclear Physics (QNP 2015). Except for some minor inconveniences and misunderstandings, the long trip to Valparaíso (via Madrid and Santiago de Chile) went quite smoothly, and so far, I have found Chile a country of bright sunlight and extraordinarily helpful and friendly people.

The first speaker of the conference was Emanuele Nocera, who reviewed nucleon and nuclear parton distributions. The study of parton distributions becomes necessary because hadrons are really composed not simply of valence quarks, as the quark model would have it, but of an indefinite number of (sea) quarks, antiquarks and gluons, any of which can contribute to the overall momentum and spin of the hadron. In an operator product expansion framework, hadronic scattering amplitudes can then be factorised into Wilson coefficients containing short-distance (perturbative) physics and parton distribution functions containing long-distance (non-perturbative) physics. The evolution of the parton distribution functions (PDFs) with the momentum scale is given by the DGLAP equations containing the perturbatively accessible splitting functions. The PDFs are subject to a number of theoretical constraints, of which the sum rules for the total hadronic momentum and valence quark content are the most prominent. For nuclei, one can assume that a factorisation similar to that for hadrons still holds, and that the nuclear PDFs are linear combinations of nucleon PDFs modified by multiplication with a binding factor; however, nuclei exhibit correlations between nucleons, which are not well-described in such an approach. Combining all available data from different sources, global fits to PDFs can be performed using either a standard χ2 fit with a suitable model, or a neural network description. There are far more and better data on nucleon than nuclear PDFs, and for nucleons the amount and quality of the data also differs between unpolarised and polarised PDFs, which are needed to elucidate the "proton spin puzzle".
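
As a small aside, the sum-rule constraints are easy to illustrate numerically. The sketch below uses deliberately made-up toy parton shapes (not a fitted PDF set): the valence distributions are normalised to the quark-counting rules, and the gluon is then fixed so that the momentum sum rule \sum_i \int_0^1 x f_i(x) dx = 1 comes out right.

    from scipy.integrate import quad

    # Toy, made-up parton shapes at some fixed scale, for illustrating the sum rules only.
    def beta_shape(x, a, b):
        return x**a * (1.0 - x)**b

    # Quark-counting sum rules: integral of u_v = 2, integral of d_v = 1.
    norm_u = 2.0 / quad(beta_shape, 0, 1, args=(0.5, 3.0))[0]
    norm_d = 1.0 / quad(beta_shape, 0, 1, args=(0.5, 4.0))[0]
    u_v = lambda x: norm_u * beta_shape(x, 0.5, 3.0)
    d_v = lambda x: norm_d * beta_shape(x, 0.5, 4.0)
    sea = lambda x: 0.2 * x**-0.8 * (1.0 - x)**7     # one light sea flavour (q = qbar)

    # Momentum carried by quarks and antiquarks: sum over flavours of integral of x f(x).
    mom_quarks = (quad(lambda x: x * (u_v(x) + d_v(x)), 0, 1)[0]
                  + 6 * quad(lambda x: x * sea(x), 0, 1)[0])   # 3 sea flavours x (q + qbar)

    # Fix the gluon normalisation so the total momentum sum rule holds ...
    g_shape = lambda x: x**-1.0 * (1.0 - x)**5
    norm_g = (1.0 - mom_quarks) / quad(lambda x: x * g_shape(x), 0, 1)[0]
    gluon = lambda x: norm_g * g_shape(x)

    # ... and verify it numerically: the total should come out to 1.
    total = mom_quarks + quad(lambda x: x * gluon(x), 0, 1)[0]
    print(f"quark momentum fraction: {mom_quarks:.3f}, total momentum sum: {total:.3f}")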

Next was the first lattice talk of the meeting, given by Huey-Wen Lin, who gave a review of the progress in lattice studies of nucleon structure. I think Huey-Wen gave a very nice example by comparing the computational and algorithmic progress with that in videogames (I'm not an expert there, but I think the examples shown were screenshots of Nethack versus some modern first-person shooter), and went on to explain the importance of controlling all systematic errors, in particular excited-state effects, before reviewing recent results on the tensor, scalar and axial charges and the electromagnetic form factors of the nucleon. As an outlook towards the current frontier, she presented the inclusion of disconnected diagrams and a new idea of obtaining PDFs from the lattice more directly rather than through their moments.

The next speaker was Robert D. McKeown with a review of JLab's Nuclear Science Programme. The CEBAF accelerator has been upgraded to 12 GeV, and a number of experiments (GlueX to search for gluonic excitations, MOLLER to study parity violation in Møller scattering, and SoLID to study SIDIS and PVDIS) are ready to be launched. A number of the planned experiments will be active in areas that I know are also under investigation by experimental colleagues in Mainz, such as a search for the "dark photon" and a study of the running of the Weinberg angle. Longer-term plans at JLab include the design of an electron-ion collider.

After a rather nice lunch, Tomofumi Nagae spoke about the hadron physics programme at J-PARC. In spite of major setbacks caused by the big earthquake and a later radiation accident, progress is being made. A search for the Θ+ pentaquark did not find a signal (which I personally do not find surprising, since the whole pentaquark episode is probably of more immediate long-term interest to historians and sociologists of science than to particle physicists), but could not completely exclude all of the discovery claims.

This was followed by a talk by Jonathan Miller of the MINERνA collaboration presenting their programme of probing nuclei with neutrinos. Major complications include the limited knowledge of the incoming neutrino flux and the fact that final-state interactions on the nuclear side may lead to one process mimicking another, making the modelling in event generators a key ingredient of understanding the data.

Next was a talk about short-range correlations in nuclei by Or Henn. Nucleons subject to short-range correlations must have high relative momenta, but a low center-of-mass momentum. The experimental studies are based on kicking a proton out of a nucleus with an electron, such that both the momentum transfer (from the incoming and outgoing electron) and the final momentum of the proton are known, and looking for a nucleon with a momentum close to minus the difference between those two (which must be the initial momentum of the knocked-out proton) coming out. The astonishing result is that at high momenta, neutron-proton pairs dominate (meaning that protons, being the minority, have a much larger chance of having high momenta) and are linked by a tensor force. Similar results are known from other two-component Fermi systems, such as ultracold atomic gases (which are of course many, many orders of magnitude less dense than nuclei).

After the coffee break, Heinz Clement spoke about dibaryons, specifically about the recently discovered d*(2380) resonance, which, taking all experimental results into account, may be interpreted as a ΔΔ bound state.

The last talk of the day was by André Walker-Loud, who reviewed the study of nucleon-nucleon interactions and nuclear structure on the lattice, starting with a very nice review of the motivations behind such studies, namely the facts that big-bang nucleosynthesis is very strongly dependent on the deuterium binding energy and the proton-neutron mass difference, and this fine-tuning problem needs to be understood from first principles. Besides, currently the best chance for discovering BSM physics seems once more to lie with low-energy high-precision experiments, and dark matter searches require good knowledge of nuclear structure to control their systematics. Scattering phase shifts are being studied through the Lüscher formula. Current state-of-the-art studies of bound multi-hadron systems are related to dibaryons, in particular the question of the existence of the H-dibaryon at the physical pion mass (note that the dineutron, certainly unbound in the real world, becomes bound at heavy enough pion masses), and three- and four-nucleon systems are beginning to become treatable, although the signal-to-noise problem gets worse as more baryons are added to a correlation function, and the number of contractions grows rapidly. Going beyond masses and binding energies, the new California Lattice Collaboration (CalLat) has preliminary results for hadronic parity violation in the two-nucleon system, albeit at a pion mass of 800 MeV.

by Georg v. Hippel (noreply@blogger.com) at March 03, 2015 02:38 PM

Clifford V. Johnson - Asymptotia

dublab at LAIH
Mark ("Frosty") McNeill gave us a great overview of the work of the dublab collective at last Friday's LAIH luncheon. As I said in my introduction:
... dublab shows up as part of the DNA of many of the most engaging live events around the City (at MOCA, LACMA, Barnsdall, the Hammer, the Getty, the Natural History Museum, the Hollywood Bowl… and so on), and dublab is available in its core form as a radio project any time you like if you want to listen online. [...] dublab is a "non-profit web radio collective devoted to the growth of  positive music, arts and culture."
Frosty is a co-founder of dublab, and he told us a bit about its history, activities, and their new wonderful project called "Sound Share LA" which will be launching soon: They are creating a multimedia archive of Los Angeles based [...] Click to continue reading this post

by Clifford at March 03, 2015 02:06 PM

Symmetrybreaking - Fermilab/SLAC

A telescope that tells you when to look up

The LSST system will alert scientists to changes in space in near-real time.

A massive digital camera will begin taking detailed snapshots from a mountaintop telescope in Chile in 2021. In just a few nights, the Large Synoptic Survey Telescope will amass more data than the Hubble Space Telescope gathered in its first 20 years of operation.

This unprecedented stream of images will trigger up to 10 million automated alerts each night, an average of about 10,000 per minute. The alerts will point out objects that appear to be changing in brightness, color or position—candidates for fast follow-up viewing using other telescopes.

To be ready for this astronomical flood of data, scientists are already working out the details of how to design the alert system to be widely and rapidly accessible.

“The number of alerts is far more than humans can filter manually,” says Jeff Kantor, LSST Data Management project manager. “Automated filters will be required to pick out the alerts of interest for any given scientist or project.”

The alerts will provide information on the properties of newly discovered asteroids, supernovae, gamma-ray bursts, galaxies and stars with variable brightness, and other short-lived phenomena, Kantor says.
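
As a cartoon of what such an automated filter might look like in practice, here is a short sketch; the alert fields and thresholds below are invented for illustration and are not the real LSST alert schema.

    # Mock alert stream; each alert is a dictionary with made-up fields.
    alerts = [
        {"id": 1, "kind": "supernova_candidate", "mag_change": 1.8, "ra": 150.1, "dec": -2.3},
        {"id": 2, "kind": "asteroid",            "mag_change": 0.1, "ra": 201.7, "dec": 11.0},
        {"id": 3, "kind": "variable_star",       "mag_change": 0.6, "ra": 88.2,  "dec": 45.9},
    ]

    def my_filter(alert, min_mag_change=1.0, kinds=("supernova_candidate",)):
        """Keep only the alerts a given science project cares about."""
        return alert["kind"] in kinds and alert["mag_change"] >= min_mag_change

    for alert in filter(my_filter, alerts):
        print(f"follow up alert {alert['id']} at RA={alert['ra']}, Dec={alert['dec']}")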

The alerts could come in the form of emails or other notifications on the Web or smartphone apps—and could be made accessible to citizen scientists as well, says Aaron Roodman, a SLAC National Accelerator Laboratory scientist working on the LSST camera.

Artwork by: Sandbox Studio, Chicago

“I think there actually will be fantastic citizen science coming out of this,” he says. “I think it will be possible for citizen scientists to create unique filters, identify new objects—I think it’s ripe for that. I think the immediacy of the data will be great.”

LSST’s camera will produce images with 400 times more pixels than those produced by the camera in the latest-model iPhone. It is designed to capture pairs of 15-second exposures before moving to the next position, recording subtle changes between the paired images and comparing them to previous images taken at the same position. LSST will cover the entire visible sky twice a week.

Alerts will be generated within about a minute of each snapshot, which is good news for people interested in studying ephemeral phenomena such as supernovae, says Alex Kim, a scientist at Lawrence Berkeley National Laboratory who is a member of the LSST collaboration.

“The very first light that you get from a supernova—that sharp flash—only lasts from minutes to days,” Kim says. “It’s very important to have an immediate response before that flash disappears.”

The alerts will be distributed by a common astronomical alert system like today’s Virtual Observatory Event distribution networks, says SLAC scientist Kian-Tat Lim, LSST data management system architect.

CalTech scientist Ashish Mahabal, a co-chair of the LSST transients and variables science working group, says that the alerts system will need to be ready well before LSST construction is complete. It will be tested through simulations and could borrow from alert systems designed for other surveys.

The system that analyzes images to generate the LSST alerts will need to be capable of making about 40 trillion calculations per second. Mahabal says a basic system will likely be in place in the next two or three years.

 

by Glenn Roberts Jr. at March 03, 2015 02:00 PM

CERN Bulletin

Lecture | CERN prepares its long-term future: a 100-km circular collider to follow the LHC? | CERN Globe | 11 March
Particle physics is a long-term field of research: the LHC was originally conceived in the 1980s, but did not start running until 25 years later. An accelerator unlike any other, it is now just at the start of a programme that is set to run for another 20 years.

While the LHC programme is already well defined for the next two decades, it is now time to look even further ahead, and so CERN is initiating an exploratory study for a future long-term project centred on a next-generation circular collider with a circumference of 80 to 100 kilometres. A worthy successor to the LHC, whose collision energies will reach 13 TeV in 2015, such an accelerator would allow particle physicists to push the boundaries of knowledge even further. The Future Circular Collider (FCC) programme will focus especially on studies for a hadron collider, like the LHC, capable of reaching unprecedented energies in the region of 100 TeV. Opening with an introduction to the LHC and its physics programme, this lecture will then focus on the feasibility of designing, building and operating a machine approaching 100 km in length and the biggest challenges that this would pose, as well as the different options for such a machine (proton-proton, electron-positron or electron-proton collisions). The lecture will be in French, accompanied by slides in English.

18:30-19:30: Talk: CERN prepares its future: a 100-km circular collider to follow the LHC?
19:30-20:00: Questions and answers
Speaker: Frédérick Bordry, CERN Director for Accelerators and Technology
Entrance is free, but registration is mandatory: http://iyl.eventbrite.com

As Director for Accelerators and Technology, Frédérick Bordry is in charge of the operation of the whole CERN accelerator complex, with a special focus on the LHC (Large Hadron Collider), and the development of post-LHC projects and technologies. He is a graduate of the École Nationale Supérieure d’Électronique, d’Électrotechnique, d’Informatique et d’Hydraulique de Toulouse (ENSEEIHT) and earned the titles of docteur-ingénieur and docteur ès sciences at the Institut National Polytechnique de Toulouse (INPT). He worked for ten years as a teaching researcher in both those institutes and later held a professorship for two years at the Université Fédérale de Santa Catarina, Florianópolis, Brazil (1979-1981). Since joining CERN in 1986, he has fulfilled several roles, most notably in accelerator design and energy conversion. Always a strong believer in the importance of international exchange in culture, politics and science, he has devoted time to reflecting on issues relating to education, research and multilingualism. He is also convinced of the importance of pooling financial and human resources, especially at the European level.

March 03, 2015 11:15 AM

Tommaso Dorigo - Scientificblogging

Recent Results From Super-Kamiokande

(The XVIth edition of "Neutrino Telescopes" is going on in Venice this week. The writeup below is from a talk by M.Nakahata at the morning session today. For more on the conference and the results shown and discussed there, see the conference blog.)

read more

by Tommaso Dorigo at March 03, 2015 10:20 AM

astrobites - astro-ph reader's digest

How did the Universe cool over time?

Title: Constraining the redshift evolution of the Cosmic Microwave Background black-body temperature with PLANCK data
Authors: I. Martino et al.
First Author’s Institution: Fisica Teorica, Universidad de Salamanca

While numerous cosmological models have been proposed to describe the early history and evolution of the Universe, the Big Bang model is by far in the best agreement with current observations. Different cosmological models predict different behaviors for the temperature evolution of the cosmic microwave background over time, and the Big Bang model predicts that the CMB should cool adiabatically via expansion (i.e. without the addition or removal of any heat). We can describe the “overall” temperature of the Universe by measuring the blackbody spectrum of the cosmic microwave background, as it is believed that the early Universe was in thermal equilibrium (i.e. at the same temperature) with these CMB photons. By measuring the CMB temperature evolution over a range of redshifts (or equivalently, over cosmic history), we can test the consistency of the prevailing Big Bang model and search for any deviations from adiabatic expansion that might suggest new additions to our model.

In this paper, the authors combine CMB data from the Planck mission and previously collected X-ray observations of galaxy clusters to obtain constraints on this adiabatic temperature change. Instead of measuring the CMB directly, the authors make use of the thermal Sunyaev-Zeldovich (tSZ) effect, in which high-energy electrons in the intracluster gas of a galaxy cluster boost the energy of CMB photons through electron-photon scattering. The authors use Planck data to subtract the background CMB signal from the X-ray emission of the galaxy clusters to measure the boost in CMB photon energy induced by these clusters. These galaxy clusters have known redshifts, and these redshift data, combined with the CMB temperature measurements, yield the history of CMB temperature changes over time.

The aim of this study is to measure the CMB temperature evolution over time, but this is indirectly done by first measuring the tSZ effect of galaxy clusters at various redshifts. The authors measure the CMB temperature shifts caused by the tSZ effect by subtracting the CMB background, modeling the frequency-dependence of the tSZ effect, and selecting the best-fitting model for each cluster. This tSZ effect measurement then yields the CMB temperature at a certain redshift. The resulting temperature evolution measurements are consistent with the CMB temperature evolving adiabatically over time, and are consistent with previous attempts to quantify this adiabatic cooling. Fig. 1 plots the redshift evolution of the inferred CMB temperature (scaled with respect to redshift and the current CMB temperature). The shaded region and blue points represent the measurements done in this paper, and this is consistent with the horizontal red line representing adiabatic evolution.
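
As a minimal sketch of the consistency test itself (not the authors' pipeline, and with invented data points), one can fit the usual parametrisation T(z) = T_0 (1+z)^{1-\beta}, in which adiabatic cooling corresponds to \beta = 0:

    import numpy as np

    T0 = 2.725  # K, present-day CMB temperature

    # Invented example measurements (z, T in K, error in K), for illustration only.
    z = np.array([0.10, 0.25, 0.40, 0.55, 0.70])
    T = np.array([3.01, 3.39, 3.80, 4.25, 4.61])
    T_err = np.array([0.08, 0.10, 0.12, 0.15, 0.18])

    # Linearise: log(T/T0) = (1 - beta) * log(1+z), then weighted least squares through the origin.
    x = np.log1p(z)
    y = np.log(T / T0)
    w = (T / T_err) ** 2                 # weights from propagating sigma_T to log T
    slope = np.sum(w * x * y) / np.sum(w * x * x)
    beta = 1.0 - slope
    print(f"fitted beta = {beta:+.3f} (adiabatic cooling corresponds to beta = 0)")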

Fig. 1: CMB temperature (normalized with respect to redshift and the present-day CMB temperature) plotted against redshift. The blue square points and shaded regions correspond to the measurements performed in this paper, which are in good agreement with the horizontal red line representing adiabatic cooling. The black dots are measurements from a previous study.

While this agreement is perhaps not too surprising given the spectacularly good predictions made by the Big Bang model, this kind of consistency check is important for maintaining our confidence in the Big Bang model and ruling out other potential cosmologies. For further validation, the authors plan to continue this analysis by expanding their sample of galaxy clusters to higher redshifts for additional consistency checks.

by Anson Lam at March 03, 2015 07:46 AM

March 02, 2015

Christian P. Robert - xi'an's og

Is Jeffreys’ prior unique?

“A striking characterisation showing the central importance of Fisher’s information in a differential framework is due to Cencov (1972), who shows that it is the only invariant Riemannian metric under symmetry conditions.” N. Polson, PhD Thesis, University of Nottingham, 1988

Following a discussion on Cross Validated, I wonder whether or not the affirmation holds that Jeffreys’ prior is the only prior construction rule that remains invariant under arbitrary (if smooth enough) reparameterisation. In the discussion, Paulo Marques mentioned Nikolaj Nikolaevič Čencov’s book, Statistical Decision Rules and Optimal Inference, a Russian book from 1972, of which I had not heard previously and which seems too theoretical [from Paulo’s comments] to explain why this rule would be the sole one. As I kept looking for Čencov’s references on the Web, I found Nick Polson’s thesis and the above quote. So maybe Nick could tell us more!

However, my uncertainty about the uniqueness of Jeffreys’ rule stems from the fact that, if I decide on a favourite or reference parametrisation (as Jeffreys indirectly does when selecting the parametrisation associated with a constant Fisher information) and on a prior derivation from the sampling distribution for this parametrisation, I have derived a parametrisation-invariant principle. Possibly silly and uninteresting from a Bayesian viewpoint, but nonetheless invariant.
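
For what it is worth, the invariance property at the heart of the question is easy to check numerically; below is a small sketch for the Bernoulli case (my own toy example), comparing the Jeffreys prior computed directly in the log-odds parametrisation with the prior obtained by transforming the Jeffreys prior in p. The two coincide, as they should:

    import numpy as np

    def fisher_info_p(p):
        return 1.0 / (p * (1.0 - p))          # Fisher information for one Bernoulli trial

    def jeffreys_p(p):
        return np.sqrt(fisher_info_p(p))      # unnormalised Jeffreys density in p

    # Reparametrise to the log-odds theta = log(p / (1-p)), so p = 1 / (1 + exp(-theta)).
    def jeffreys_theta_direct(theta):
        p = 1.0 / (1.0 + np.exp(-theta))
        # I(theta) = I(p) * (dp/dtheta)^2, with dp/dtheta = p(1-p)
        return np.sqrt(fisher_info_p(p) * (p * (1.0 - p)) ** 2)

    def jeffreys_theta_transformed(theta):
        p = 1.0 / (1.0 + np.exp(-theta))
        dp_dtheta = p * (1.0 - p)             # Jacobian of the change of variables
        return jeffreys_p(p) * dp_dtheta

    theta = np.linspace(-4, 4, 9)
    print(np.allclose(jeffreys_theta_direct(theta), jeffreys_theta_transformed(theta)))  # True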


Filed under: Books, Statistics, University life Tagged: cross validated, Harold Jeffreys, Jeffreys priors, NIck Polson, Nikolaj Nikolaevič Čencov, Russian mathematicians

by xi'an at March 02, 2015 11:15 PM

Emily Lakdawalla - The Planetary Society Blog

Understanding why our most Earth-like neighbor, Venus, is so different
Van Kane introduces us to EnVision—a proposed European mission to help improve our understanding of Venus.

March 02, 2015 06:43 PM

Lubos Motl - string vacua and pheno

Brian Greene's 14-hour audiobook
I admit that I have never bought an audiobook. And I don't even know what kind of devices or apps are able to play them. But most of us are able to play YouTube videos. And a user who is either a friend of Brian Greene or a pirate posted the full audiobook of "The Hidden Reality" to YouTube a month ago.

If you have spare 13 hours and 49 minutes today (and tomorrow, not to mention the day after tomorrow), here is the first 8.3 hours:

And when you complete this one, you should continue.

Here are the remaining 5.5 hours:

If you haven't tried the videos yet, you may be puzzled: who can the narrator be? Where in the world can you find someone who is able to read and flawlessly speak for 14 hours, without any need to eat, drink, use a toilet, or breathe in between? Actors don't have these physical abilities, do they?

Maybe the actors don't but some physicists do. Brian Greene has recorded the audiobook version of "The Hidden Reality" himself!

If you are as impressed as I am, you may go to the Amazon.com page at the top and register for a free 30-day trial of the "Audible" program which allows you to download two books for free. After 30 days, you may continue with "Audible" at $14.95 a month.

Alternatively, you will always be able to buy the audiobooks individually. "The Hidden Reality" audiobook is available via one click for $26.95.

I am amazed by this because whenever I record something, like my versions of songs or anything else, it takes at least 10 times more time to produce the thing than the final duration of the audio file. My guess is that it would take me something like 140 hours to record that and the quality wouldn't be anywhere close to Brian's recitation (not even in Czech).

by Luboš Motl (noreply@blogger.com) at March 02, 2015 06:42 PM

Sean Carroll - Preposterous Universe

Guest Post: An Interview with Jamie Bock of BICEP2

If you’re reading this you probably know about the BICEP2 experiment, a radio telescope at the South Pole that measured a particular polarization signal known as “B-modes” in the cosmic microwave background radiation. Cosmologists were very excited at the prospect that the B-modes were the imprint of gravitational waves originating from a period of inflation in the primordial universe; now, with more data from the Planck satellite, it seems plausible that the signal is mostly due to dust in our own galaxy. The measurements that the team reported were completely on-target, but our interpretation of them has changed — we’re still looking for direct evidence for or against inflation.

Here I’m very happy to publish an interview that was carried out with Jamie Bock, a professor of physics at Caltech and a senior research scientist at JPL, who is one of the leaders of the BICEP2 collaboration. It’s a unique look inside the workings of an incredibly challenging scientific effort.


New Results from BICEP2: An Interview with Jamie Bock

What does the new data from Planck tell you? What do you know now?

A scientific race has been under way for more than a decade among a dozen or so experiments trying to measure B-mode polarization, a telltale signature of gravitational waves produced from the time of inflation. Last March, BICEP2 reported a B-mode polarization signal, a twisty polarization pattern measured in a small patch of sky. The amplitude of the signal we measured was surprisingly large, exceeding what we expected for galactic emission. This implied we were seeing a large gravitational wave signal from inflation.

We ruled out galactic synchrotron emission, which comes from electrons spiraling in the magnetic field of the galaxy, using low-frequency data from the WMAP [Wilkinson Microwave Anisotropy Probe] satellite. But there were no data available on polarized galactic dust emission, and we had to use models. These models weren’t starting from zero; they were built on well-known maps of unpolarized dust emission, and, by and large, they predicted that polarized dust emission was a minor constituent of the total signal.

Obviously, the answer here is of great importance for cosmology, and we have always wanted a direct test of galactic emission using data in the same piece of sky so that we can test how much of the BICEP2 signal is cosmological, representing gravitational waves from inflation, and how much is from galactic dust. We did exactly that with galactic synchrotron emission from WMAP because the data were public. But with galactic dust emission, we were stuck, so we initiated a collaboration with the Planck satellite team to estimate and subtract polarized dust emission. Planck has the world’s best data on polarized emission from galactic dust, measured over the entire sky in multiple spectral bands. However, the polarized dust maps were only recently released.

On the other side, BICEP2 gives us the highest-sensitivity data available at 150 GHz to measure the CMB. Interestingly, the two measurements are stronger in combination. We get a big boost in sensitivity by putting them together. Also, the detectors for both projects were designed, built, and tested at Caltech and JPL, so I had a personal interest in seeing that these projects worked together. I’m glad to say the teams worked efficiently and harmoniously together.

What we found is that when we subtract the galaxy, we just see noise; no signal from the CMB is detectable. Formally we can say at least 40 percent of the total BICEP2 signal is dust and less than 60 percent is from inflation.

How do these new data shape your next steps in exploring the earliest moments of the universe?

It is the best we can do right now, but unfortunately the result with Planck is not a very strong test of a possible gravitational wave signal. This is because the process of subtracting galactic emission effectively adds more noise into the analysis, and that noise limits our conclusions. While the inflationary signal is less than 60 percent of the total, that is not terribly informative, leaving many open questions. For example, it is quite possible that the noise prevents us from seeing part of the signal that is cosmological. It is also possible that all of the BICEP2 signal comes from the galaxy. Unfortunately, we cannot say more because the data are simply not precise enough. Our ability to measure polarized galactic dust emission in particular is frustratingly limited.

Figure 1: Maps of CMB polarization produced by BICEP2 and Keck Array. The maps show the ‘E-mode’ polarization pattern, a signal from density variations in the CMB, not gravitational waves. The polarization is given by the length and direction of the lines, with a coloring to better show the sign and amplitude of the E-mode signal. The tapering toward the edges of the map is a result of how the instruments observed this region of sky. While the E-mode pattern is about 6 times brighter than the B-mode signal, it is still quite faint. Tiny variations of only 1 millionth of a degree kelvin are faithfully reproduced across these multiple measurements at 150 GHz, and in new Keck data at 95 GHz still under analysis. The very slight color shift visible between 150 and 95 GHz is due to the change in the beam size.

However, there is good news to report. In this analysis, we added new data obtained in 2012–13 from the Keck Array, an instrument with five telescopes and the successor to BICEP2 (see Fig. 1). These data are at the same frequency band as BICEP2—150 GHz—so while they don’t help subtract the galaxy, they do increase the total sensitivity. The Keck Array clearly detects the same signal detected by BICEP2. In fact, every test we can do shows the two are quite consistent, which demonstrates that we are doing these difficult measurements correctly (see Fig. 2). The BICEP2/Keck maps are also the best ever made, with enough sensitivity to detect signals that are a tiny fraction of the total.

Figure 2: A power spectrum of the B-mode polarization signal that plots the strength of the signal as a function of angular frequency. The data show a signal significantly above what is expected for a universe without gravitational waves, given by the red line. The excess peaks at angular scales of about 2 degrees. The independent measurements of BICEP2 and Keck Array shown in red and blue are consistent within the errors, and their combination is shown in black. Note the sets of points are slightly shifted along the x-axis to avoid overlaps.

In addition, Planck’s measurements over the whole sky show the polarized dust is fairly well behaved. For example, the polarized dust has nearly the same spectrum across the sky, so there is every reason to expect we can measure and remove dust cleanly.

To better subtract the galaxy, we need better data. We aren’t going to get more data from Planck because the mission has finished. The best way is to measure the dust ourselves by adding new spectral bands to our own instruments. We are well along in this process already. We added a second band to the Keck Array last year at 95 GHz and a third band this year at 220 GHz. We just installed the new BICEP3 instrument at 95 GHz at the South Pole (see Fig. 3). BICEP3 is a single telescope that will soon be as powerful as all five Keck Array telescopes put together. At 95 GHz, Keck and BICEP3 should surpass BICEP2’s 150 GHz sensitivity by the end of this year, and the two will be a very powerful combination indeed. If we switch the Keck Array entirely over to 220 GHz starting next year, we can get a third band to a similar depth.

Figure 3: BICEP3 installed and carrying out calibration measurements off a reflective mirror placed above the receiver. The instrument is housed within a conical reflective ground shield to minimize the brightness contrast between the warm earth and cold space. This picture was taken at the beginning of the winter season, with no physical access to the station for the next 8 months, when BICEP3 will conduct astronomical observations (Credit: Sam Harrison)

Finally, this January the SPIDER balloon experiment, which is also searching the CMB for evidence of inflation, completed its first flight, outfitted with comparable sensitivity at 95 and 150 GHz. Because SPIDER floats above the atmosphere (see Fig. 4), we can also measure the sky on larger spatial scales. This all adds up to make the coming years very exciting.

Figure 4: View of the earth and the edge of space, taken from an optical camera on the SPIDER gondola at float altitude shortly after launch. Clearly visible below is Ross Island, with volcanos Mt. Erebus and Mt. Terror and the McMurdo Antarctic base, the Royal Society mountain range to the left, and the edge of the Ross permanent ice shelf. (Credit: SPIDER team).

Why did you make the decision last March to release results? In retrospect, do you regret it?

We knew at the time that any news of a B-mode signal would cause a great stir. We started working on the BICEP2 data in 2010, and our standard for putting out the paper was that we were certain the measurements themselves were correct. It is important to point out that, throughout this episode, our measurements basically have not changed. As I said earlier, the initial BICEP2 measurement agrees with new data from the Keck Array, and both show the same signal. For all we know, the B-mode polarization signal measured by BICEP2 may contain a significant cosmological component—that’s what we need to find out.

The question really is, should we have waited until better data were available on galactic dust? Personally, I think we did the right thing. The field needed to be able to react to our data and test the results independently, as we did in our collaboration with Planck. This process hasn’t ended; it will continue with new data. Also, the searches for inflationary gravitational waves are influenced by these findings, and it is clear that all of the experiments in the field need to focus more resources on measuring the galaxy.

How confident are you that you will ultimately find conclusive evidence for primordial gravitational waves and the signature of cosmic inflation?

I don’t have an opinion about whether or not we will find a gravitational wave signal—that is why we are doing the measurement! But any result is so significant for cosmology that it has to be thoroughly tested by multiple groups. I am confident that the measurements we have made to date are robust, and the new data we need to subtract the galaxy more accurately are starting to pour forth. The immediate path forward is clear: we know how to make these measurements at 150 GHz, and we are already applying the same process to the new frequencies. Doing the measurements ourselves also means they are uniform so we understand all of the errors, which, in the end, are just as important.

What will it mean for our understanding of the universe if you don’t find the signal?

The goal of this program is to learn how inflation happened. Inflation requires matter-energy with an unusual repulsive property in order to rapidly expand the universe. The physics are almost certainly new and exotic, at energies too high to be accessed with terrestrial particle accelerators. CMB measurements are one of the few ways to get at the inflationary physics, and we need to squeeze them for all they are worth. A gravitational wave signal is very interesting because it tells us about the physical process behind inflation. A detection of the polarization signal at a high level means that certain models of inflation, perhaps along the lines of the models first developed, are a good explanation.

But here again is the real point: we also learn more about inflation if we can rule out polarization from gravitational waves. No detection at 5 percent or less of the total BICEP2 signal means that inflation is likely more complicated, perhaps involving multiple fields, although there are certainly other possibilities. Either way is a win, and we’ll find out more about what caused the birth of the universe 13.8 billion years ago.

Our team dedicated itself to the pursuit of inflationary polarization 15 years ago fully expecting a long and difficult journey. It is exciting, after all this work, to be at this stage where the polarization data are breaking into new ground, providing more information about gravitational waves than we learned before. The BICEP2 signal was a surprise, and its ultimate resolution is still a work in progress. The data we need to address these questions about inflation are within sight, and whatever the answers are, they are going to be interesting, so stay tuned.

by Sean Carroll at March 02, 2015 04:05 PM

Tommaso Dorigo - Scientificblogging

Francis Halzen On Cosmogenic Neutrinos
During the first afternoon session of the XVI Neutrino Telescopes conference (here is the conference blog, which contains a report of most of the lectures and posters as they are presented) Francis Halzen gave a very nice account of the discovery of cosmogenic neutrinos by the IceCube experiment, and its implications. Below I offer a writeup - apologizing to Halzen if I misinterpreted anything.

read more

by Tommaso Dorigo at March 02, 2015 04:00 PM

arXiv blog

The Curious Adventures of an Astronomer-Turned-Crowdfunder

Personal threats, legal challenges, and NASA’s objections were just a few of the hurdles Travis Metcalfe faced when he set up a crowdfunding website to help pay for his astronomical research.

If you want to name a star or buy a crater on the moon or own an acre on Mars, there are numerous websites that can help. The legal status of such “ownership” is far from clear but the services certainly allow for a little extraterrestrial fun.

March 02, 2015 03:54 PM

Quantum Diaries

30 reasons why you shouldn’t be a particle physicist

1. Some people think that physics is exciting.

(ATLAS)

2. They say “There’s nothing like the thrill of discovery”.

(ALICE Masterclass)

3. But that feeling won’t prepare you for the real world.

(CERN)

4. Discoveries only happen once. Do you really want to be in the room when they happen?

(CERN)

5. It’s not as though people queue overnight for the big discoveries.

(CERN)

6. CERN’s one of the biggest labs in the world. It’s like Disneyland, but for physicists.

7. The machines are among the most complex in the world.

(Francois Becler)

8. Seriously, don’t mess with those machines.

(CERN)

9. They’re not even nice to look at.

(Michael Hoch, Maximilien Brice)

10. The machines are so big you have to drive through the French countryside to get from one side to the other.

11. There’s nothing beautiful about the French countryside.

12. And there’s nothing cool about working on the world’s biggest computing grid with some of the most powerful supercomputers ever created.

(CERN)

13. A dataset so big you can’t fit it all in one place? Please.

(CERN)

14. So you can do your analysis from anywhere in the world? Lame!

(CERN Courier)

15. And our conferences always take place in strange places.

16. Who has time to travel?

17. Some people even take time away from the lab to go skiing.

(LHCb)

18. Physicists have been working on this stuff for decades. Nobody remembers any of these people:

(Wikipedia)

19. But particle physics is only about understanding the universe on the most fundamental level.

20. We don’t even have a well stocked library to help us when things get tough.

21. Or professors and experts to explain things to us.

22. And the public don’t care about what we do.

(CERN)

23. Even the press don’t pay any attention.

(Sean Treacy)

24. And who wants to contribute to the sum of human knowledge anyway?

(STFC)

25. There’s nothing exciting about being on shift in the Control Room either.

(ATLAS)

26. Or travelling the world to collaborate.

27. Or meeting hundreds of people, each with their own story and background.

28. You never get to meet any interesting people.

(CERN)

29. And physicists have no sense of humour.

30. Honestly, who would want to be a physicist?

(CMS)

References:

  • http://www.atlas.ch/news/2008/first-beam-and-event.html
  • http://opendata.cern.ch/collection/ALICE-Learning-Resources
  • http://cds.cern.ch/record/1406060?ln=en
  • http://cds.cern.ch/record/1459634
  • http://cds.cern.ch/record/1459503?ln=en
  • http://cds.cern.ch/record/1474902/files/
  • https://cds.cern.ch/record/1643071/
  • http://cds.cern.ch/record/1436153?ln=en
  • http://home.web.cern.ch/about/computing
  • http://home.web.cern.ch/about/computing/grid-software-middleware-hardware
  • http://cerncourier.com/cws/article/cern/52744
  • http://lhcb.web.cern.ch/lhcb/fun/FunNewPage/album-crozet-jan2012/index.html
  • http://en.wikipedia.org/wiki/Solvay_Conference
  • http://home.web.cern.ch/about/updates/2014/05/cern-celebrates-its-anniversary-its-neighbours
  • https://atlas-service-enews.web.cern.ch/atlas-service-enews/2009/news_09/news_beam09.php
  • http://the-sieve.com/2012/07/06/higgsmania/
  • http://www.stfc.ac.uk/imagelibrary/displayImage.aspx?p=593
  • http://press.highenergyphysicsmedia.com/ichep-2012-cern-announcment.html
  • http://cds.cern.ch/record/1965972?ln=en
  • http://cds.cern.ch/record/1363014/

by Aidan Randle-Conde at March 02, 2015 03:16 PM

Peter Coles - In the Dark

Uncertainty, Risk and Probability

Last week I attended a very interesting event on the Sussex University campus, the Annual Marie Jahoda Lecture, which was given this year by Prof. Helga Nowotny, a distinguished social scientist. The title of the talk was A social scientist in the land of scientific promise and the abstract was as follows:

Promises are a means of bringing the future into the present. Nowhere is this insight by Hannah Arendt more applicable than in science. Research is a long and inherently uncertain process. The question is open which of the multiple possible, probable or preferred futures will be actualized. Yet, scientific promises, vague as they may be, constitute a crucial link in the relationship between science and society. They form the core of the metaphorical ‘contract’ in which support for science is stipulated in exchange for the benefits that science will bring to the well-being and wealth of society. At present, the trend is to formalize scientific promises through impact assessment and measurement. Against this background, I will present three case studies from the life sciences: assisted reproductive technologies, stem cell research and the pending promise of personalized medicine. I will explore the uncertainty of promises as well as the cunning of uncertainty at work.

It was a fascinating and wide-ranging lecture that touched on many themes. I won’t try to comment on all of them, but just pick up on a couple that struck me from my own perspective as a physicist. One was the increasing aversion to risk demonstrated by research funding agencies, such as the European Research Council, which she helped set up but described in the lecture as “a clash between a culture of trust and a culture of control”. This will ring true to any scientist applying for grants even in “blue skies” disciplines such as astronomy: we tend to trust our peers, who have some control over funding decisions, but the machinery of control from above gets stronger every day. Milestones and deliverables are everything. Sometimes I think in order to get funding you have to be so confident of the outcomes of your research that you have to have already done it, in which case funding isn’t even necessary. The importance of extremely speculative research is rarely recognized, although that is where there is the greatest potential for truly revolutionary breakthroughs.

Another theme that struck me was the role of uncertainty and risk. This grabbed my attention because I’ve actually written a book about uncertainty in the physical sciences. In her lecture, Prof. Nowotny referred to the definition (which was quite new to me) of these two terms by Frank Hyneman Knight in a book on economics called Risk, Uncertainty and Profit. The distinction made there is that “risk” is “randomness” with “knowable probabilities”, whereas “uncertainty” involves “randomness” with “unknowable probabilities”. I don’t like these definitions at all. For one thing they both involve a reference to “randomness”, a word which I don’t know how to define anyway; I’d be much happier to use “unpredictability”. Even more importantly, perhaps, I find the distinction between “knowable” and “unknowable” probabilities very problematic. One always knows something about a probability distribution, even if that something means that the distribution has to be very broad. And in any case these definitions imply that the probabilities concerned are “out there”, rather than being statements about a state of knowledge (or lack thereof). Sometimes we know what we know and sometimes we don’t, but there are more than two possibilities. As the great American philosopher and social scientist Donald Rumsfeld (Shurely Shome Mishtake? Ed) put it:

“…as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns – the ones we don’t know we don’t know.”

There may be a proper Bayesian formulation of the distinction between “risk” and “uncertainty” that involves a transition between prior-dominated (uncertain) and posterior-dominated (risky), but basically I don’t see any qualitative difference between the two from such a perspective.
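
As a toy illustration of that transition (my own example, nothing from Knight or from the lecture), a Beta-Binomial posterior is essentially the prior when data are scarce and concentrates around the observed frequency when data are plentiful:

    from scipy import stats

    # Deliberately vague flat Beta(1,1) prior on an unknown success probability.
    prior_a, prior_b = 1.0, 1.0

    for n, k in [(5, 2), (50, 20), (5000, 2000)]:   # trials, successes
        post = stats.beta(prior_a + k, prior_b + n - k)   # conjugate Beta posterior
        lo, hi = post.interval(0.95)
        print(f"n={n:5d}: posterior mean {post.mean():.3f}, 95% interval ({lo:.3f}, {hi:.3f})")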

Anyway, it was a very interesting lecture that differed from many talks I’ve attended about the sociology of science in that the speaker clearly understood a lot about how science actually works. The Director of the Science Policy Research Unit invited the Heads of the Science Schools (including myself) to dinner with the speaker afterwards, and that led to the generation of many interesting ideas about how we (I mean scientists and social scientists) might work better together in the future, something we really need to do.


by telescoper at March 02, 2015 01:25 PM

Christian P. Robert - xi'an's og

market static

[Heard in the local market, while queuing for cheese:]

– You took too much!

– Maybe, but remember your sister is staying for two days.

– My sister…, as usual, she will take a big serving and leave half of it!

– Yes, but she will make sure to finish the bottle of wine!


Filed under: Kids, Travel Tagged: farmers' market, métro static

by xi'an at March 02, 2015 01:18 PM

Tommaso Dorigo - Scientificblogging

Neutrino Physics: Poster Excerpts from Neutel XVI
The XVI edition of "Neutrino Telescopes" is about to start in Venice today. In the meantime, I have started to publish in the conference blog a few excerpts of the posters that compete for the "best poster award" at the conference this week. You might be interested to check them out:

read more

by Tommaso Dorigo at March 02, 2015 12:11 PM

Peter Coles - In the Dark

Poll of Polls now updated to 27-02-15 – no real change from last week

telescoper:

Just thought I’d reblog this to show how close it seems the May 2015 General Election will be. The situation with respect to seats is even more complex. It looks like Labour will lose many of their seats in Scotland to the SNP, but the Conservatives will probably only lose a handful to UKIP.

It looks to me that another hung Parliament is on the cards, so coalitions of either Con+Lib+UKIP or Lab+SNP+Lib are distinct possibilities.

Originally posted on More Known Than Proven:

I’ve updated my “Poll of Polls” to include 13 more polls that were carried out since I did my last graph. The graphs now include the Greens as I now have data for them too.

Overall this Poll of Polls shows no real change from last week.

If you want to download the spreadsheet that did this analysis go here. If you want to understand the methodology behind the “Poll of Polls” click here and scroll down to the bit that gives the description.


by telescoper at March 02, 2015 10:22 AM

March 01, 2015

Christian P. Robert - xi'an's og

trans-dimensional nested sampling and a few planets

This morning, in the train to Dauphine (a train that was even more delayed than usual!), I read a recent arXival of Brendon Brewer and Courtney Donovan. Entitled Fast Bayesian inference for exoplanet discovery in radial velocity data, the paper suggests associating Matthew Stephens’ (2000) birth-and-death MCMC approach with nested sampling to infer the number N of exoplanets in an exoplanetary system. The paper is somewhat sparse in its description of the suggested approach, but states that the birth-and-death moves involve adding a planet with parameters simulated from the prior and removing a planet at random, both being accepted under a likelihood constraint associated with nested sampling. I wonder if this actually is the birth-and-death version of Peter Green’s (1995) RJMCMC rather than the continuous-time birth-and-death process version of Matthew’s…
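
For concreteness, a schematic version of such a move might look like the sketch below; this is my own reading of the description rather than the authors' code, the helper callables (log_likelihood, draw_planet_from_prior) are placeholders, and the bookkeeping needed for exact detailed balance on N is glossed over:

    import random

    def birth_death_move(planets, log_likelihood, draw_planet_from_prior, logL_min, n_max=15):
        """One birth/death proposal accepted under a nested-sampling likelihood constraint."""
        proposal = list(planets)
        if random.random() < 0.5 and len(proposal) < n_max:
            proposal.append(draw_planet_from_prior())        # birth: parameters drawn from the prior
        elif proposal:
            proposal.pop(random.randrange(len(proposal)))    # death: remove a planet at random
        else:
            return planets                                   # nothing to remove
        # Hard constraint from nested sampling: keep the move only above the current threshold.
        return proposal if log_likelihood(proposal) > logL_min else planets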

“The traditional approach to inferring N also contradicts fundamental ideas in Bayesian computation. Imagine we are trying to compute the posterior distribution for a parameter a in the presence of a nuisance parameter b. This is usually solved by exploring the joint posterior for a and b, and then only looking at the generated values of a. Nobody would suggest the wasteful alternative of using a discrete grid of possible a values and doing an entire Nested Sampling run for each, to get the marginal likelihood as a function of a.”

This criticism is receivable when there is a huge number of possible values of N, even though I see no fundamental contradiction with my ideas about Bayesian computation. However, it is more debatable when there are only a few possible values for N, given that the exploration of the augmented space by an RJMCMC algorithm is often very inefficient, in particular when the proposed parameters are generated from the prior. All the more so when nested sampling is involved and simulations are run under the likelihood constraint! In the astronomy examples given in the paper, N never exceeds 15… Furthermore, by merging all N’s together, it is unclear how the evidences associated with the various values of N can be computed. At least, those are not reported in the paper.

The paper also omits to provide the likelihood function, so I do not completely understand where “label switching” occurs therein. My first impression was that this is not a mixture model. However, if the observed signal (from an exoplanetary system) is the sum of N signals corresponding to N planets, this makes more sense.
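Since the paper is sparse on the details, here is a minimal Python sketch of what one step of such a constrained birth-or-death move might look like; the function names (`log_likelihood`, `sample_planet_prior`) and the 50/50 choice of move are my own assumptions rather than the authors' code, and a real implementation would also need the usual reversible-jump bookkeeping with respect to the prior on N.

```python
import random

def birth_death_step(planets, log_likelihood, sample_planet_prior, loglik_threshold):
    """One trans-dimensional move under a nested-sampling likelihood constraint.

    `planets` is a list of per-planet parameter vectors. The proposal either adds a
    planet drawn from the prior (birth) or deletes one chosen at random (death); the
    move is kept only if the new configuration stays above the constraint L > L*.
    """
    proposal = list(planets)
    if not proposal or random.random() < 0.5:
        proposal.append(sample_planet_prior())          # birth: parameters from the prior
    else:
        proposal.pop(random.randrange(len(proposal)))   # death: remove a planet at random
    return proposal if log_likelihood(proposal) > loglik_threshold else planets
```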


Filed under: Books, Statistics, Travel, University life Tagged: birth-and-death process, Chamonix, exoplanet, label switching, métro, nested sampling, Paris, RER B, reversible jump, Université Paris Dauphine

by xi'an at March 01, 2015 11:15 PM

John Baez - Azimuth

Visual Insight

I have another blog, called Visual Insight. Over here, our focus is on applying science to help save the planet. Over there, I try to make the beauty of pure mathematics visible to the naked eye.

I’m always looking for great images, so if you know about one, please tell me about it! If not, you may still enjoy taking a look.

Here are three of my favorite images from that blog, and a bit about the people who created them.

I suspect that these images, and many more on Visual Insight, are all just different glimpses of the same big structure. I have a rough idea what that structure is. Sometimes I dream of a computer program that would let you tour the whole thing. Unfortunately, a lot of it lives in more than 3 dimensions.

Less ambitiously, I sometimes dream of teaming up with lots of mathematicians and creating a gorgeous coffee-table book about this stuff.

 

Schmidt arrangement of the Eisenstein integers

 

Schmidt Arrangement of the Eisenstein Integers - Katherine Stange

This picture drawn by Katherine Stange shows what happens when we apply fractional linear transformations

z \mapsto \frac{a z + b}{c z + d}

to the real line sitting in the complex plane, where a,b,c,d are Eisenstein integers: that is, complex numbers of the form

m + n \omega , \qquad \omega = \frac{-1 + \sqrt{-3}}{2}

where m,n are integers. The result is a complicated set of circles and lines called the ‘Schmidt arrangement’ of the Eisenstein integers. For more details go here.
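Out of curiosity, here is a rough Python/matplotlib toy (my own sketch, not Katherine Stange's code) hinting at how such a picture arises: draw the image of the real line under many Möbius maps whose coefficients are small Eisenstein integers. The genuine Schmidt arrangement restricts the maps further (essentially to determinant-one matrices over the ring), whereas this sketch only demands invertibility, so it gives a flavour of the arrangement rather than the real thing.

```python
import numpy as np
import matplotlib.pyplot as plt

omega = (-1 + 1j * np.sqrt(3)) / 2                      # primitive cube root of unity
eisenstein = np.array([m + n * omega for m in range(-2, 3) for n in range(-2, 3)])

# dense sample of the real line (tan spreads the points out to large |t|)
t = np.tan(np.linspace(-np.pi / 2 + 1e-3, np.pi / 2 - 1e-3, 600))

rng = np.random.default_rng(0)
for _ in range(300):
    a, b, c, d = rng.choice(eisenstein, size=4)
    if abs(a * d - b * c) < 1e-9:                       # skip non-invertible maps
        continue
    w = (a * t + b) / (c * t + d)                       # image of the real line: a circle or line
    plt.plot(w.real, w.imag, lw=0.3, color="k")

plt.gca().set_aspect("equal")
plt.xlim(-3, 3); plt.ylim(-3, 3)
plt.show()
```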

Katherine Stange did her Ph.D. with Joseph H. Silverman, an expert on elliptic curves at Brown University. Now she is an assistant professor at the University of Colorado, Boulder. She works on arithmetic geometry, elliptic curves, algebraic and integer sequences, cryptography, arithmetic dynamics, Apollonian circle packings, and game theory.

 

{7,3,3} honeycomb


This is the {7,3,3} honeycomb as drawn by Danny Calegari. The {7,3,3} honeycomb is built of regular heptagons in 3-dimensional hyperbolic space. It’s made of infinite sheets of regular heptagons in which 3 heptagons meet at each vertex. 3 such sheets meet at each edge of each heptagon, explaining the second ‘3’ in the symbol {7,3,3}.

The 3-dimensional regions bounded by these sheets are unbounded: they go off to infinity. They show up as holes here. In this image, hyperbolic space has been compressed down to an open ball using the so-called Poincaré ball model. For more details, go here.

Danny Calegari did his Ph.D. work with Andrew Casson and William Thurston on foliations of three-dimensional manifolds. Now he’s a professor at the University of Chicago, and he works on these and related topics, especially geometric group theory.

 

{7,3,3} honeycomb meets the plane at infinity

This picture, by Roice Nelson, is another view of the {7,3,3} honeycomb. It shows the ‘boundary’ of this honeycomb—that is, the set of points on the surface of the Poincaré ball that are limits of points in the {7,3,3} honeycomb.

Roice Nelson used stereographic projection to draw part of the surface of the Poincaré ball as a plane. The circles here are holes, not contained in the boundary of the {7,3,3} honeycomb. There are infinitely many holes, and the actual boundary, the region left over, is a fractal with area zero. The white region on the outside of the picture is yet another hole. For more details, and a different version of this picture, go here.

Roice Nelson is a software developer for a flight data analysis company. There’s a good chance the data recorded on the airplane from your last flight moved through one of his systems! He enjoys motorcycling and recreational mathematics, he has a blog with lots of articles about geometry, and he makes plastic models of interesting geometrical objects using a 3d printer.



by John Baez at March 01, 2015 10:46 PM

Jester - Resonaances

Weekend Plot: Bs mixing phase update
Today's featured plot was released last week by the LHCb collaboration:

It shows the CP violating phase in Bs meson mixing, denoted as φs,  versus the difference of the decay widths between the two Bs meson eigenstates. The interest in φs comes from the fact that it's  one of the precious observables that 1) is allowed by the symmetries of the Standard Model, 2) is severely suppressed due to the CKM structure of flavor violation in the Standard Model. Such observables are a great place to look for new physics (other observables in this family include Bs/Bd→μμ, K→πνν, ...). New particles, even too heavy to be produced directly at the LHC, could produce measurable contributions to φs as long as they don't respect the Standard Model flavor structure. For example, a new force carrier with a mass as large as 100-1000 TeV and order 1 flavor- and CP-violating coupling to b and s quarks would be visible given the current experimental precision. Similarly, loops of supersymmetric particles with 10 TeV masses could show up, again if the flavor structure in the superpartner sector is not aligned with that in the  Standard Model.

The phase φs can be measured in certain decays of neutral Bs mesons where the process involves an interference of direct decays and decays through oscillation into the anti-Bs meson. Several years ago measurements at Tevatron's D0 and CDF experiments suggested a large new physics contribution. The mild excess has gone away since, like many other such hints.  The latest value quoted by LHCb is φs = - 0.010 ± 0.040, which combines earlier measurements of the Bs → J/ψ π+ π- and  Bs → Ds+ Ds- decays with  the brand new measurement of the Bs → J/ψ K+ K- decay. The experimental precision is already comparable to the Standard Model prediction of φs = - 0.036. Further progress is still possible, as the Standard Model prediction can be computed to a few percent accuracy.  But the room for new physics here is getting tighter and tighter.
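As a quick back-of-the-envelope illustration of how tight that room is, one can compare the quoted combination with the Standard Model value assuming a Gaussian error (the numbers are taken from the post; the few-percent theory uncertainty is neglected):

```python
# Rough compatibility check between the LHCb combination and the SM prediction.
phi_s_meas, sigma_meas = -0.010, 0.040   # LHCb combined value and uncertainty [rad]
phi_s_sm = -0.036                        # Standard Model prediction [rad]

pull = (phi_s_meas - phi_s_sm) / sigma_meas
print(f"difference = {phi_s_meas - phi_s_sm:+.3f} rad, i.e. {pull:+.2f} sigma")
# roughly +0.65 sigma: fully consistent with the Standard Model
```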

by Jester (noreply@blogger.com) at March 01, 2015 11:23 AM

February 28, 2015

Tommaso Dorigo - Scientificblogging

Miscellanea
This week I was traveling in Belgium so my blogging activities have been scarce. Back home, I will resume with serious articles soon (with the XVI Neutrino Telescopes conference next week, there will be a lot to report on!). In the meantime, here's a list of short news you might care about as an observer of progress in particle physics research and related topics.

read more

by Tommaso Dorigo at February 28, 2015 11:07 AM

Geraint Lewis - Cosmic Horizons

Shooting relativistic fish in a rational barrel
I need to take a breather from grant writing, which is consuming almost every waking hour in between all of the other things that I still need to do. So see this post as a cathartic exercise.

What makes a scientist? Is it the qualification? What you do day-to-day? The associations and societies to which you belong? I think a unique definition may be impossible as there is a continuum of properties of scientists. This makes it a little tricky for the lay-person to distinguish "real science" from "fringe science" (but, in all honesty, the distinction between these two is often not particularly clear cut).

One thing that science (and many other fields) do is have meetings, conferences and workshops to discuss their latest results. Some people seem to spend their lives flitting between exotic locations essentially presenting the same talk to almost the same audience, but all scientists probably attend a conference or two per year.

In one of my own fields, namely cosmology, there are lots of conferences per year. But accompanying these there is another set of conferences going on, also on cosmology and often including discussions of gravity, particle physics, and the power of electricity in the Universe. At these meetings, the words "rational" and "logical" are bandied about, and it is clear that the people attending think that the great mass of astronomers and physicists have gotten it all wrong, are deluded, and are colluding to keep the truth from the public for some bizarre agenda - some sort of worship of Einstein and "mathemagics" (I snorted with laughter when I heard this).

If I am being paid to lie to the public, I would like to point out that my cheque has not arrived and unless it does shortly I will go to the papers with a "tell all"!!

These are not a new phenomenon, but they were often in the shadows. Now, of course, with the internet, anyone can see these conferences in action, with lots of youtube clips and lectures.

Is there any use for such videos? I think so, as, for the student of physics, they present an excellent place to test one's knowledge by identifying just where the presenters are straying off the path.

A brief search of youtube will turn up talks that point out that black holes cannot exist because the vacuum condition \(T_{\mu\nu} = 0\) is the starting point for the derivation of the Schwarzschild solution.

Now, if you are not really familiar with the mathematics of relativity, this might look quite convincing. The key point is Einstein's field equation

\(G_{\mu\nu} = \frac{8 \pi G}{c^4} T_{\mu\nu}\)

Roughly speaking, this says that space-time geometry (left-hand side) is related to the matter and energy density (right-hand side), and you calculate the Schwarzschild geometry for a black hole by setting the right-hand side equal to zero.

Now, with the right-hand side equal to zero, that means there is no energy and mass, and the conclusion in the video is that there is no source, nothing to produce the bending of space-time and hence the effects of gravity. So, have the physicists been pulling the wool over everyone's eyes for almost 100 years?

Now, a university level student may not have done relativity yet, but it should be simple to see the flaw in this argument. And, to do this, we can use the wonderful world of classical mechanics.

In classical physics, where gravity is a force and we deal with potentials, we have a similar equation to the relativistic equation above. It's known as Poisson's equation:

\(\nabla^2 \Phi = 4 \pi G \rho\)

The left-hand side is built from derivatives of the gravitational potential, whereas the right-hand side is some constants (including Newton's gravitational constant, G) and the density, ρ.

I think everyone is happy with this equation. Now, one thing you calculate early on in gravitational physics is that the gravitational potential outside of a massive spherical object is given by

\(\Phi = -\frac{G M}{r}\)

Note that we are talking about the potential outside of the spherical body (the V and Φ symbols are meant to be the same thing). So, if we plug this potential into Poisson's equation, does it give us a mass distribution which is spherical?

Now, Poisson's equation can look a little intimidating, but let's recast the potential in Cartesian coordinates. Then it looks like this:

\(\Phi = -\frac{G M}{\sqrt{x^2 + y^2 + z^2}}\)

Ugh! Does that make it any easier? Yes, let's just simply plug it into Wolfram Alpha to do the hard work. So, the derivatives have an x-part, y-part and z-part - here's the x-part:

\(\frac{\partial^2 \Phi}{\partial x^2} = G M \, \frac{y^2 + z^2 - 2 x^2}{(x^2 + y^2 + z^2)^{5/2}}\)

Again, if you are a mathphobe, this is not much better, but let's add the y- and z-parts.

After all that, the result is zero! Zilch! Nothing! This must mean that Poisson's equation for this potential is

\(\nabla^2 \Phi = 4 \pi G \rho = 0\)

So, the density is equal to zero. Where's the mass that produces the gravitational field? This is the same as the apparent problem with relativity. What Poisson's equation tells us is that the derivatives of the potential AT A POINT are related to the density AT THAT POINT!
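If you would rather not trust Wolfram Alpha, the same check takes a few lines of sympy (a sketch of the calculation described above, with G and M treated as symbols):

```python
import sympy as sp

x, y, z, G, M = sp.symbols('x y z G M', positive=True)
Phi = -G * M / sp.sqrt(x**2 + y**2 + z**2)        # point-mass potential, valid outside the body

laplacian = sum(sp.diff(Phi, v, 2) for v in (x, y, z))
print(sp.simplify(laplacian))                     # prints 0: vacuum, so the density there is zero
```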

Now, remember these are derivatives, and so the potential can have a whole bunch of shapes at that point, as long as the derivatives still hold. One of these, of course, is there being no mass there and so no gravitational potential at all, but any vacuum, with no mass, will obey the Poisson = 0 equation, including the potential outside of any body (the one used in this example relied on a spherical source).

So, the relativistic version is that the properties of the space-time curvature AT A POINT are related to the mass and energy AT THAT POINT. A flat space-time is produced when there is no mass and energy anywhere, and so has \(G_{\mu\nu} = 0\), but so does any point in a vacuum - and that does not mean that the space-time at that point is not curved (or that there is no gravity).

Anyway, I got that off my chest, and my Discovery Project submitted, but now it's time to get on with a LIEF application! 

by Cusp (noreply@blogger.com) at February 28, 2015 03:30 AM

February 27, 2015

arXiv blog

The Emerging Challenge of Augmenting Virtual Worlds With Physical Reality

If you want to interact with real world objects while immersed in a virtual reality, how do you do it?


Augmented reality provides a live view of the real world with computer generated elements superimposed. Pilots have long used head-up displays to access air speed data and other parameters while they fly. Some smartphone cameras can superimpose computer-generated characters on to the view of the real world. And emerging technologies such as Google Glass aim to superimpose useful information on to a real world view, such as navigation directions and personal data.

February 27, 2015 08:56 PM

The n-Category Cafe

Concepts of Sameness (Part 4)

This time I’d like to think about three different approaches to ‘defining equality’, or more generally, introducing equality in formal systems of mathematics.

These will be taken from old-fashioned logic — before computer science, category theory or homotopy theory started exerting their influence. Eventually I want to compare these to more modern treatments.

If you know other interesting ‘old-fashioned’ approaches to equality, please tell me!

The equals sign is surprisingly new. It was never used by the ancient Babylonians, Egyptians or Greeks. It seems to originate in 1557, in Robert Recorde’s book The Whetstone of Witte. If so, we actually know what the first equation looked like:

As you can see, the equals sign was much longer back then! He used parallel lines “because no two things can be more equal.”

Formalizing the concept of equality has raised many questions. Bertrand Russell published The Principles of Mathematics [R] in 1903. Not to be confused with the Principia Mathematica, this is where he introduced Russell’s paradox. In it, he wrote:

identity, an objector may urge, cannot be anything at all: two terms plainly are not identical, and one term cannot be, for what is it identical with?

In his Tractatus, Wittgenstein [W] voiced a similar concern:

Roughly speaking: to say of two things that they are identical is nonsense, and to say of one thing that it is identical with itself is to say nothing.

These may seem like silly objections, since equations obviously do something useful. The question is: precisely what?

Instead of tackling that head-on, I’ll start by recalling three related approaches to equality in the pre-categorical mathematical literature.

The indiscernibility of identicals

The principle of indiscernibility of identicals says that equal things have the same properties. We can formulate it as an axiom in second-order logic, where we’re allowed to quantify over predicates \(P\):

\[ \forall x \forall y \, [x = y \;\implies\; \forall P \, [P(x) \;\iff\; P(y)]] \]

We can also formulate it as an axiom schema in 1st-order logic, where it’s sometimes called substitution for formulas. This is sometimes written as follows:

For any variables \(x, y\) and any formula \(\phi\), if \(\phi'\) is obtained by replacing any number of free occurrences of \(x\) in \(\phi\) with \(y\), such that these remain free occurrences of \(y\), then

\[ x = y \;\implies\; [\phi \;\implies\; \phi'] \]

I think we can replace this with the prettier

\[ x = y \;\implies\; [\phi \;\iff\; \phi'] \]

without changing the strength of the schema. Right?

We cannot derive reflexivity, symmetry and transitivity of equality from the indiscernibility of identicals. So, this principle does not capture all our usual ideas about equality. However, as shown last time, we can derive symmetry and transitivity from this principle together with reflexivity. This uses an interesting form of argument where we take “being equal to \(z\)” as one of the predicates (or formulas) to which we apply the principle. There’s something curiously self-referential about this. It’s not illegitimate, but it’s curious.
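For what it’s worth, the argument is easy to formalize. Here is a small Lean 4 sketch (my own, just to show the shape of the derivation, with the schema stated over predicates on a fixed type):

```lean
-- Indiscernibility of identicals, as a schema over predicates P.
theorem indisc {α : Type} (P : α → Prop) {x y : α} (h : x = y) : P x ↔ P y := by
  subst h; exact Iff.rfl

-- Symmetry: apply the schema to the predicate "is equal to x", then use reflexivity.
theorem symm_of_indisc {α : Type} {x y : α} (h : x = y) : y = x :=
  (indisc (fun z => z = x) h).mp rfl

-- Transitivity: apply the schema to the predicate "x is equal to _".
theorem trans_of_indisc {α : Type} {x y z : α} (hxy : x = y) (hyz : y = z) : x = z :=
  (indisc (fun w => x = w) hyz).mp hxy
```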

The identity of indiscernibles

Leibniz [L] is often credited with formulating a converse principle, the identity of indiscernibles. This says that things with all the same properties are equal. Again we can write it as a second-order axiom:

\[ \forall x \forall y \, [\forall P \, [P(x) \;\iff\; P(y)] \;\implies\; x = y] \]

or a first-order axiom schema.

We can go further if we take the indiscernibility of identicals and identity of indiscernibles together as a package:

\[ \forall x \forall y \, [\forall P \, [P(x) \;\iff\; P(y)] \;\iff\; x = y] \]

This is often called the Leibniz law. It says an entity is determined by the collection of predicates that hold of that entity. Entities don’t have mysterious ‘essences’ that determine their individuality: they are completely known by their properties, so if two entities have all the same properties they must be the same.

This principle does imply reflexivity, symmetry and transitivity of equality. They follow from the corresponding properties of \(\iff\) in a satisfying way. Of course, if we were wondering why equality has these three properties, we are now led to wonder the same thing about the biconditional \(\iff\). But this counts as progress: it’s a step toward ‘logicizing’ mathematics, or at least connecting \(=\) firmly to \(\iff\).

Apparently Russell and Whitehead used a second-order version of the Leibniz law to define equality in the Principia Mathematica [RW], while Kalish and Montague [KL] present it as a first-order schema. I don’t know the whole history of such attempts.

When you actually look to see where Leibniz formulated this principle, it’s a bit surprising. He formulated it in the contrapositive form, he described it as a ‘paradox’, and most surprisingly, it’s embedded as a brief remark in a passage that would be hair-curling for many contemporary rationalists. It’s in his Discourse on Metaphysics, a treatise written in 1686:

Thus Alexander the Great’s kinghood is an abstraction from the subject, and so is not determinate enough to pick out an individual, and doesn’t involve the other qualities of Alexander or everything that the notion of that prince includes; whereas God, who sees the individual notion or ‘thisness’ of Alexander, sees in it at the same time the basis and the reason for all the predicates that can truly be said to belong to him, such as for example that he would conquer Darius and Porus, even to the extent of knowing a priori (and not by experience) whether he died a natural death or by poison — which we can know only from history. Furthermore, if we bear in mind the interconnectedness of things, we can say that Alexander’s soul contains for all time traces of everything that did and signs of everything that will happen to him — and even marks of everything that happens in the universe, although it is only God who can recognise them all.

Several considerable paradoxes follow from this, amongst others that it is never true that two substances are entirely alike, differing only in being two rather than one. It also follows that a substance cannot begin except by creation, nor come to an end except by annihilation; and because one substance can’t be destroyed by being split up, or brought into existence by the assembling of parts, in the natural course of events the number of substances remains the same, although substances are often transformed. Moreover, each substance is like a whole world, and like a mirror of God, or indeed of the whole universe, which each substance expresses in its own fashion — rather as the same town looks different according to the position from which it is viewed. In a way, then, the universe is multiplied as many times as there are substances, and in the same way the glory of God is magnified by so many quite different representations of his work.

(Emphasis mine — you have to look closely to find the principle of identity of indiscernibles, because it goes by so quickly!)

There have been a number of objections to the Leibniz law over the years. I want to mention one that might best be handled using some category theory. In 1952, Max Black [B] claimed that in a symmetrical universe with empty space containing only two symmetrical spheres of the same size, the two spheres are two distinct objects even though they have all their properties in common.

As Black admits, this problem only shows up in a ‘relational’ theory of geometry, where we can’t say that the spheres have different positions — e.g., one centered at the point \((x,y,z)\), the other centered at \((-x,-y,-z)\) — but only speak of their position relative to one another. This sort of theory is certainly possible, and it seems to be important in physics. But I believe it can be adequately formulated only with the help of some category theory. In the situation described by Black, I think we should say the spheres are not equal but isomorphic.

As widely noted, general relativity also pushes for a relational approach to geometry. Gauge theory, also, raises the issue of whether indistinguishable physical situations should be treated as equal or merely isomorphic. I believe the mathematics points us strongly in the latter direction.

A related issue shows up in quantum mechanics, where electrons are considered indistinguishable (in a certain sense), yet there can be a number of electrons in a box — not just one.

But I will discuss such issues later.

Extensionality

In traditional set theory we try to use sets as a substitute for predicates, saying \(x \in S\) as a substitute for \(P(x)\). This lets us keep our logic first-order and quantify over sets — often in a universe where everything is a set — as a substitute for quantifying over predicates. Of course there’s a glitch: Russell’s paradox shows we get in trouble if we try to treat every predicate as defining a set! Nonetheless it is a powerful strategy.

If we apply this strategy to reformulate the Leibniz law in a universe where everything is a set, we obtain:

\[ \forall S \forall T \, [S = T \;\iff\; \forall R \, [S \in R \;\iff\; T \in R]] \]

While this is true in Zermelo-Fraenkel set theory, it is not taken as an axiom. Instead, people turn the idea around and use the axiom of extensionality:

\[ \forall S \forall T \, [S = T \;\iff\; \forall R \, [R \in S \;\iff\; R \in T]] \]

Instead of saying two sets are equal if they’re in all the same sets, this says two sets are equal if all the same sets are in them. This leads to a view where the ‘contents’ of an entity are its defining feature, rather than the predicates that hold of it.

We could, in fact, send this idea back to second-order logic and say that predicates are equal if and only if they hold for the same entities:

\[ \forall P \forall Q \, [\forall x \, [P(x) \;\iff\; Q(x)] \;\iff\; P = Q] \]

as a kind of ‘dual’ of the Leibniz law:

\[ \forall x \forall y \, [\forall P \, [P(x) \;\iff\; P(y)] \;\iff\; x = y] \]

I don’t know if this has been remarked on in the foundational literature, but it’s a close relative of a phenomenon that occurs in other forms of duality. For example, continuous real-valued functions \(F, G\) on a topological space obey

\[ \forall F \forall G \, [\forall x \, [F(x) = G(x)] \;\iff\; F = G] \]

but if the space is nice enough, continuous functions ‘separate points’, which means we also have

\[ \forall x \forall y \, [\forall F \, [F(x) = F(y)] \;\iff\; x = y] \]
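As an aside, the predicate version of extensionality displayed earlier is essentially what proof assistants adopt: in Lean 4, for instance, propositional and function extensionality together give exactly that principle (a sketch; `propext` and `funext` are the actual core names):

```lean
-- Propositions that imply each other are equal.
example (p q : Prop) (h : p ↔ q) : p = q := propext h

-- Predicates that agree on every argument are equal: funext plus propext.
example {α : Type} (P Q : α → Prop) (h : ∀ x, P x ↔ Q x) : P = Q :=
  funext fun x => propext (h x)
```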

Notes

by john (baez@math.ucr.edu) at February 27, 2015 04:26 PM

ZapperZ - Physics and Physicists

Much Ado About Dress Color
Have you been following this ridiculous debate about the color of this dress? People are going nuts all over different social media about what the color of this dress is based on the photo that has exploded all over the internet.

I'm calling it ridiculous because people are actually arguing with each other, disagreeing about what they see, and then finding it rather odd that other people do not see the same thing as they do, as if this were highly unusual and unexpected. Is the fact that different people see colors differently not well known? Seriously?

I've already mentioned the limitations of the human eye, and why it is really not a very good light detector in many respects. So using your eyes to determine the color of this dress is already suspect. Not only that, but given such uncertainty, one should not be too stubborn about what one sees, as if what you are seeing must be the ONLY way to see it.

But how would science solve this? Easy. A device such as a UV-VIS spectrometer can be used to measure the spectrum of reflected light and the intensity of the spectral peaks. It tells you unambiguously which wavelengths are reflected off the source, and how much of each is reflected. So to solve this debate, cut pieces of the dress (corresponding to all the different colors on it) and stick them into one of these devices. Voila! You have killed the debate over the "color".

This is something that can be determined objectively, without any subjective opinion of "color", and without the use of a poor light detector such as one's eyes. So, if someone can tell me where I can get a piece of this fabric, I'll test it out!

Zz.

by ZapperZ (noreply@blogger.com) at February 27, 2015 04:15 PM

CERN Bulletin

CERN Bulletin

Qminder, application of the Registration Service

by Journalist, Student at February 27, 2015 10:13 AM

CERN Bulletin

CERN Bulletin

Klaus Winter (1930 - 2015)

We learned with great sadness that Klaus Winter passed away on 9 February 2015, after a long illness.

 

Klaus was born in 1930 in Hamburg, where he obtained his diploma in physics in 1955. From 1955 to 1958 he held a scholarship at the Collège de France, where he received his doctorate in nuclear physics under the guidance of Francis Perrin. Klaus joined CERN in 1958, where he first participated in experiments on π+ and K0 decay properties at the PS, and later became the spokesperson of the CHOV Collaboration at the ISR.

Starting in 1976, his work focused on experiments with the SPS neutrino beam. In 1984 he joined Ugo Amaldi to head the CHARM experiment, designed for detailed studies of the neutral current interactions of high-energy neutrinos, which had been discovered in 1973 using the Gargamelle bubble chamber at the PS. The unique feature of the detector was its target calorimeter, which used large Carrara marble plates as an absorber material.

From 1984 to 1991, Klaus headed up the CHARM II Collaboration. The huge detector, which weighed 700 tonnes and was principally a sandwich structure of large glass plates and planes of streamer tubes, was primarily designed to study high-energy neutrino-electron scattering through neutral currents.

In recognition of the fundamental results obtained by these experiments, Klaus was awarded the Stern-Gerlach Medal in 1993, the highest distinction of the German Physical Society for exceptional achievements in experimental physics. In 1997, he was awarded the prestigious Bruno Pontecorvo Prize for his major contributions to neutrino physics by the Joint Institute for Nuclear Research in Dubna.

The last experiment under his leadership, from 1991 until his retirement, was CHORUS, which used a hybrid emulsion-electronic detector primarily designed to search for νμ− ντ oscillations in the then-favoured region of large mass differences and small mixing angle.

Among other responsibilities, Klaus served for many years as editor of Physics Letters B and on the Advisory Committee of the International Conference on Neutrino Physics and Astrophysics. He was also the editor of two renowned books, Neutrino Physics (1991 and 2000) and Neutrino Mass with Guido Altarelli (2003).

An exceptional researcher, he also taught physics at the University of Hamburg and – after the reunification of Germany – at the Humboldt University of Berlin, supervising 25 PhD theses and seven Habilitationen.

Klaus was an outstanding and successful leader, dedicated to his work, which he pursued with vision and determination. His intellectual horizons were by no means limited to science, extending far into culture and the arts, notably modern painting.

We have lost an exceptional colleague and friend.
 

His friends and colleagues from CHARM, CHARM II and CHORUS

February 27, 2015 10:02 AM

Georg von Hippel - Life on the lattice

Back from Mumbai
On Saturday, my last day in Mumbai, a group of colleagues rented a car with a driver to take a trip to Sanjay Gandhi National Park and visit the Kanheri caves, a Buddhist site consisting of a large number of rather simple monastic cells and some worship and assembly halls with ornate reliefs and inscriptions, all carved out of solid rock (some of the cell entrances seem to have been restored using steel-reinforced concrete, though).

On the way back, we stopped at Mani Bhavan, where Mahatma Gandhi lived from 1917 to 1934, and which is now a museum dedicated to his life and legacy.

In the night, I flew back to Frankfurt, where the temperature was much lower than in Mumbai; in fact, on Monday there was snow.

by Georg v. Hippel (noreply@blogger.com) at February 27, 2015 10:01 AM

Lubos Motl - string vacua and pheno

Nature is subtle
Caltech has created its new Walter Burke Institute for Theoretical Physics. It's named after Walter Burke – but it is neither the actor nor the purser nor the hurler; it's Walter Burke the trustee, so no one seems to give a damn about him.



Walter Burke, the actor

That's why John Preskill's speech [URL fixed, tx] focused on a different topic, namely his three principles of creating the environment for good physics.




His principles are, using my words,
  1. the best way to learn is to teach
  2. two-trick ponies (people working at the collision point of two disciplines) are great
  3. Nature is subtle
Let me say a few words about these principles.




Teaching as a way of learning

First, for many of us, teaching is indeed a great way to learn. If you are passionate about teaching, you are passionate about making things so clear to the "student" that he or she just can't object. But to achieve this clarity, you must clarify all the potentially murky points that you may be willing to overlook if the goal were just for you to learn the truth.

You "know" what the truth is, perhaps because you have a good intuition or you have solved similar or very closely related things in the past, and it's therefore tempting – and often useful, if you want to save your time – not to get distracted by every doubt. But a curious, critical student will get distracted and he or she will interrupt you and ask the inconvenient questions.

If you are a competent teacher, you must be able to answer pretty much all questions related to what you are saying, and by getting ready to this deep questioning, you learn the topic really properly.

I guess that John Preskill would agree that I am interpreting his logic in different words and I am probably thinking about these matters similarly to himself. Many famous physicists have agreed. For example, Richard Feynman has said that it was important for him to be hired as a teacher because if the research isn't moving forward, and it often isn't, he still knows that he is doing something useful.

But I still think it's fair to say that many great researchers don't think in this way – and many great researchers aren't even good teachers. Bell Labs have employed numerous great non-teacher researchers. And on the contrary, many good teachers are not able to become great researchers. For those reasons, I think that Preskill's implicit point about the link between teaching and finding new results isn't true in general.

Two-trick ponies

Preskill praises the concept of two-trick ponies – people who learn (at least) two disciplines and benefit from the interplay between them. He is an example of a two-trick pony. And it's great if it works.

On the other hand, I still think that a clear majority of the important results occurs within one discipline. And most combinations of disciplines end up being low-quality science. People often market themselves as interdisciplinary researchers because they're not too good in either discipline – and whenever their deficit in one discipline is unmasked, they may suggest that they're better in another one. Except that it often fails to be the case in all disciplines.

So the interdisciplinary research is often just a euphemism for bad research hiding its low quality. Moreover, even if one doesn't talk about imperfect people at all, I think that random pairs of disciplines (or subdisciplines) of science (or physics) are unlikely to lead to fantastic off-spring, at least not after a limited effort.

Combinations of two disciplines have led and will probably lead to several important breakthroughs – but they are very rare.

There is another point related to the two-trick ponies. Many breakthroughs in physics resulted from the solution to a paradox. The apparent paradox arose from two different perspectives on a problem. These perspectives may usually be associated with two subdisciplines of physics.

Einstein's special relativity is the resolution of disagreements between classical mechanics and classical field theory (electromagnetism) concerning the question how objects behave when you approach the speed of light. String theory is the reconciliation of the laws of the small (quantum field theory) and the laws of the large (general relativity), and there are other examples.

Even though the two perspectives that are being reconciled correspond to different parts of the physics research and the physics community, they are often rather close sociologically. So theoretical physicists tend to know both. The very question whether two classes of questions in physics should be classified as "one pony" or "two ponies" (or "more than two ponies") is a matter of conventions. After all, there is just one science and the precise separation of science into disciplines is a human invention.

This ambiguous status of the term "two-trick pony" seriously weakens John Preskill's second principle. When we say that someone is a "two-trick pony", we may only define this proposition relatively to others. A "two-trick pony" is more versatile than others – he knows stuff from subdisciplines that are further from each other than the subdisciplines mastered by other typical ponies.

But versatility isn't really the general key to progress, either. Focus and concentration may often be more important. So I don't really believe that John Preskill's second principle may be reformulated as a general rule with the universal validity.

Nature is subtle

However, I fully agree with Preskill's third principle that says that Nature is subtle. Subtle is Nature but malicious She is not. ;-) Preskill quotes the holographic principle in quantum gravity as our best example of Nature's subtle character. That's a great (but not the greatest) choice of an example, I think. Preskill adds a few more words explaining what he means by the adjective "subtle":
Yes, mathematics is unreasonably effective. Yes, we can succeed at formulating laws of Nature with amazing explanatory power. But it’s a struggle. Nature does not give up her secrets so readily. Things are often different than they seem on the surface, and we’re easily fooled. Nature is subtle.
Nature isn't a prostitute. She is hiding many of Her secrets. That's why the self-confidence of a man who declares himself to be the naturally born expert in Nature's intimate organs may often be unjustified and foolish. The appearances are often misleading. The men often confuse the pubic hair with the swimming suit, the \(\bra{\rm bras}\) with the \(\ket{\rm cats}\) beneath the \(\bra{\rm bras}\), and so on. We aren't born with the accurate knowledge of the most important principles of Nature.

We have to learn them by carefully studying Nature and we should always understand that any partial insight we make may be an illusion. To say the least, every generalization or extrapolation of an insight may turn out to be wrong.

And it may be wrong not just in the way we can easily imagine – a type of wrongness of our theories that we're ready to expect from the beginning. Our provisional theories may be wrong for much more profound reasons.

Of course, I consider the postulates of quantum mechanics to be the most important example of Nature's subtle character. A century ago, physicists were ready to generalize the state-of-the-art laws of classical physics in many "understandable" ways: to add new particles, new classical fields, new terms in the equations that govern them, higher derivatives, and so on. And Lord Kelvin thought that even those "relatively modest" steps had already been completed, so all that remained was to measure the parameters of Nature more accurately than before.

But quantum mechanics forced us to change the whole paradigm. Even though the class of classical (and usually deterministic) theories seemed rather large and tolerant, quantum mechanics showed that it's an extremely special, \(\hbar=0\) limit of more general theories of Nature (quantum mechanical theories) that we must use instead of the classical ones. The objective reality doesn't exist at the fundamental level.

(The \(\hbar=0\) classical theories may look like a "measure zero" subset of the quantum+classical ones with a general value of \(\hbar\). But because \(\hbar\) is dimensionful in usual units and its numerical value may therefore be changed by any positive factor by switching to different units, we may only qualitatively distinguish \(\hbar=0\) and \(\hbar\neq 0\). That means that the classical and quantum theories are pretty much "two comparably large classes" of theories. The classical theories are a "contraction" or a "limit" of the quantum ones; some of the quantum ones are "deformations" of the classical ones. Because of these relationships, it was possible for the physicists to think that the world obeys classical laws although for 90 years, we have known very clearly that it only obeys the quantum laws.)

Quantum mechanics demonstrated that people were way too restrictive when it came to the freedoms they were "generously" willing to grant to Nature. Nature just found the straitjacket to be unacceptably suffocating. It simply doesn't work like that. Quantum mechanics is the most important example of a previously unexpected difficulty but there are many other examples.

At the end, the exact theory of Nature – and our best approximations of the exact theory we may explain these days – are consistent. But the very consistency may sometimes look surprising to a person who doesn't have a sufficient background in mathematics, who hasn't studied the topic enough, or who is simply not sufficiently open-minded or honest.

The lay people – and some of the self-styled (or highly paid!) physicists as well – often incorrectly assume that the right theory must belong to a class of theories (classical theories, those with the objective reality of some kind, were my most important example) they believe is sufficiently broad and surely containing all viable contenders. They believe that all candidates not belonging to this class are crazy or inconsistent. They violate common sense, don't they?

But this instinctive expectation is often wrong. In reality, they have some evidence that their constraint on the theory is a sufficient condition for a theory to be consistent. But they often incorrectly claim that their restriction is actually a necessary condition for the consistency, even though it is not. In most cases, when this error takes place, the condition they were willing to assume is not only unnecessary; it is actually demonstrably wrong when some other, more important evidence or arguments are taken into account.

A physicist simply cannot ignore the possibility that assumptions are wrong, even if he used to consider these assumptions as "obvious facts" or "common sense". Nature is subtle and not obliged to pay lip service to sensibilities that are common. The more dramatic differences between the theories obeying the assumption and those violating the assumption are, the more attention a physicist must pay to the question whether his assumption is actually correct.

Physicists are supposed to find some important or fundamental answers – to construct the big picture. That's why they unavoidably structure their knowledge hierarchically to the "key things" and the "details", and they prefer to care about the former (leaving the latter to the engineers and others). However, separating ideas into "key things" and "details" mindlessly is very risky because the things you consider "details" may very well show that your "key things" are actually wrong, the right "key things" are completely different, and many of the things you consider "details" are not only different than you assumed, but they may actually be some of the "key things" (or the most important "key things"), too!

Of course, I was thinking about very particular examples when I was writing the previous, contrived paragraph. I was thinking about bad or excessively stubborn physicists (if you want me to ignore full-fledged crackpots) and their misconceptions. Those who believe that Nature must have a "realist" description – effectively one from the class of classical theories – may consider all the problems (of the "many worlds interpretation" or any other "realist interpretation" of quantum mechanics) pointed out by others to be just "details". If something doesn't work about these "details", these people believe, those defects will be fixed by some "engineers" in the future.

But most of these objections aren't details at all and it may be seen that no "fix" will ever be possible. They are valid and almost rock-solid proofs that the "key assumptions" of the realists are actually wrong. And if someone or something may overthrow a "key player", then he or it must be a "key player", too. He or it can't be just a "detail"! So if there seems to be some evidence – even if it looks like technical evidence composed of "details" – that actually challenges or disproves your "key assumptions", you simply have to care about it because all your opinions about the truth, along with your separation of questions to the "big ones" and "details", may be completely wrong.

If you don't care about these things, it's too bad and you're very likely to end in the cesspool of religious fanaticism and pseudoscience together with assorted religious bigots and Sean Carrolls.

by Luboš Motl (noreply@blogger.com) at February 27, 2015 06:34 AM

astrobites - astro-ph reader's digest

Corpse too Bright? Make it Bigger!

Title: “Circularization” vs. Accretion — What Powers Tidal Disruption Events?
Authors: T. Piran, G. Svirski, J. Krolik, R. M. Cheng, H. Shiokawa
First Author’s Institution: Racah Institute of Physics, The Hebrew University of Jerusalem, Jerusalem

 

Our day-to-day experiences with gravity are fairly tame. It keeps our GPS satellites close and ready for last-minute changes to an evening outing, brings us the weekend snow and rain that beg for a cozy afternoon curled up warm and dry under covers with a book and a steaming mug, anchors our morning cereal to its rightful place in our bowls (or in our tummies, for that matter), and keeps the Sun in view day after day for millennia on end, nourishing the plants that feed us and radiating upon us its cheering light. Combined with a patch of slippery ice, gravity may produce a few lingering bruises, and occasionally we'll hear about the brave adventurers who, in search of higher vistas, slip tragically off an icy slope or an unforgiving cliff. But all in all, gravity in our everyday lives is a largely unnoticed, unheralded hero that works continually behind the scenes to maintain life as we know it.

Park yourself outside a relatively small but massive object such as the supermassive black hole lurking at the center of our galaxy, and you’ll discover sly gravity’s more feral side. Gravity’s inverse square law dependence on your distance from your massive object of choice dictates that as you get closer and closer to said object, the strength of gravity will increase drastically: if you halve your distance to the massive object, the object will pull four times as hard at you, if you quarter your distance towards the object, it’ll pull sixteen times as hard at you, and well, hang on tight to your shoes because you may start to feel them tugging away from your feet. At this point though, you should be high-tailing it as fast as you can away from the massive object rather than attending to your footwear, for if you’re sufficiently close, the difference in the gravitational pull between your head and your feet can be large enough that you’ll stretch and deform into a long string—or “spaghettify” as astronomers have officially termed this painful and gruesome path of no return.


Figure 1. A schematic of the accretion disk created when a star passes too close to a supermassive black hole. The star is ripped up by the black hole, and its remnants form the disk. Shocks (red) generated as stellar material falls onto the disk produce the light we observe. [Figure taken from today’s paper.]

While it doesn't look like there'll be a chance for the daredevils among us to visit such an object and test these ideas any time soon, there are other things that have the unfortunate privilege of doing so: stars. If a star passes closely enough to a supermassive black hole that the star's self-gravity—which holds it together in one piece—is dwarfed by the difference in the black hole's gravitational pull from one side of the star to the other, the tides the black hole raises on the star (much like the oceanic tides produced by the Moon and the Sun on Earth) can become so large that the star deforms until it rips apart. The star spaghettifies in what astronomers call a tidal disruption event, or TDE for short. The star-black hole separation below which the star succumbs to such a fate is called its tidal radius (see Nathan's post for more details on the importance of the tidal radius in TDEs). A star that passes within this distance sprays out large quantities of its hot gas as it spirals to its eventual death in the black hole. But the star doesn't die silently. The stream of hot gas it sheds can produce a spectacular light show that can last for months. The gas, too, is eventually swallowed by the black hole, but first forms an accretion disk around the black hole that extends up to the tidal radius. The gas violently releases its kinetic energy in shocks that form near what would have been the original star's point of closest approach (its periapsis) and where the gas, having wrapped around the black hole, collides with the stream of newly infalling stellar gas at the edge of the disk (see Figure 1). It is the energy radiated by these shocks that eventually escapes and makes its way to our telescopes, where we can observe it—a distant flare at the heart of a neighboring galaxy.

Or so we thought.

 

TDEs, once just a theorist's whimsy, have catapulted in standing to an observational reality as TDE-like flares have been observed around nearby supermassive black holes. An increasing number of these have been discovered through UV/optical observations (the alternative method being X-rays), which have yielded some disturbing trends that contradict the predictions of the classic TDE picture. These UV/optical TDEs aren't as luminous as we expect. They aren't as hot as we thought they would be, and many of them stay at the same temperature rather than cooling with time. The light we do see seems to come from a region much larger than we expected, and the gas producing the light is moving more slowly than the classic picture suggests. Haven't thrown in the towel yet?

But hang on to your terrycloth—and cue in the authors of today's paper. Inspired by new detailed simulations of TDEs, they suggested that what we're seeing in the optical is not the light from shocks in an accretion disk that extends up to the tidal radius, but from a disk that extends about 100 times that distance. Again, shocks from interacting streams of gas—but this time extending up to and at the larger radius—produce the light we observe. The larger disk automatically solves the size problem, and also conveniently solves the velocity problem with it, since Kepler's laws predict that material would be moving more slowly at the larger radius. This in turn reduces the luminosity of the TDE, which is powered by the loss of kinetic energy (which, of course, scales with the velocity) at the edge of the disk. A larger radius and lower luminosity work to reduce the blackbody temperature of the gas. The authors predicted how each of the observations inconsistent with the classic TDE model would change under the new model, and compared these predictions with the measured peak luminosity, temperature, line width (a proxy for the speed of the gas), and estimated size of the emitting region for seven TDEs discovered in the UV/optical, finding good agreement.
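To get a feel for the numbers, here is a rough order-of-magnitude Python sketch of the Keplerian scaling the authors invoke (the black hole and stellar parameters are my own illustrative choices, not values from the paper):

```python
import numpy as np

# constants and toy parameters (SI units); a solar-type star and a 10^6 M_sun black hole
G, M_sun, R_sun = 6.674e-11, 1.989e30, 6.957e8
M_bh, M_star, R_star = 1e6 * M_sun, M_sun, R_sun

r_tidal = R_star * (M_bh / M_star) ** (1.0 / 3.0)       # tidal disruption radius
for r in (r_tidal, 100 * r_tidal):
    v_kep = np.sqrt(G * M_bh / r)                       # Keplerian speed at radius r
    print(f"r = {r:9.3e} m   v_kep = {v_kep / 1e3:8.0f} km/s")

# Kepler gives v ~ r^(-1/2): a disk ~100x larger moves ~10x more slowly, and since the
# emission is powered by dissipating kinetic energy (~v^2), the luminosity and the
# inferred blackbody temperature come down with it.
```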

But as most theories are wont to do, while this model solves many observational puzzles, it opens another one: these lower luminosity TDEs radiate only 1% of the energy the stellar remains should lose as they are accreted onto the black hole.  So where does the rest of the energy go?  The authors suggest a few different means (photon trapping? outflows? winds? emission at other wavelengths?), but all of them appear unsatisfying for various reasons.  It appears that these stellar corpses will live on in astronomers’ deciphering minds.

by Stacy Kim at February 27, 2015 02:18 AM

February 26, 2015

Quantum Diaries

Twitter, Planck et les supernovae

Matthieu Roman est un jeune chercheur CNRS à Paris, tout à, fait novice sur la twittosphère. Il nous raconte comment il est en pourtant arrivé à twitter « en direct de son labo » pendant une semaine. Au programme : des échanges à bâton rompu à propos de l’expérience Planck, des supernovae ou l’énergie noire, avec un public passionné et assidu. Peut-être le début d’une vocation en médiation scientifique ?

Mais comment en suis-je arrivé là ? Tout a commencé pendant ma thèse de doctorat en cosmologie au Laboratoire Astroparticule et Cosmologie (APC, CNRS/Paris Diderot), sous la direction de Jacques Delabrouille, entre 2011et 2014. Cette thèse m’a amené à faire partie de la grande collaboration scientifique autour du satellite Planck, et en particulier de son instrument à hautes fréquences plus connu sous son acronyme anglais HFI. Je me suis intéressé au cours de ces trois années à l’étude pour la cosmologie des amas de galaxies détectés par Planck à l’aide de « l’effet Sunyaev-Zel’dovich » (interaction des photons du fond diffus cosmologique avec les électrons piégés au sein des amas de galaxies). En mars 2013, j’étais donc aux premières loges au moment de la livraison des données en température de Planck qui ont donné lieu à un emballement médiatique impressionnant. Les résultats démontraient la solidité du modèle cosmologique actuel composé de matière noire froide et d’énergie noire.

A-t-on découvert les ondes gravitationnelles primordiales ?
Puis quelques mois plus tard, les américains de l’expérience BICEP2, située au Pôle Sud, ont convoqué les médias du monde entier afin d’annoncer la découverte des ondes gravitationnelles primordiales grâce à leurs données polarisées. Ils venaient simplement nous apporter le Graal des cosmologistes ! Nouvelle excitation, experts en tous genres invités sur les plateaux télés, dans les journaux pour expliquer que l’on avait détecté ce qu’avait prédit Einstein un siècle plus tôt.

Mais dans la collaboration Planck, nombreux étaient les sceptiques. Nous n’avions pas encore les moyens de répondre à BICEP2 car les données polarisées n’étaient pas encore analysées, mais nous sentions qu’une partie importante du signal polarisé de la poussière galactique n’était pas pris en compte.

Les derniers résultats ont montré une carte de poussière galactique sur laquelle a été rajoutée la direction du champ magnétique galactique. Je la trouve particulièrement belle ! Crédits : ESA - collaboration Planck

Les derniers résultats ont montré une carte de poussière galactique sur laquelle a été rajoutée la direction du champ magnétique galactique. Je lui trouve un aspect particulièrement artistique ! Crédits : ESA- collaboration Planck

And now, as of a few days ago, it is official! Planck, in a joint study with BICEP2 and Keck, sets an upper limit on the amount of primordial gravitational waves, and therefore no detection. In short, back to square one, but with a great deal of additional information. Future space missions, as well as ground-based or balloon-borne experiments aiming to measure the large-scale polarization of the cosmic microwave background with high precision, whose relevance might have been questioned had BICEP2 been right, have just regained their full purpose. Because these primordial gravitational waves will have to be hunted down, with ever larger numbers of onboard detectors to increase the sensitivity and the ability to confirm beyond doubt the cosmological origin of any detected signal!

From galactic dust to exploding stars
In the meantime, I had the opportunity to extend my research for three more years with a postdoc at the Laboratoire de physique nucléaire et des hautes énergies (CNRS, Université Pierre et Marie Curie and Université Paris Diderot), on a subject completely new to me: supernovae, those stars at the end of their lives whose explosions are extremely luminous. We study them with the ultimate goal of precisely determining the nature of dark energy, held responsible for the accelerated expansion of the Universe. At the time the existence of dark energy was established using supernovae (1999), their light curves were thought to vary rather little. Indeed, people got into the habit of calling them "standard candles".

In this image of the galaxy M101 a supernova that exploded in 2011 is clearly visible: it is the large white dot at the top right. It sits in one of the spiral arms, but it would not shine the same way if it were at the center. Credit: T.A. Rector (University of Alaska Anchorage), H. Schweiker & S. Pakzad NOAO/AURA/NSF

As detection methods have been refined, it has become clear that supernovae are not really the standard candles we thought they were, which completely revives the interest of the field. In particular, the type of galaxy in which a supernova explodes can introduce variations in luminosity, and thus affect the measurement of the parameter describing the nature of dark energy. This is the project I have taken on within the (small) Supernova Legacy Survey (SNLS) collaboration. I hope one day to study these objects in other scientific projects, with even more powerful instruments such as Subaru or LSST.

Tweeting live from my lab…
It was in fact a friend, Agnès, who introduced me to Twitter and encouraged me to recount my day-to-day work for a week via the @EnDirectDuLabo account. It was a new world to me, since I was not at all active on what is called "the twittersphere"; that is unfortunately the case for many researchers in France. It proved a very enriching experience, as it seems to have caught the interest of many Twitter users and helped bring the account to more than 2000 followers. It allowed me, for example, to explain the basics of electromagnetism needed in astronomy, more technical details about the performance of the experiment I work on, and also my everyday life in my laboratory.

It was great fun to share my daily work with the general public, but also very time-consuming! I have always been convinced of the importance of science outreach, without ever daring to take the plunge. Perhaps it was about time…

Matthieu Roman is currently a postdoctoral researcher at the Laboratoire de physique nucléaire et des hautes énergies (CNRS, Université Pierre et Marie Curie and Université Paris Diderot)

by CNRS-IN2P3 at February 26, 2015 11:01 PM

The n-Category Cafe

Introduction to Synthetic Mathematics (part 1)

John is writing about “concepts of sameness” for Elaine Landry’s book Category Theory for the Working Philosopher, and has been posting some of his thoughts and drafts. I’m writing for the same book about homotopy type theory / univalent foundations; but since HoTT/UF will also make a guest appearance in John’s and David Corfield’s chapters, and one aspect of it (univalence) is central to Steve Awodey’s chapter, I had to decide what aspect of it to emphasize in my chapter.

My current plan is to focus on HoTT/UF as a synthetic theory of $\infty$-groupoids. But in order to say what that even means, I felt that I needed to start with a brief introduction about the phrase “synthetic theory”, which may not be familiar. Right now, my current draft of that “introduction” is more than half the allotted length of my chapter; so clearly it’ll need to be trimmed! But I thought I would go ahead and post some parts of it in its current form; so here goes.

In general, mathematical theories can be classified as analytic or synthetic. An analytic theory is one that analyzes, or breaks down, its objects of study, revealing them as put together out of simpler things, just as complex molecules are put together out of protons, neutrons, and electrons. For example, analytic geometry analyzes the plane geometry of points, lines, etc. in terms of real numbers: points are ordered pairs of real numbers, lines are sets of points, etc. Mathematically, the basic objects of an analytic theory are defined in terms of those of some other theory.

By contrast, a synthetic theory is one that synthesizes, or puts together, a conception of its basic objects based on their expected relationships and behavior. For example, synthetic geometry is more like the geometry of Euclid: points and lines are essentially undefined terms, given meaning by the axioms that specify what we can do with them (e.g. two points determine a unique line). (Although Euclid himself attempted to define “point” and “line”, modern mathematicians generally consider this a mistake, and regard Euclid’s “definitions” (like “a point is that which has no part”) as fairly meaningless.) Mathematically, a synthetic theory is a formal system governed by rules or axioms. Synthetic mathematics can be regarded as analogous to foundational physics, where a concept like the electromagnetic field is not “put together” out of anything simpler: it just is, and behaves in a certain way.

The distinction between analytic and synthetic dates back at least to Hilbert, who used the words “genetic” and “axiomatic” respectively. At one level, we can say that modern mathematics is characterized by a rich interplay between analytic and synthetic — although most mathematicians would speak instead of definitions and examples. For instance, a modern geometer might define “a geometry” to satisfy Euclid’s axioms, and then work synthetically with those axioms; but she would also construct examples of such “geometries” analytically, such as with ordered pairs of real numbers. This approach was pioneered by Hilbert himself, who emphasized in particular that constructing an analytic example (or model) proves the consistency of the synthetic theory.

However, at a deeper level, almost all of modern mathematics is analytic, because it is all analyzed into set theory. Our modern geometer would not actually state her axioms the way that Euclid did; she would instead define a geometry to be a set $P$ of points together with a set $L$ of lines and a subset of $P\times L$ representing the “incidence” relation, etc. From this perspective, the only truly undefined term in mathematics is “set”, and the only truly synthetic theory is Zermelo–Fraenkel set theory (ZFC).
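As a toy illustration of this style of definition (my own sketch, not from the post), here is how the bare data of such a "geometry" might be packaged in Lean: a type of points, a type of lines, and an incidence relation, with any axioms to be added as further fields.

```lean
-- A bare-bones "analytic style" packaging of a geometry: a type of points,
-- a type of lines, and an incidence relation between them. Axioms such as
-- "two distinct points lie on a unique line" would be added as extra fields.
structure Geometry where
  Point    : Type
  Line     : Type
  incident : Point → Line → Prop
```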

This use of set theory as the common foundation for mathematics is, of course, of 20th century vintage, and overall it has been a tremendous step forwards. Practically, it provides a common language and a powerful basic toolset for all mathematicians. Foundationally, it ensures that all of mathematics is consistent relative to set theory. (Hilbert’s dream of an absolute consistency proof is generally considered to have been demolished by Gödel’s incompleteness theorem.) And philosophically, it supplies a consistent ontology for mathematics, and a context in which to ask metamathematical questions.

However, ZFC is not the only theory that can be used in this way. While not every synthetic theory is rich enough to allow all of mathematics to be encoded in it, set theory is by no means unique in possessing such richness. One possible variation is to use a different sort of set theory like ETCS, in which the elements of a set are “featureless points” that are merely distinguished from each other, rather than labeled individually by the elaborate hierarchical membership structures of ZFC. Either sort of “set” suffices just as well for foundational purposes, and moreover each can be interpreted into the other.

However, we are now concerned with more radical possibilities. A paradigmatic example is topology. In modern “analytic topology”, a “space” is defined to be a set of points equipped with a collection of subsets called open, which describe how the points vary continuously into each other. (Most analytic topologists, being unaware of synthetic topology, would call their subject simply “topology.”) By contrast, in synthetic topology we postulate instead an axiomatic theory, on the same ontological level as ZFC, whose basic objects are spaces rather than sets.

Of course, by saying that the basic objects “are” spaces we do not mean that they are sets equipped with open subsets. Instead we mean that “space” is an undefined word, and the rules of the theory cause these “spaces” to behave more or less like we expect spaces to behave. In particular, synthetic spaces have open subsets (or, more accurately, open subspaces), but they are not defined by specifying a set together with a collection of open subsets.

It turns out that synthetic topology, like synthetic set theory (ZFC), is rich enough to encode all of mathematics. There is one trivial sense in which this is true: among all analytic spaces we find the subclass of indiscrete ones, in which the only open subsets are the empty set and the whole space. A notion of “indiscrete space” can also be defined in synthetic topology, and the collection of such spaces forms a universe of ETCS-like sets (we’ll come back to these in later installments). Thus we could use them to encode mathematics, entirely ignoring the rest of the synthetic theory of spaces. (The same could be said about the discrete spaces, in which every subset is open; but these are harder (though not impossible) to define and work with synthetically. The relation between the discrete and indiscrete spaces, and how they sit inside the synthetic theory of spaces, is central to the synthetic theory of cohesion, which I believe David is going to mention in his chapter about the philosophy of geometry.)

However, a less boring approach is to construct the objects of mathematics directly as spaces. How does this work? It turns out that the basic constructions on sets that we use to build (say) the set of real numbers have close analogues that act on spaces. Thus, in synthetic topology we can use these constructions to build the space of real numbers directly. If our system of synthetic topology is set up well, then the resulting space will behave like the analytic space of real numbers (the one that is defined by first constructing the mere set of real numbers and then equipping it with the unions of open intervals as its topology).

The next question is, why would we want to do mathematics this way? There are a lot of reasons, but right now I believe they can be classified into three sorts: modularity, philosophy, and pragmatism. (If you can think of other reasons that I’m forgetting, please mention them in the comments!)

By “modularity” I mean the same thing as does a programmer: even if we believe that spaces are ultimately built analytically out of sets, it is often useful to isolate their fundamental properties and work with those abstractly. One advantage of this is generality. For instance, any theorem proven in Euclid’s “neutral geometry” (i.e. without using the parallel postulate) is true not only in the model of ordered pairs of real numbers, but also in the various non-Euclidean geometries. Similarly, a theorem proven in synthetic topology may be true not only about ordinary topological spaces, but also about other variant theories such as topological sheaves, smooth spaces, etc. As always in mathematics, if we state only the assumptions we need, our theorems become more general.

Even if we only care about one model of our synthetic theory, modularity can still make our lives easier, because a synthetic theory can formally encapsulate common lemmas or styles of argument that in an analytic theory we would have to be constantly proving by hand. For example, just as every object in synthetic topology is “topological”, every “function” between them automatically preserves this topology (is “continuous”). Thus, in synthetic topology every function $\mathbb{R}\to \mathbb{R}$ is automatically continuous; all proofs of continuity have been “packaged up” into the single proof that analytic topology is a model of synthetic topology. (We can still speak about discontinuous functions too, if we want to; we just have to re-topologize $\mathbb{R}$ indiscretely first. Thus, synthetic topology reverses the situation of analytic topology: discontinuous functions are harder to talk about than continuous ones.)

By contrast to the argument from modularity, an argument from philosophy is a claim that the basic objects of mathematics really are, or really should be, those of some particular synthetic theory. Nowadays it is hard to find mathematicians who hold such opinions (except with respect to set theory), but historically we can find them taking part in the great foundational debates of the early 20th century. It is admittedly dangerous to make any precise claims in modern mathematical language about the beliefs of mathematicians 100 years ago, but I think it is justified to say that in hindsight, one of the points of contention in the great foundational debates was which synthetic theory should be used as the foundation for mathematics, or in other words what kind of thing the basic objects of mathematics should be. Of course, this was not visible to the participants, among other reasons because many of them used the same words (such as “set”) for the basic objects of their theories. (Another reason is that among the points at issue was the very idea that a foundation of mathematics should be built on precisely defined rules or axioms, which today most mathematicians take for granted.) But from a modern perspective, we can see that (for instance) Brouwer’s intuitionism is actually a form of synthetic topology, while Markov’s constructive recursive mathematics is a form of “synthetic computability theory”.

In these cases, the motivation for choosing such synthetic theories was clearly largely philosophical. The Russian constructivists designed their theory the way they did because they believed that everything should be computable. Similarly, Brouwer’s intuitionism can be said to be motivated by a philosophical belief that everything in mathematics should be continuous.

(I wish I could write more about the latter, because it’s really interesting. The main thing that makes Brouwerian intuitionism non-classical is choice sequences: infinite sequences in which each element can be “freely chosen” by a “creating subject” rather than being supplied by a rule. The concrete conclusion Brouwer drew from this is that any operation on such sequences must be calculable, at least in stages, using only finite initial segments, since we can’t ask the creating subject to make an infinite number of choices all at once. But this means exactly that any such operation must be continuous with respect to a suitable topology on the space of sequences. It also connects nicely with the idea of open sets as “observations” or “verifiable statements” that was mentioned in another thread. However, from the perspective of my chapter for the book, the purpose of this introduction is to lay the groundwork for discussing HoTT/UF as a synthetic theory of $\infty$-groupoids, and Brouwerian intuitionism would be a substantial digression.)

Finally, there are arguments from pragmatism. Whereas the modularist believes that the basic objects of mathematics are actually sets, and the philosophist believes that they are actually spaces (or whatever), the pragmatist says that they could be anything: there’s no need to commit to a single choice. Why do we do mathematics, anyway? One reason is because we find it interesting or beautiful. But all synthetic theories may be equally interesting and beautiful (at least to someone), so we may as well study them as long as we enjoy it.

Another reason we study mathematics is because it has some application outside of itself, e.g. to theories of the physical world. Now it may happen that all the mathematical objects that arise in some application happen to be (say) spaces. (This is arguably true of fundamental physics. Similarly, in applications to computer science, all objects that arise may happen to be computable.) In this case, why not just base our application on a synthetic theory that is good enough for the purpose, thereby gaining many of the advantages of modularity, but without caring about how or whether our theory can be modeled in set theory?

It is interesting to consider applying this perspective to other application domains. For instance, we also speak of sets outside of a purely mathematical framework, to describe collections of physical objects and mental acts of categorization; could we use spaces in the same way? Might collections of objects and thoughts automatically come with a topological structure by virtue of how they are constructed, like the real numbers do? I think this also starts to seem quite natural when we imagine topology in terms of “observations” or “verifiable statements”. Again, saying any more about that in my chapter would be a substantial digression; but I’d be interested to hear any thoughts about it in the comments here!

by shulman (viritrilbia@gmail.com) at February 26, 2015 08:26 PM

Clifford V. Johnson - Asymptotia

Ceiba Speciosa
Saw all the fluffy stuff on the ground. Took me a while to "cotton on" and look up: the silk floss tree (Ceiba speciosa). -cvj Click to continue reading this post

by Clifford at February 26, 2015 05:29 PM

astrobites - astro-ph reader's digest

Black Holes Grow First in Mergers

Title: Following Black Hole Scaling Relations Through Gas-Rich Mergers
Authors: Anne M. Medling, Vivian U, Claire E. Max, David B. Sanders, Lee Armus, Bradford Holden, Etsuko Mieda, Shelley A. Wright, James E. Larkin
First Author’s Institution: Research School of Astronomy & Astrophysics, Mount Stromlo Observatory, Australian National University, Cotter Road, Weston, ACT 2611, Australia
Status: Accepted to ApJ

It’s currently accepted theory that every galaxy has a supermassive black hole (SMBH) at its center. The masses of these black holes have been observed to be strongly correlated with the galaxy’s bulge mass, total stellar mass and velocity dispersion.

Figure 1: NGC 2623 – one of the merging galaxies observed in this study. Image credit: NASA.

The mechanism driving this has long been thought to be mergers (although there are recent findings showing that bulgeless galaxies which have not undergone a merger also host high-mass SMBHs), which funnel gas into the center of a galaxy, where it is either consumed in a burst of star formation or accreted by the black hole. The black hole itself can regulate both its own growth and the galaxy’s if it becomes active and throws off huge winds, which expel the gas needed for star formation and black hole growth on short timescales.

To understand the interplay between these effects, the authors of this paper study 9 nearby ultraluminous infrared galaxies at a range of stages through a merger and measure the mass of the black hole at the center of each. They calculated these masses from spectra taken with the Keck telescopes on Mauna Kea, Hawaii, by measuring the stellar kinematics (the movement of the stars around the black hole) as shown by the Doppler broadening of the emission lines in the spectra. Doppler broadening occurs when gas is emitting light at a very specific wavelength but is also moving either towards or away from us (or both if it is rotating around a central object). Some of this emission is Doppler shifted to larger or smaller wavelengths, effectively smearing (or broadening) a narrow emission line into a broad one.
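As a rough illustration of the idea (made-up numbers, not the paper's measurement pipeline), the measured width of a broadened line converts into a line-of-sight velocity spread via sigma_v ~ c * sigma_lambda / lambda0:

```python
# Rough sketch of turning a Doppler-broadened line width into a line-of-sight
# velocity spread: sigma_v ~ c * sigma_lambda / lambda0. The numbers below are
# made up for illustration; they are not the paper's measurements.
C_KM_S = 299_792.458   # speed of light, km/s

lambda0 = 2.2935       # micron, assumed rest wavelength of the spectral feature used
sigma_lambda = 0.0015  # micron, assumed Gaussian width of the observed, broadened line

sigma_v = C_KM_S * sigma_lambda / lambda0
print(f"line-of-sight velocity dispersion ~ {sigma_v:.0f} km/s")
```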

Figure 2: The mass of the black hole against the stellar velocity dispersion, sigma, of the 9 galaxies observed in this study. Also shown are galaxies from McConnell & Ma (2013) and the best fit line to that data as a comparison to typical galaxies. Originally Figure 2 in Medling et al. (2015)

From this estimate of the rotational velocities of the stars around the centre of the galaxy, the mass of the black hole and the velocity dispersion can be calculated. These measurements for the 9 galaxies in this study are plotted in Figure 2 (originally Fig. 2 in Medling et al. 2015) and are shown to be either within the scatter of, or above, the typical relationship between black hole mass and velocity dispersion.

The authors run a Kolmogorov-Smirnov statistical test on the data to confirm that these merging galaxies are drawn from a completely different population to those that lie on the relation, with a p-value of 0.003, i.e. the likelihood that these merging galaxies are drawn from the same population as the typical galaxies is 0.3%.
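For readers curious what such a test looks like in practice, here is a sketch using scipy's two-sample KS test on stand-in arrays (invented numbers standing in for, say, each galaxy's offset from the best-fit relation; the quantity actually compared is described in the paper):

```python
# A two-sample Kolmogorov-Smirnov test of the kind described above. The arrays are
# stand-ins (offsets from a best-fit M-sigma relation, in dex, drawn at random here),
# not the published measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
offsets_typical = rng.normal(loc=0.0, scale=0.3, size=70)  # galaxies on the relation
offsets_mergers = rng.normal(loc=0.6, scale=0.3, size=9)   # the 9 merging systems

statistic, p_value = stats.ks_2samp(offsets_mergers, offsets_typical)
print(f"KS statistic = {statistic:.2f}, p-value = {p_value:.4f}")
# A small p-value (the paper quotes 0.003) says it is unlikely that the two samples
# were drawn from the same underlying population.
```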

The black holes therefore have a larger mass than they should for the stellar velocity dispersion in the galaxy. This suggests that the black hole grows first in a merger, before the bulges of the two galaxies have fully merged and settled into a gravitationally stable structure (virialized). Although measuring the velocity dispersion in a bulge that is really two bulges merging is difficult and can produce errors in the measurement, simulations have shown that the velocity dispersion will only be underestimated by about 50%, an amount which is not significant enough to change these results.

The authors also consider whether there has been enough time since the merger began for these black holes to grow so massive. Assuming that both galaxies used to lie on typical scaling relations for the black hole mass, and that the black holes accreted at the typical (Eddington) rate, they find that the growth should have taken somewhere in the range of a few tens to a hundred million years – a time much less than the simulated time for a merger to happen.
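To see why tens of millions of years is the natural scale here, a back-of-the-envelope sketch of Eddington-limited growth follows, assuming a radiative efficiency of 0.1 (my own numbers, not a calculation from the paper): the mass grows exponentially on the Salpeter e-folding time.

```python
# Back-of-the-envelope Eddington-limited growth (my own numbers, not the paper's
# calculation), assuming a radiative efficiency eps = 0.1. The mass grows as
# M(t) = M0 * exp(t / t_salpeter), with
# t_salpeter = eps * sigma_T * c / (4 * pi * G * m_p * (1 - eps)).
import math

sigma_T = 6.652e-29   # Thomson cross-section, m^2
c       = 2.998e8     # speed of light, m/s
G       = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
m_p     = 1.673e-27   # proton mass, kg
yr      = 3.156e7     # seconds per year

eps = 0.1             # assumed radiative efficiency
t_salpeter = eps * sigma_T * c / (4 * math.pi * G * m_p * (1 - eps))
print(f"Salpeter e-folding time ~ {t_salpeter / yr / 1e6:.0f} Myr")

growth = 10.0         # assumed factor by which the black hole mass must grow
print(f"time to grow {growth:.0f}x at the Eddington rate ~ "
      f"{t_salpeter * math.log(growth) / yr / 1e6:.0f} Myr")
```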

A second consideration is how long it will take for these galaxies to virialize and for their velocity dispersion to increase to bring each one back onto the typical scaling relation with the black hole mass. They consider how many more stars are needed to form in order for the velocity dispersion in the bulge to reach the required value. Taking measured star formation rates of these galaxies gives a range of timescales of about 1-2 billion years which is consistent with simulated merger timescales. It is therefore plausible that these galaxies can return to the black hole mass-velocity dispersion relation by the time they have finished merging.

The authors conclude therefore that black hole fueling and growth begins in the early stages of a merger and can outpace the formation of the bulge and any bursts in star formation. To confirm this result measurements of a much larger sample of currently merging galaxies needs to be taken – the question is, where do we look?

by Becky Smethurst at February 26, 2015 02:11 PM

Symmetrybreaking - Fermilab/SLAC

From the Standard Model to space

A group of scientists who started at particle physics experiments move their careers to the final frontier.

As a member of the ATLAS experiment at the Large Hadron Collider, Ryan Rios spent 2007 to 2012 surrounded by fellow physicists.

Now, as a senior research engineer for Lockheed Martin at NASA’s Johnson Space Center, he still sees his fair share.

He’s not the only scientist to have made the leap from experimenting on Earth to keeping astronauts safe in space. Rios works on a small team that includes colleagues with backgrounds in physics, biology, radiation health, engineering, information technology and statistics.

“I didn’t really leave particle physics, I just kind of changed venues,” Rios says. “A lot of the skillsets I developed on ATLAS I was able to transfer over pretty easily.”

The group at Johnson Space Center supports current and planned crewed space missions by designing, testing and monitoring particle detectors that measure radiation levels in space.

Massive solar flares and other solar events that accelerate particles, other sources of cosmic radiation, and weak spots in Earth’s magnetic field can all pose radiation threats to astronauts. Members of the radiation group provide advisories on such sources. This makes it possible to warn astronauts, who can then seek shelter in heavier-shielded areas of the spacecraft.

Johnson Space Center has a focus on training and supporting astronauts and planning for future crewed missions. Rios has done work for the International Space Station and the robotic Orion mission that launched in December as a test for future crewed missions. His group recently developed a new radiation detector for the space station crew.

Rios worked at CERN for four years as a graduate student and postdoc at Southern Methodist University in Dallas. At CERN he was introduced to a physics analysis platform called ROOT, which is also used at NASA. Some of the particle detectors he works with now were developed by a CERN-based collaboration.

Fellow Johnson Space Center worker Kerry Lee wound up as a group lead for radiation operations after using ROOT during his three years as a summer student on the Collider Detector at Fermilab, or CDF, experiment.

“As a kid, I just knew I wanted to work at NASA,” says Lee, who grew up in rural Wyoming. He pursued an education in engineering physics and “enjoyed the physics part more than the engineering.” He received a master’s degree in particle physics at Texas Tech University.

A professor there helped him attain his current position. “He asked me what I really wanted to do in life,” Lee says, “and I told him, ‘NASA.’”

He worked on data analysis for a detector aboard the robotic Mars Odyssey mission, which flew in 2001. “The tools I learned at Fermilab for data analysis were perfectly applicable for the analysis on this detector,” he says.

One of his most enjoyable roles was training astronauts to use radiation-monitoring equipment in space.

“Every one of the crew members would come through [for training],” he says. “Meeting the astronauts is very exciting—it is always a diverse and interesting group of people. I really enjoy that part of the job.”

Physics was also the starting point for Martin Leitgab, a senior research engineer who joined the Johnson Space Center group in 2013. As a PhD student, Leitgab worked at the PHENIX detector at Brookhaven National Laboratory’s Relativistic Heavy Ion Collider. He also took part in the Belle Collaboration at the KEK B-factory in Japan.

A native of Austria who had attended the University of Illinois at Urbana-Champaign, Leitgab says his path to NASA was fairly roundabout.

“When I finished my PhD work I was at a crossroads—I did not have a master plan,” he says.

He says he became interested in aerospace and wrote some papers related to solar power in space. His wife is from Texas, so Johnson Space Center seemed to be a good fit.

“My job is to make sure that the detector built for the International Space Station works as it should, and to get data out of it,” he says. “It’s very similar to what I did before… The hardware is very different, but the experimental approach in testing and debugging detectors, debugging the software that reads out the data from the detectors and determining the system efficiency and calibration—that’s pretty much a one-to-one comparison with high-energy physics detectors work.”

Leitgab, Lee and Rios all say the small teams and tight, product-driven deadlines at NASA represent a departure from the typically massive collaborations for major particle physics experiments. But other things are very familiar: For example, NASA’s extensive collection of acronyms.

Rios says he relishes his new role but is glad to have worked on one of the experiments that in 2012 discovered the Higgs boson. “At the end of the day, I had the opportunity to work on a very huge discovery—probably the biggest one of the 21st century we’ll see,” he says.

 

Like what you see? Sign up for a free subscription to symmetry!

by Glenn Roberts Jr. at February 26, 2015 02:00 PM

February 25, 2015

Jester - Resonaances

Persistent trouble with bees
No, I still have nothing to say about colony collapse disorder... this blog will stick to physics for at least 2 more years. This is an update on the anomalies in B decays reported by the LHCbee experiment. The two most important ones are:

  1. The  3.7 sigma deviation from standard model predictions in the differential distribution of the B➝K*μ+μ- decay products.
  2.  The 2.6 sigma violation of lepton flavor universality in B+→K+l+l- decays. 

The first anomaly is statistically more significant. However, the theoretical error of the standard model prediction is not trivial to estimate and the significance of the anomaly is subject to fierce discussions. Estimates in the literature range from 4.5 sigma to 1 sigma, depending on what is assumed about QCD uncertainties. For this reason, the second anomaly made this story much more intriguing. In that case, LHCb measures the ratio of the decay rates with muons and with electrons: B+→K+μ+μ- vs B+→K+e+e-. This observable is theoretically clean, as large QCD uncertainties cancel in the ratio. Of course, 2.6 sigma significance is not too impressive; LHCb once had a bigger anomaly (remember CP violation in D meson decays?) that is now long gone. But it's fair to say that the two anomalies together are marginally interesting.

One nice thing is that both anomalies can be explained at the same time by a simple modification of the standard model. Namely, one needs to add the 4-fermion coupling between a b-quark, an s-quark, and two muons:
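A representative form of such an operator (my notation; the exact chirality and Lorentz structure preferred by the fits may differ) is

$$ \mathcal{L}_{\text{eff}} \;\supset\; \frac{1}{\Lambda^2}\,(\bar{b}\,\gamma_\mu P_L\, s)\,(\bar{\mu}\,\gamma^\mu \mu) \;+\; \text{h.c.} $$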

with Λ of order 30 TeV. Just this one extra coupling greatly improves a fit to the data, though other similar couplings could be simultaneously present. The 4-fermion operators can be an effective description of new heavy particles coupled to quarks and leptons.  For example, a leptoquark (scalar particle with a non-zero color charge and lepton number) or a Z'  (neutral U(1) vector boson) with mass in a few TeV range have been proposed. These are of course simple models created ad-hoc. Attempts to put these particles in a bigger picture of physics beyond  the standard model have not been very convincing so far, which may be one reason why the anomalies are viewed a bit skeptically. The flip side is that, if the anomalies turn out to be real, this will point to unexpected symmetry structures around the corner.

Another nice element of this story is that it will be possible to acquire additional relevant information in the near future. The first anomaly is based on just 1 fb-1 of LHCb data, and it will be updated to full 3 fb-1 some time this year. Furthermore, there are literally dozens of other B decays where the 4-fermion operators responsible for the anomalies could  also show up. In fact, there may already be some hints that this is happening. In the table borrowed from this paper we can see that there are several other  2-sigmish anomalies in B-decays that may possibly have the same origin. More data and measurements in  more decay channels should clarify the picture. In particular, violation of lepton flavor universality may come together with lepton flavor violation.  Observation of decays forbidden in the standard model, such as B→Keμ or  B→Kμτ, would be a spectacular and unequivocal signal of new physics.

by Jester (noreply@blogger.com) at February 25, 2015 08:32 PM

arXiv blog

Data Mining Indian Recipes Reveals New Food Pairing Phenomenon

By studying the network of links between Indian recipes, computer scientists have discovered that the presence of certain spices makes a meal much less likely to contain ingredients with flavors in common.


The food pairing hypothesis is the idea that ingredients that share the same flavors ought to combine well in recipes. For example, the English chef Heston Blumenthal discovered that white chocolate and caviar share many flavors and turn out to be a good combination. Other unusual combinations that seem to confirm the hypothesis include strawberries and peas, asparagus and butter, and chocolate and blue cheese.

February 25, 2015 06:05 PM

The n-Category Cafe

Concepts of Sameness (Part 3)

Now I’d like to switch to pondering different approaches to equality. (Eventually I’ll have put all these pieces together into a coherent essay, but not yet.)

We tend to think of $x = x$ as a fundamental property of equality, perhaps the most fundamental of all. But what is it actually used for? I don’t really know. I sometimes joke that equations of the form $x = x$ are the only really true ones — since any other equation says that different things are equal — but they’re also completely useless.

But maybe I’m wrong. Maybe equations of the form $x = x$ are useful in some way. I can imagine one coming in handy at the end of a proof by contradiction where you show some assumptions imply $x \ne x$. But I don’t remember ever doing such a proof… and I have trouble imagining that you ever need to use a proof of this style.

If you’ve used the equation $x = x$ in your own work, please let me know.

To explain my question a bit more precisely, it will help to choose a specific formalism: first-order classical logic with equality. We can get the rules for this system by taking first-order classical logic with function symbols and adding a binary predicate “$=$” together with three axiom schemas:

1. Reflexivity: for each variable $x$,

$$ x = x $$

2. Substitution for functions: for any variables $x, y$ and any function symbol $f$,

$$ x = y \;\implies\; f(\dots, x, \dots) = f(\dots, y, \dots) $$

3. Substitution for formulas: For any variables $x, y$ and any formula $\phi$, if $\phi'$ is obtained by replacing any number of free occurrences of $x$ in $\phi$ with $y$, such that these remain free occurrences of $y$, then

$$ x = y \;\implies\; (\phi \;\implies\; \phi') $$

Where did symmetry and transitivity of equality go? They can actually be derived!

For transitivity, use ‘substitution for formulas’ and take $\phi$ to be $x = z$, so that $\phi'$ is $y = z$. Then we get

$$ x = y \;\implies\; (x = z \;\implies\; y = z) $$

This is almost transitivity. From this we can derive

$$ (x = y \;\&\; x = z) \;\implies\; y = z $$

and from this we can derive the usual statement of transitivity

$$ (x = y \;\&\; y = z) \;\implies\; x = z $$

by choosing different names of variables and using symmetry of equality.

But how do we get symmetry? We can derive this using reflexivity and substitution for formulas. Take $\phi$ to be $x = x$ and take $\phi'$ to be the result of substituting the first instance of $x$ with $y$: that is, $y = x$. Then we get

$$ x = y \;\implies\; (x = x \;\implies\; y = x) $$

Using $x = x$, we can derive

$$ x = y \;\implies\; y = x $$

This is the only time I remember using $x = x$ to derive something! So maybe this equation is good for something. But if proving symmetry and transitivity of equality is the only thing it’s good for, I’m not very impressed. I would have been happy to take both of these as axioms, if necessary. After all, people often do.
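Incidentally, the same two derivations go through almost verbatim in a proof assistant; here is a minimal Lean 4 sketch (my own, not part of the post), where `rfl` plays the role of reflexivity and the rewriting tactic `rw` plays the role of substitution for formulas:

```lean
-- Symmetry and transitivity of equality, derived from reflexivity (`rfl`) and
-- substitution (the `rw` tactic), mirroring the derivations in the text.

theorem symm_of_refl_subst {α : Type} {x y : α} (h : x = y) : y = x := by
  rw [h]        -- rewrite x to y in the goal `y = x`, leaving `y = y`, closed by rfl

theorem trans_of_refl_subst {α : Type} {x y z : α} (h₁ : x = y) (h₂ : y = z) : x = z := by
  rw [h₁, h₂]   -- rewrite x to y, then y to z, leaving `z = z`, closed by rfl
```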

So, just to get the conversation started, I’ll conjecture that reflexivity of equality is completely useless if we include symmetry of equality in our axioms. Namely:

Conjecture. Any theorem in classical first-order logic with equality that does not include a subformula of the form $x = x$ for any variable $x$ can also be derived from a variant where we drop reflexivity, keep substitution for functions and substitution for formulas, and add this axiom schema:

1’. Symmetry: for any variables $x$ and $y$,

$$ x = y \;\implies\; y = x $$

Proof theorists: can you show this is true, or find a counterexample? We’ve seen that we can get transitivity from this setup, and then I don’t really see how it hurts to omit $x = x$. I may be forgetting something, though!

by john (baez@math.ucr.edu) at February 25, 2015 04:10 PM

The n-Category Cafe

Concepts of Sameness (Part 2)

I’m writing about ‘concepts of sameness’ for Elaine Landry’s book Category Theory for the Working Philosopher. After an initial section on a passage by Heraclitus, I had planned to write a bit about Gongsun Long’s white horse paradox — or more precisely, his dialog When a White Horse is Not a Horse.

However, this is turning out to be harder than I thought, and more of a digression than I want. So I’ll probably drop this plan. But I have a few preliminary notes, and I might as well share them.

Gongsun Long

Gongsun Long was a Chinese philosopher who lived from around 325 to 250 BC. Besides the better-known Confucian and Taoist schools of Chinese philosophy, another important school at this time was the Mohists, who were more interested in science and logic. Gongsun Long is considered a member of the Mohist-influenced ‘School of Names’: a loose group of logicians, not really a school in any real sense. They are remembered largely for their paradoxes: for example, they independently invented a version of Zeno’s paradox.

As with Heraclitus, most of Gongsun Long’s writings are lost. Joseph Needham [N] has written that this is one of the worst losses of ancient Chinese texts, which in general have survived much better than the Greek ones. The Gongsun Longzi is a text that originally contained 14 of his essays. Now only six survive. The second essay discusses the question “when is a white horse not a horse?”

The White Horse Paradox

When I first heard this ‘paradox’ I didn’t get it: it just seemed strange and silly, not a real paradox. I’m still not sure I get it. But I’ve decided that’s what makes it interesting: it seems to rely on modes of thought, or speech, that are quite alien to me. What counts as a ‘paradox’ is more culturally specific than you might realize.

If a few weeks ago you’d asked me how the paradox goes, I might have said something like this:

A white horse is not a horse, because where there is whiteness, there cannot be horseness, and where there is horseness there cannot be whiteness.

However this is inaccurate because there was no word like ‘whiteness’ (let alone ‘horseness’) in classical Chinese.

Realizing that classical Chinese does not have nouns and adjectives as separate parts of speech may help explain what’s going on here. To get into the mood for this paradox, we shouldn’t think of a horse as a thing to which the predicate ‘whiteness’ applies. We shouldn’t think of the world as consisting of things $x$ and, separately, predicates $P$, which combine to form assertions $P(x)$. Instead, both ‘white’ and ‘horse’ are on more of an equal footing.

I like this idea because it suggests that predicate logic arose in the West thanks to peculiarities of Indo-European grammar that aren’t shared by all languages. This could free us up to have some new ideas.

Here’s how the dialog actually goes. I’ll use Angus Graham’s translation because it tries hard not to wash away the peculiar qualities of classical Chinese:

Is it admissible that white horse is not-horse?

S. It is admissible.

O. Why?

S. ‘Horse’ is used to name the shape; ‘white’ is used to name the color. What names the color is not what names the shape. Therefore I say white horse is not horse.

O. If we take horses having color as nonhorse, since there is no colorless horse in the world, can we say there is no horse in the world?

S. Horse obviously has color, which is why there is white horse. Suppose horse had no color, then there would just be horse, and where would you find white horse. The white is not horse. White horse is white and horse combined. Horse and white is horse, therefore I say white horse is non-horse.

(Chad Hansen writes: “Most commentaries have trouble with the sentence before the conclusion in F-8, “horse and white is horse,” since it appears to contradict the sophist’s intended conclusion. But recall the Mohists asserted that ox-horse both is and is not ox.” I’m not sure if that helps me, but anyway….)

O. If it is horse not yet combined with white which you deem horse, and white not yet combined with horse which you deem white, to compound the name ‘white horse’ for horse and white combined together is to give them when combined their names when uncombined, which is inadmissible. Therefore, I say, it is inadmissible that white horse is not horse.

S. ‘White’ does not fix anything as white; that may be left out of account. ‘White horse’ has ‘white’ fixing something as white; what fixes something as white is not ‘white’. ‘Horse’ neither selects nor excludes any colors, and therefore it can be answered with either yellow or black. ‘White horse’ selects some color and excludes others, and the yellow and the black are both excluded on grounds of color; therefore one may answer it only with white horse. What excludes none is not what excludes some. Therefore I say: white horse is not horse.

One possible anachronistic interpretation of the last passage is

The set of white horses is not equal to the set of horses, so “white horse” is not “horse”.

This makes sense, but it seems like a way of saying we can have $S \subseteq T$ while also $S \ne T$. That would be a worthwhile observation around 300 BC — and it would even be worth trying to get people upset about this, back then! But it doesn’t seem very interesting today.

A more interesting interpretation of the overall dialog is given by Chad Hansen [H]. He argues that to understand it, we should think of both ‘white’ and ‘horse’ as mass nouns or ‘kinds of stuff’.

The issue of how two kinds of stuff can be present in the same place at the same time is a bit challenging — we see Plato battling with it in the Parmenides — and in some sense western mathematics deals with it by switching to a different setup, where we have a universe of entities $x$ of which predicates $P$ can be asserted. If $x$ is a horse and $P$ is ‘being white’, then $P(x)$ says the horse is white.

However, then we get Leibniz’s principle of the ‘indistinguishability of indiscernibles’, which is a way of defining equality. This says that $x = y$ if and only if $P(x) \iff P(y)$ for all predicates $P$. By this account, an entity really amounts to nothing more than the predicates it satisfies!
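One direction of this principle, that satisfying the same predicates forces equality, is a one-liner when formalized; here is a small Lean 4 sketch of it (my own illustration, not part of the essay):

```lean
-- Leibniz's definition of equality, one direction: if x and y satisfy exactly
-- the same predicates, then x = y. The trick is to apply the hypothesis to the
-- predicate "being equal to x".
theorem eq_of_indiscernible {α : Type} {x y : α}
    (h : ∀ P : α → Prop, P x ↔ P y) : x = y :=
  (h (fun z => x = z)).mp rfl
```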

This is where equality comes in — but as I said, all of this is seeming like too much of a distraction from my overall goals for this essay right now.

Notes

  • [N] Joseph Needham, Science and Civilisation in China vol. 2: History of Scientific Thought, Cambridge U. Press, Cambridge, 1956, p. 185.

  • [H] Chad Hansen, Mass nouns and “A white horse is not a horse”, Philosophy East and West 26 (1976), 189–209.

by john (baez@math.ucr.edu) at February 25, 2015 04:06 AM

February 24, 2015

Symmetrybreaking - Fermilab/SLAC

Physics in fast-forward

During their first run, experiments at the Large Hadron Collider rediscovered 50 years' worth of physics research in a single month.

In 2010, the brand-spanking-new CMS and ATLAS detectors started taking data for the first time. But the question physicists asked was not, “Where is the Higgs boson?” but rather “Do these things actually work?”

“Each detector is its own prototype,” says UCLA physicist Greg Rakness, run coordinator for the CMS experiment. “We don’t get trial runs with the LHC. As soon as the accelerator fires up, we’re collecting data.”

So LHC physicists searched for a few old friends: previously discovered particles.

“We can’t say we found a new particle unless we find all the old ones first,” says Fermilab senior scientist Dan Green. “Well, you can, but you would be wrong.”

Rediscovering 50 years' worth of particle physics research allowed LHC scientists to calibrate their rookie detectors and appraise their experiments’ reliability.

The CMS collaboration produced this graph using data from the first million LHC particle collisions identified as interesting by the experiment's trigger. It represents the instances in which the detector saw a pair of muons.

Muons are heavier versions of electrons. The LHC can produce muons in its particle collisions. It can also produce heavier particles that decay into muon pairs.

On the x-axis of the graph is the combined mass of two muons that appeared simultaneously in the aftermath of a high-energy LHC collision. On the y-axis is the number of times scientists saw each muon+muon mass combination.
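Concretely, the x-axis quantity is the invariant mass of each muon pair, computed from the two measured four-momenta; here is a minimal Python sketch with made-up momenta (not CMS data):

```python
# Invariant mass of a muon pair from its two four-momenta, in natural units (GeV):
# m^2 = (E1 + E2)^2 - |p1 + p2|^2. The momenta below are made up for illustration;
# they are not CMS data.
import math

M_MU = 0.1057  # muon mass, GeV

def four_momentum(px, py, pz, m=M_MU):
    """Build (E, px, py, pz) for a particle of mass m with the given 3-momentum."""
    return (math.sqrt(px**2 + py**2 + pz**2 + m**2), px, py, pz)

def invariant_mass(p1, p2):
    E, px, py, pz = (p1[i] + p2[i] for i in range(4))
    return math.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0))

mu_plus  = four_momentum(30.0, 12.0, -5.0)    # hypothetical muon momentum, GeV
mu_minus = four_momentum(-25.0, -8.0, 20.0)   # hypothetical muon momentum, GeV

print(f"dimuon invariant mass = {invariant_mass(mu_plus, mu_minus):.1f} GeV")
```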

On top of a large and raggedy-looking half-parabola, six sharp peaks emerge.

“Each peak represents a parent particle, which was produced during the collision and then spat out two muons during its decay,” Green says.

When muon pairs appear at a particular mass more often than random chance can explain, scientists can deduce that there must be some other process tipping the scale. This is how scientists find new particles and processes—by looking for an imbalance in the data and then teasing out the reason why.
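A toy version of that reasoning, with invented counts rather than anything from the CMS analysis, might look like this: estimate the smooth background under a candidate peak from neighbouring bins, then ask how large the excess is compared with the expected statistical fluctuation.

```python
# Toy version of "more often than random chance can explain": estimate the smooth
# background under a candidate peak from neighbouring (sideband) bins, then compare
# the observed count with the expected Poisson fluctuation, Z ~ (N_obs - N_bkg)/sqrt(N_bkg).
# All counts are invented; this is not the CMS analysis.
import math

sideband_counts    = [980, 1015, 1002, 990, 1008, 995]  # bins away from the peak
peak_window_counts = [1450, 1520, 1480]                 # bins under the candidate peak

bkg_per_bin = sum(sideband_counts) / len(sideband_counts)
n_expected  = bkg_per_bin * len(peak_window_counts)
n_observed  = sum(peak_window_counts)

z = (n_observed - n_expected) / math.sqrt(n_expected)
print(f"expected {n_expected:.0f}, observed {n_observed}, excess ~ {z:.1f} sigma")
```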

Each of the six peaks on this graph can be traced back to a well-known particle that decays to two muons.

Courtesy of: Dan Green

 

  • The rho [ρ] was discovered in 1961.
  • The J-psi [J/ψ] was discovered in 1974 (and earned a Nobel Prize for experimenters at the Massachusetts Institute of Technology and SLAC National Accelerator Laboratory).
  • The upsilon [Υ] was discovered in 1977.
  • The Z was discovered in 1983 (and earned a Nobel Prize for experimenters at CERN).

What originally took years of work and multiple experiments to untangle, the CMS and ATLAS collaborations rediscovered after only about a month.

“The LHC is higher energy and produces a lot more data than earlier accelerators,” Green says. “It’s like going from a garden hose to a fire hose. The data comes in amazingly fast.”

But even the LHC has its limitations. On the far-right side, the graph stops looking like a half-parabola and starts looking like a series of short, jutting lines.

“It looks chaotic because we just didn’t have enough data for events at higher masses,” Green says. “Eventually, we would expect to see a peak representing the Higgs decaying to two muons popping up at around 125 GeV. But we just hadn’t produced enough high-mass muons to see it yet.”

Over the summer, the CMS and ATLAS detectors will resume taking data—this time with collisions containing 60 percent more energy. Green says he and his colleagues are excited to push the boundaries of this graph to see what lies just out of reach.

 

Like what you see? Sign up for a free subscription to symmetry!

 

by Sarah Charley at February 24, 2015 09:42 PM

Clifford V. Johnson - Asymptotia

Simulated meets Real!
Here's a freshly minted Oscar winner who played a scientist surrounded by... scientists! I'm with fellow physicists Erik Verlinde, Maria Spiropulu, and David Saltzberg at an event last month. Front centre are of course actors Eddie Redmayne (Best Actor winner 2015 for Theory of Everything) and Felicity Jones (Best Actress - nominee) along with the screenwriter of the film, Anthony McCarten. The British Consul-General Chris O'Connor is on the right. (Photo was courtesy of Getty Images.) [...] Click to continue reading this post

by Clifford at February 24, 2015 02:06 AM

February 23, 2015

Sean Carroll - Preposterous Universe

I Wanna Live Forever

If you’re one of those people who look the universe in the eyeball without flinching, choosing to accept uncomfortable truths when they are supported by the implacable judgment of Science, then you’ve probably acknowledged that sitting is bad for you. Like, really bad. If you’re not convinced, the conclusions are available in helpful infographic form; here’s an excerpt.

[Infographic excerpt: the health risks of prolonged sitting]

And, you know, I sit down an awful lot. Doing science, writing, eating, playing poker — my favorite activities are remarkably sitting-based.

So I’ve finally broken down and done something about it. On the good advice of Carl Zimmer, I’ve augmented my desk at work with a Varidesk on top. The desk itself was formerly used by Richard Feynman, so I wasn’t exactly going to give that up and replace it with a standing desk. But this little gizmo lets me spend most of my time at work on my feet instead of sitting on my butt, while preserving the previous furniture.


It’s a pretty nifty device, actually. Room enough for my laptop, monitor, keyboard, mouse pad, and the requisite few cups for coffee. Most importantly for a lazybones like me, it doesn’t force you to stand up absolutely all the time; gently pull some handles and the whole thing gently settles down to desktop level, ready for your normal chair-bound routine.


We’ll see how the whole thing goes. It’s one thing to buy something that allows you to stand while working, it’s another to actually do it. But at least I feel like I’m trying to be healthier. I should go have a sundae to celebrate.

by Sean Carroll at February 23, 2015 09:08 PM

ZapperZ - Physics and Physicists

Which Famous Physicist Should Be Depicted In The Movie Next?
Eddie Redmayne won the Oscar last night for his portrayal of Stephen Hawking in the movie "The Theory of Everything". So this got me into thinking of which famous physicist should be portrayed next in a movie biography. Hollywood won't choose someone who isn't eccentric, famous, or in the news. So that rules out a lot.

I would think that Richard Feynman would make a rather compelling biographical movie. He certainly was a very complex person, and definitely not boring. They could give the movie a title of "Sure You Must Be Joking", or "Six Easy Pieces", or "Shut Up And Calculate", although the latter may not be entirely attributed to Feynman.

Hollywood, I'm available for consultation!

Zz.

by ZapperZ (noreply@blogger.com) at February 23, 2015 08:10 PM

Symmetrybreaking - Fermilab/SLAC

Video: LHC experiments prep for restart

Engineers and technicians have begun to close experiments in preparation for the next run.

The LHC is preparing to restart at almost double the collision energy of its previous run. The new energy will allow physicists to check previously untestable theories and explore new frontiers in particle physics.

When the LHC is on, counter-rotating beams of particles will be made to collide at four interaction points 100 meters underground, around which sit the huge detectors ALICE, ATLAS, CMS and LHCb.

In the video below, engineers and technicians prepare these four detectors to receive the showers of particles that will be created in collisions at energies of 13 trillion electronvolts.

The giant endcaps of the ATLAS detector are back in position and the wheels of the CMS detector are moving it back into its “closed” configuration. The huge red door of the ALICE experiment is closed up ready for restart, and the access door to the LHC tunnel is sealed with concrete blocks.


A version of this article was published by CERN.

 

Like what you see? Sign up for a free subscription to symmetry!

by Cian O'Luanaigh at February 23, 2015 04:50 PM

arXiv blog

Computational Anthropology Reveals How the Most Important People in History Vary by Culture

Data mining Wikipedia's pages about people reveals some surprising differences in the way Eastern and Western cultures identify important figures in history, say computational anthropologists.

 

February 23, 2015 04:18 PM

February 21, 2015

Jester - Resonaances

Weekend plot: rare decays of B mesons, once again
This weekend's plot shows the measurement of the branching fractions for neutral B and Bs meson decays into muon pairs:
This is not exactly a new thing. LHCb and CMS separately announced evidence for the B0s→μ+μ- decay in summer 2013, and a preliminary combination of their results appeared a few days later. The plot above comes from the recent paper where a more careful combination is performed, though the results change only slightly.

A neutral B meson is a bound state of an anti-b-quark and a d-quark (B0) or an s-quark (B0s), while for an anti-B meson the quark and the antiquark are interchanged. Their decays to μ+μ- are interesting because they are very suppressed in the standard model. At the parton level, the quark-antiquark pair annihilates into a μ+μ- pair. As for all flavor-changing neutral-current processes, the lowest-order diagrams mediating these decays occur at the 1-loop level. On top of that, there is the helicity suppression by the small muon mass, and the CKM suppression by the small Vts (B0s) or Vtd (B0) matrix elements. Beyond the standard model, one or more of these suppression factors may be absent, and the contribution could in principle exceed that of the standard model even if the new particles are as heavy as ~100 TeV. We already know this is not the case for the B0s→μ+μ- decay. The measured branching fraction, (2.8±0.7)x10^-9, agrees within 1 sigma with the standard model prediction of (3.66±0.23)x10^-9. Further reducing the experimental error will be very interesting in view of the observed anomalies in some other b-to-s-quark transitions. On the other hand, the room for new physics to show up is limited, as the theoretical error may soon become a showstopper.

The situation is a bit different for the B0→μ+μ- decay, where there is still relatively more room for new physics. This process has been less in the spotlight. This is partly due to a theoretical prejudice: in most popular new physics models it is impossible to generate a large effect in this decay without generating a corresponding excess in B0s→μ+μ-. Moreover, B0→μ+μ- is experimentally more difficult: the branching fraction predicted by the standard model is (1.06±0.09)x10^-10, which is 30 times smaller than that for B0s→μ+μ-. In fact, 3σ evidence for the B0→μ+μ- decay appears only after combining LHCb and CMS data. More interestingly, the measured branching fraction, (3.9±1.4)x10^-10, is some 2 sigma above the standard model value. Of course, this is most likely a statistical fluke, but nevertheless it will be interesting to see an update once the 13-TeV LHC run collects enough data.
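As a sanity check on the quoted tensions, here is a minimal back-of-the-envelope sketch (my own illustration, not the combination performed in the paper): treat each quoted uncertainty as a symmetric Gaussian error and add the experimental and theoretical errors in quadrature.

from math import sqrt

def pull(measured, err_meas, predicted, err_pred):
    # Difference between measurement and prediction, in units of the combined error.
    return (measured - predicted) / sqrt(err_meas**2 + err_pred**2)

# B0s -> mu+ mu-  (branching fractions in units of 10^-9)
print("B0s: %+.1f sigma" % pull(2.8, 0.7, 3.66, 0.23))   # about -1.2 sigma

# B0  -> mu+ mu-  (branching fractions in units of 10^-10)
print("B0 : %+.1f sigma" % pull(3.9, 1.4, 1.06, 0.09))   # about +2.0 sigma

This crude estimate gives a bit over 1 sigma for B0s and about 2 sigma for B0, in line with the statements above; the published combination uses the full likelihoods, so the exact significances differ slightly.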

by Jester (noreply@blogger.com) at February 21, 2015 06:18 PM

Jester - Resonaances

Do-or-die year
The year 2015 began like any other year... by which I mean the usual hangover situation in particle physics. We have a theory of fundamental interactions - the Standard Model - that we know is certainly not the final theory, because it cannot account for dark matter, the matter-antimatter asymmetry, or cosmic inflation. At the same time, the Standard Model perfectly describes every experiment we have performed here on Earth (up to a few outliers that can be explained as statistical fluctuations). This is puzzling, because some of these experiments are in principle sensitive to very heavy particles, sometimes well beyond the reach of the LHC or any future collider. Theorists cannot offer much help at this point. Until recently, naturalness was the guiding principle in constructing new theories, but few have retained confidence in it, and no other serious paradigm has appeared to replace it. In short, we know for sure there is new physics beyond the Standard Model, but we have absolutely no clue what it is or how much energy is needed to access it.

Yet 2015 is different, because it is the year when the LHC restarts at an energy of 13 TeV. We should expect high-energy collisions sometime in the summer, and around 10 inverse femtobarns of data by the end of the year. This is the last significant energy jump most of us may experience before retirement, so this year is going to be absolutely crucial for the future of particle physics. If, by next Christmas, we don't hear any whispers of anomalies in the LHC data, we will have to brace for tough times ahead. With no energy increase in sight, slow experimental progress, and no theoretical hints of a better theory, particle physics as we know it will be in deep merde.

You may protest that this is too pessimistic. In principle, new physics may show up at the LHC anytime between this fall and the year 2030, by which time 3 inverse attobarns of data will have been accumulated. So the hope will not die completely anytime soon. However, the subjective probability of making a discovery will decrease exponentially as time goes on, as you can see in the attached plot. Without a discovery, the mood will soon plummet, resembling the late Tevatron years rather than the thrill of pushing the energy frontier that we're experiencing now.
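To put the quoted luminosity figures in perspective, here is a minimal sketch (my own illustration; the 1 fb cross section is a made-up placeholder and detection efficiencies are ignored) of how integrated luminosity translates into event counts via N = sigma x L:

# Rough event-count arithmetic for the datasets mentioned above.
lumi_2015 = 10.0     # inverse femtobarns expected by the end of 2015
lumi_2030 = 3000.0   # 3 inverse attobarns = 3000 inverse femtobarns by ~2030

sigma_hypothetical = 1.0   # fb; a made-up new-physics cross section

print("Events by end of 2015: ~%.0f" % (sigma_hypothetical * lumi_2015))   # ~10
print("Events by ~2030:       ~%.0f" % (sigma_hypothetical * lumi_2030))   # ~3000
print("Dataset grows by a factor of %.0f" % (lumi_2030 / lumi_2015))       # 300

So the eventual dataset is roughly 300 times what this year will deliver, which is why the hope cannot be written off immediately even if nothing shows up in 2015.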

But for now, anything may yet happen. Cross your fingers.

by Jester (noreply@blogger.com) at February 21, 2015 06:18 PM

Jester - Resonaances

Weekend plot: spin-dependent dark matter
This weekend's plot is borrowed from a nice recent review of dark matter detection:
It shows experimental limits on the spin-dependent scattering cross section of dark matter on protons. This observable is not where the most spectacular race is happening, but it is important for constraining more exotic models of dark matter. Typically, a scattering cross section in the non-relativistic limit is independent of the spin or velocity of the colliding particles. However, there exist reasonable models of dark matter where the low-energy cross section is more complicated. One possibility is that the interaction strength is proportional to the scalar product of the spin vectors of the dark matter particle and the nucleon (proton or neutron). This is usually referred to as spin-dependent scattering, although other kinds of spin-dependent forces, which also depend on the relative velocity, are possible.

In all existing direct detection experiments, the target contains nuclei rather than single nucleons. Unlike in the spin-independent case, for spin-dependent scattering the cross section is not enhanced by coherent scattering over many nucleons. Instead, the interaction strength is proportional to the expectation values of the proton and neutron spin operators in the nucleus. One can, very roughly, think of this process as scattering on an odd, unpaired nucleon. For this reason, xenon-target experiments such as Xenon100 or LUX are less sensitive to spin-dependent scattering on protons, because xenon nuclei have an even number of protons. In this case, experiments that contain fluorine in their target molecules have the best sensitivity. This is the case for the COUPP, Picasso, and SIMPLE experiments, which currently set the strongest limits on the spin-dependent scattering cross section of dark matter on protons. Still, in absolute numbers, the limits are many orders of magnitude weaker than in the spin-independent case, where LUX has crossed the 10^-45 cm^2 line. The IceCube experiment can set stronger limits in some cases by measuring the high-energy neutrino flux from the Sun, but those limits depend on what dark matter annihilates into and are therefore much more model-dependent than the direct detection limits.
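To see roughly why coherence makes such a difference, here is a minimal sketch (my own illustration, using the textbook simplification of equal couplings to protons and neutrons and ignoring nuclear form factors): the spin-independent rate on a nucleus is coherently enhanced by A^2, while the spin-dependent rate carries only an order-one factor set by the nuclear spin expectation values.

# Rough orders of magnitude, not the analysis behind the plot.
A_xenon = 131                      # mass number of a typical xenon isotope

si_coherent_factor = A_xenon**2    # coherent A^2 enhancement for spin-independent scattering
sd_spin_factor = 1.0               # schematic O(1) spin factor for spin-dependent scattering

print("SI coherent enhancement on xenon: ~%d" % si_coherent_factor)                  # ~17000
print("SD spin factor (schematic):       ~%d" % sd_spin_factor)
print("Ratio:                            ~%d" % (si_coherent_factor / sd_spin_factor))

This factor of order 10^4 per nucleus, together with the fact that xenon has no unpaired proton, is the essential reason why the spin-dependent proton limits lag so far behind the spin-independent ones.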

by Jester (noreply@blogger.com) at February 21, 2015 06:18 PM

Clifford V. Johnson - Asymptotia

Pre-Oscar Bash: Hurrah for Science at the Movies?
It is hard not to get caught up each year in the Oscar business if you live in this town and care about film. If you care about film, you're probably mostly annoyed about the whole thing, because the slate of nominations and eventual winners hardly represents the outcome of careful thought about relative merits and so forth. The trick is to forget being annoyed and either hide from the whole thing or embrace it as a fun, silly thing that does not mean too much. This year, since there have been a number of high-profile films that help raise awareness of and interest in science and scientists, I have definitely not chosen the "hide away" option. Whatever one thinks of how good or bad "The Theory of Everything", "The Imitation Game" and "Interstellar" might be, I think it is simply silly to ignore the fact that it is a net positive thing that they've got millions of people talking about science and science-related things while out on their movie night. That's a good thing, and as I've been saying for the last several months (see e.g. here and here), good enough reason for people interested in science engagement to be at least broadly supportive of the films, because that'll encourage more to be made, and the more such films are made, the better the chances are that even better ones get made. This is all a preface to admitting that I went to one of those fancy pre-Oscar parties last night. It was put on by the British Consul-General in Los Angeles (sort of a follow-up to the one I went to last month, mentioned here) in celebration of the British film industry and the large number of British Oscar [...] Click to continue reading this post

by Clifford at February 21, 2015 06:15 PM

February 20, 2015

Lubos Motl - string vacua and pheno

Barry Kripke wrote a paper on light-cone-quantized string theory
In the S08E15 episode of The Big Bang Theory, Ms Wolowitz died. The characters were sad, and Sheldon was the first one to say something touching. I think it was a decent way to deal with the real-world death of Carol Ann Susi, who provided Ms Wolowitz with her voice.

The departure of Ms Wolowitz abruptly solved a jealousy-ignited argument between Stuart and Howard revolving around the furniture from the Wolowitz house.

Also, if you missed it, Penny learned that she had been getting tests from Amy, who was comparing her intelligence to that of chimps. Penny did pretty well, probably better than Leonard.

But the episode began with the twist described in the title. Barry brought a bottle to Amy because she had previously helped him with a paper he had written, which was apparently very successful.


Kripke revealed that the paper was on the wight-cone quantization (I suppose he meant light-cone quantization) of string theory.

It's funny because some of my best-known papers were about light-cone quantization (in particular, Matrix theory is a light-cone-quantized description of string/M-theory), and I've been a big fan of this (not terribly widely studied) approach to string theory since 1994, when I began to learn string theory at the technical level. There are no bad ghosts (negative-norm states) or local redundancies in that description (well, except for the \(U(N)\) gauge symmetry if we use the matrix model description), which is "very physical" from a certain perspective.
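For readers wondering what the light cone buys you, here is the standard textbook relation behind that statement (my addition, not part of the episode or of any specific paper): in light-cone coordinates

\[
x^\pm = \frac{x^0 \pm x^{d-1}}{\sqrt{2}}, \qquad
2\,p^+ p^- = p_\perp^2 + m^2
\quad\Longrightarrow\quad
p^- = \frac{p_\perp^2 + m^2}{2\,p^+},
\]

so the light-cone energy \(p^-\) is determined algebraically by the transverse data, with \(p^+>0\) and no square root or negative-energy branch. In light-cone gauge string theory, the analogous step solves away the \(X^\pm\) oscillators, leaving only the transverse, positive-norm modes, which is roughly why the description contains no negative-norm states.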

Throughout the episode, Sheldon was jealous and unhappy that he had left string theory. Penny was trying to help him "let it go"; this effort turned against her later. Recall that in April 2014 the writers turned Sheldon into a complete idiot who had only been doing string theory because some classmates had been beating him with a string theory textbook, and who suddenly decided that he no longer considered string theory a vital branch of scientific research.

Yesterday's episode fixed that harm to string theory – but it hasn't really fixed the harm done to Sheldon's image. Nothing against Kripke, but the path that led him to write papers on string theory seems rather bizarre to me. When he appeared in the sitcom for the first time, I was convinced that we had plenty of evidence that he was just a low-energy, low-brow, and perhaps experimental physicist. Such physicists usually can't write papers on light-cone-quantized string theory.

But the writers have gradually transformed the subdiscipline of physics that Kripke is good at (this was not the first episode in which Kripke looked like a theoretical physicist). Of course, this is a twist that I find rather strange and unlikely but what can we do? Despite his speech disorder and somewhat obnoxious behavior, we should praise a new string theorist. Welcome, Barry Kripke. ;-)

An ad:
Adopt your own Greek for €500!

He will do everything that you don't have time to do:

* sleep until 11 am
* regularly have coffee
* honor the siesta after lunch
* spend evenings sitting in the bar

You will finally have the time to work from dawn to dusk.

by Luboš Motl (noreply@blogger.com) at February 20, 2015 06:42 PM