Particle Physics Planet


May 04, 2016

Peter Coles - In the Dark

Farewell to Whitchurch..

One of the things that happened over the Bank Holiday Weekend was the closure of Whitchurch Hospital on April 30th 2016. I read about this here, from which source I also took the photograph below:

Whitchurch-Hospital-2

Whitchurch Hospital was built in 1908 and was originally known as Cardiff City Asylum. After over a hundred years of providing care for the mentally ill – including soldiers treated for shell shock in two world wars,  the remaining patients have now been transferred to site will be redeveloped for residential housing and the remaining inpatients transferred to a brand new psychiatric care unit at Llandough.

It was strange reading about the closure of Whitchurch Hospital. Having spent more time myself there than I wish I had, including an extended period an acute ward, I never thought I would feel nostalgic about the place. Quite apart from the fact that it looked like something out of a Gothic novel, it was in dire need of refurbishment and modernisation. Looking back, however, I have the greatest admiration for the staff who worked there and deep gratitude for the patience and kindness they showed me while I was there.

The first extended period I spent in a psychiatric institution, back in the 1980s, was in Hellingly Hospital in Sussex. That place also had something of the Hammer House of Horror about it. I was completely terrified from the moment I arrived there to the moment I was discharged and don’t feel any nostalgia for it. However, when I recently looked at what it is like now – abandoned and decaying – it gave me more than a shudder.

 


by telescoper at May 04, 2016 01:00 PM

CERN Bulletin

CERN Bulletin Issue No. 17-18/2016
Link to e-Bulletin Issue No. 17-18/2016Link to all articles in this issue No.

May 04, 2016 09:39 AM

May 03, 2016

Christian P. Robert - xi'an's og

global-local mixtures

Anindya Bhadra, Jyotishka Datta, Nick Polson and Brandon Willard have arXived this morning a short paper on global-local mixtures. Although the definition given in the paper (p.1) is rather unclear, those mixtures are distributions of a sample that are marginals over component-wise (local) and common (global) parameters. The observations of the sample are (marginally) exchangeable if not independent.

“The Cauchy-Schlömilch transformation not only guarantees an ‘astonishingly simple’ normalizing constant for f(·), it also establishes the wide class of unimodal densities as global-local scale mixtures.”

The paper relies on the Cauchy-Schlömilch identity

\int_0^\infty f(\{x-g(x)\}^2)\text{d}x=\int_0^\infty f(y^2)\text{d}y\qquad \text{with}\quad g(x)=g^{-1}(x)

a self-inverse function. This generic result proves helpful in deriving demarginalisations of a Gaussian distribution for densities outside the exponential family like Laplace’s. (This is getting very local for me as Cauchy‘s house is up the hill, while Laplace lived two train stations away. Before train was invented, of course.) And for logistic regression. The paper also briefly mentions Etienne Halphen for his introduction of generalised inverse Gaussian distributions, Halphen who was one of the rare French Bayesians, worked for the State Electricity Company (EDF) and briefly with Lucien Le Cam (before the latter left for the USA). Halphen introduced some families of distributions during the early 1940’s, including the generalised inverse Gaussian family, which were first presented by his friend Daniel Dugué to the Académie des Sciences maybe because of the Vichy racial laws… A second result of interest in the paper is that, given a density g and a transform s on positive real numbers that is decreasing and self-inverse, the function f(x)=2g(x-s(x)) is again a density, which can again be represented as a global-local mixture. [I wonder if these representations could be useful in studying the Cauchy conjecture solved last year by Natesh and Xiao-Li.]


Filed under: Books, pictures, Running, Statistics, Travel Tagged: Cauchy, Pierre Simon de Laplace, Schlömilch

by xi'an at May 03, 2016 10:16 PM

Symmetrybreaking - Fermilab/SLAC

EXO-200 resumes its underground quest

The upgraded experiment aims to discover if neutrinos are their own antiparticles.

Science is often about serendipity: being open to new results, looking for the unexpected.

The dark side of serendipity is sheer bad luck, which is what put the Enriched Xenon Observatory experiment, or EXO-200, on hiatus for almost two years.

Accidents at the Department of Energy’s underground Waste Isolation Pilot Project (WIPP) facility near Carlsbad, New Mexico, kept researchers from continuing their search for signs of neutrinos and their antimatter pairs. Designed as storage for nuclear waste, the site had both a fire and a release of radiation in early 2014 in a distant part of the facility from where the experiment is housed. No one at the site was injured. Nonetheless, the accidents, and the subsequent efforts of repair and remediation, resulted in a nearly two-year suspension of the EXO-200 effort.

Things are looking up now, though: Repairs to the affected area of the site are complete, new safety measures are in place, and scientists are back at work in their separate area of the site, where the experiment is once again collecting data. That’s good news, since EXO-200 is one of a handful of projects looking to answer a fundamental question in particle physics: Are neutrinos and antineutrinos the same thing?

The neutrino that wasn't there

Each type of particle has its own nemesis: its antimatter partner. Electrons have positrons—which have the same mass but opposite electric charge—quarks have antiquarks and protons have antiprotons. When a particle meets its antimatter version, the result is often mutual annihilation. Neutrinos may also have antimatter counterparts, known as antineutrinos. However, unlike electrons and quarks, neutrinos are electrically neutral, so antineutrinos look a lot like neutrinos in many circumstances.

In fact, one hypothesis is that they are one and the same. To test this, EXO-200 uses 110 kilograms of liquid xenon (of its 200kg total) as both a particle source and particle detector. The experiment hinges on a process called double beta decay, in which an isotope of xenon has two simultaneous decays, spitting out two electrons and two antineutrinos. (“Beta particle” is a nuclear physics term for electrons and positrons.)

If neutrinos and antineutrinos are the same thing, sometimes the result will be neutrinoless double beta decay. In that case, the antineutrino from one decay is absorbed by the second decay, canceling out what would normally be another antineutrino emission. The challenge is to determine if neutrinos are there or not, without being able to detect them directly.

“Neutrinoless double beta decay is kind of a nuclear physics trick to answer a particle physics problem,” says Michelle Dolinski, one of the spokespeople for EXO-200 and a physicist at Drexel University. It’s not an easy experiment to do.

EXO-200 and similar experiments look for indirect signs of neutrinoless double beta decay. Most of the xenon atoms in EXO-200 are a special isotope containing 82 neutrons, four more than the most common version found in nature. The isotope decays by emitting two electrons, changing the atom from xenon into barium. Detectors in the EXO-200 experiment collect the electrons and measure the light produced when the beta particles are stopped in the xenon. These measurements together are what determine whether double beta decay happened, and whether the decay was likely to be neutrinoless.

EXO-200 isn’t the only neutrinoless double beta decay experiment, but many of the others use solid detectors instead of liquid xenon. Dolinski got her start on the CUORE experiment, a large solid-state detector, but later changed directions in her research.

“I joined EXO-200 as a postdoc in 2008 because I thought that the large liquid detectors were a more scalable solution,” she says. "If you want a more sensitive liquid-state experiment, you can build a bigger tank and fill it with more xenon.”

Neutrinoless or not, double beta decay is very rare. A given xenon atom decays randomly, with an average lifetime of a quadrillion times the age of the universe. However, if you use a sufficient number of atoms, a few of them will decay while your experiment is running.

“We need to sample enough nuclei so that you would detect these putative decays before the researcher retires,” says Martin Breidenbach, one of the EXO-200 project leaders and a physicist at the Department of Energy’s SLAC National Accelerator Laboratory.

But the experiment is not just detecting neutrinoless events. Heavier neutrinos mean more frequent decays, so measuring the rate reveals the neutrino mass — something very hard to measure otherwise.

Prior runs of EXO-200 and other experiments failed to see neutrinoless double beta decay, so either neutrinos and antineutrinos aren’t the same particle after all, or the neutrino mass is small enough to make decays too rare to be seen during the experiment’s lifetime. The current limit for the neutrino mass is less than 0.38 electronvolts—for comparison, electrons are about 500,000 electronvolts in mass.

SLAC National Accelerator Laboratory's Jon Davis checks the enriched xenon storage bottles before the refilling of the TPC.

Brian Dozier, Los Alamos National Laboratory

Working in the salt mines

Cindy Lin is a Drexel University graduate student who spends part of her time working on the EXO-200 detector at the mine. Getting to work is fairly involved.

“In the morning we take the cage elevator half a mile down to the mine,” she says. Additionally, she and the other workers at WIPP have to take a 40-hour safety training to ensure their wellbeing, and wear protective gear in addition to normal lab clothes.

“As part of the effort to minimize salt dust particles in our cleanroom, EXO-200 scientists also cover our hair and wear coveralls,” Lin adds.

The sheer amount of earth over the detector shields it from electrons and other charged particles from space, which would make it too hard to spot the signal from double beta decay. WIPP is carved out of a sodium chloride deposit—the same stuff as table salt—that has very little uranium or the other radioactive minerals you find in solid rock caverns. But it has its drawbacks, too.

“Salt is very dynamic: It moves at the level of centimeters a year, so you can't build a nice concrete structure,” says Breidenbach. To compensate, the EXO-200 team has opted for a more modular design.

The inadvertent shutdown provided extra challenges. EXO-200, like most experiments, isn’t well suited for being neglected for more than a few days at a time. However, Lin and other researchers worked hard to get the equipment running for new data this year, and the downtime also allowed researchers to install some upgraded equipment.

The next phase of the experiment, nEXO, is at a conceptual stage based on what has been learned from EXO200. Experimenters are considering the benefits of moving the project deeper underground, perhaps at a facility like the Sudbury Neutrino Observatory (SNOlab) in Canada. Dolinski is optimistic that if there are any neutrinoless double beta decays to see, nEXO or similar experiments should see them in the next 15 years or so.

Then, maybe we’ll know if neutrinos and antineutrinos are the same and find out more about these weird low-mass particles.

by Matthew R. Francis at May 03, 2016 04:28 PM

Axel Maas - Looking Inside the Standard Model

Digging into a particle
This time I would like to write about a new paper which I have just put out. In this paper, I investigate a particular class of particles.

This class of particles is actually quite similar to the Higgs boson. I. e. the particles are bosons and they have the same spin as the Higgs boson. This spin is zero. This class of particles is called scalars. These particular sclars also have the same type of charges, they interact with the weak interaction.

But there are fundamental differences as well. One is that I have switched off the back reaction between these particles and the weak interactions: The scalars are affected by the weak interaction, but they do not influence the W and Z bosons. I have also switched off the interactions between the scalars. Therefore, no Brout-Englert-Higgs effect occurs. On the other hand, I have looked at them for several different masses. This set of conditions is known as quenched, because all the interactions are shut-off (quenched), and the only feature which remains to be manipulated is the mass.

Why did I do this? There are two reasons.

One is a quite technical reason. Even in this quenched situation, the scalars are affected by quantum corrections, the radiative corrections. Due to them, the mass changes, and the way the particles move changes. These effects are quantitative. And this is precisely the reason to study them in this setting. Being quenched it is much easier to actually determine the quantitative behavior of these effects. Much easier than when looking at the full theory with back reactions, which is a quite important part of our research. I have learned a lot about these quantitative effects, and am now much more confident in how they behave. This will be very valuable in studies beyond this quenched case. As was expected, there was not many surprises found. Hence, it was essentially a necessary but unspectacular numerical exercise.

Much more interesting was the second aspect. When quenching, this theory becomes very different from the normal standard model. Without the Brout-Englert-Higgs effect, the theory actually looks very much like the strong interaction. Especially, in this case the scalars would be confined in bound states, just like quarks are in hadrons. How this occurs is not really understood. I wanted to study this using these scalars.

Justifiable, you may ask why I would do this. Why would I not just have a look at the quarks themselves. There is a conceptual and a technical reason. The conceptual reason is that quarks are fermions. Fermions have non-zero spin, in contrast to scalars. This entails that they are mathematically more complicated. These complications mix in with the original question about confinement. This is disentangled for scalars. Hence, by choosing scalars, these complications are avoided. This is also one of the reasons to look at the quenched case. The back-reaction, irrespective of with quarks or scalars, obscures the interesting features. Thus, quenching and scalars isolates the interesting feature.

The other is that the investigations were performed using simulations. Fermions are much, much more expensive than scalars in such simulations in terms of computer time. Hence, with scalars it is possible to do much more at the same expense in computing time. Thus, simplicity and cost made scalars for this purpose attractive.

Did it work? Well, no. At least not in any simple form. The original anticipation was that confinement should be imprinted into how the scalars move. This was not seen. Though the scalars are very peculiar in their properties, they in no obvious way show confinement. It may still be that there is an indirect way. But so far nobody has any idea how. Though disappointing, this is not bad. It only tells us that our simple ideas were wrong. It also requires us to think harder on the problem.

An interesting observation could be made nonetheless. As said above, the scalars were investigated for different masses. These masses are, in a sense, not the observed masses. What they really are is the mass of the particle before quantum effects are taken into account. These quantum effects change the mass. These changes were also measured. Surprisingly, the measured mass was larger than the input mass. The interactions created mass, even if the input mass was zero. The strong interaction is known to do so. However, it was believed that this feature is strongly tied to fermions. For scalars it was not expected to happen, at least not in the observed way. Actually, the mass is even of a similar size as for the quarks. This is surprising. This implies that the kind of interaction is generically introducing a mass scale.

This triggered for me the question whether the mass scale also survives when having the backcoupling in once more. If it remains even when there is a Brout-Englert-Higgs effect then this could have interesting implications for the mass of the Higgs. But this remains to be seen. It may as well be that this will not endure when not being quenched.

by Axel Maas (noreply@blogger.com) at May 03, 2016 04:21 PM

Peter Coles - In the Dark

50 Years of the Astronomy Centre at the University of Sussex

It is my pleasure to share here the announcement that there will be a  special celebration for the 50th Anniversary of the Astronomy Centre at the University of Sussex whose first students began their studies here in 1966.

Lord Martin Rees – Astronomer Royal, Fellow of Trinity College, Cambridge, Past President of the Royal Society and Sussex Honorary – will be joining alumni and other former faculty for the celebratory lunch and has kindly agreed to deliver a short speech as part of the event.

Organised by the Astronomy Centre and the Development and Alumni Office, and supported by the School of Mathematical and Physical Sciences , this celebration is open to all former students and their partners. Please make a note of the date and time:

Date: Saturday 15th October 2016
Venue: 3rd Floor, Bramber House, University of Sussex
Time: 12 – 3pm
Cost:  £20 per person, to include lunch and refreshments

You can book online here to secure your place(s).

We are very much looking forward to welcoming you back to campus to share in the celebrations. If you are in touch with other alumni or faculty from Sussex who have connections with the Astronomy Centre, please let them know!


by telescoper at May 03, 2016 04:19 PM

Emily Lakdawalla - The Planetary Society Blog

What's up in the solar system, May 2016 edition: Good news in cruise for Juno and ExoMars Trace Gas Orbiter
May 2016 will be yet another month of fairly routine operations across the solar system -- if you can ever use the word "routine" to describe autonomous robots exploring other planets. ExoMars' cruise to Mars has started smoothly, and Juno is only two months away from Jupiter orbit insertion. Earthlings will witness a Mercury transit of the Sun on May 9.

May 03, 2016 04:17 PM

Peter Coles - In the Dark

Afterwards, by Thomas Hardy

When the Present has latched its postern behind my tremulous stay,
And the May month flaps its glad green leaves like wings,
Delicate-filmed as new-spun silk, will the neighbours say,
“He was a man who used to notice such things”?

If it be in the dusk when, like an eyelid’s soundless blink,
The dewfall-hawk comes crossing the shades to alight
Upon the wind-warped upland thorn, a gazer may think,
“To him this must have been a familiar sight.”

If I pass during some nocturnal blackness, mothy and warm,
When the hedgehog travels furtively over the lawn,
One may say, “He strove that such innocent creatures should come to no harm,
But he could do little for them; and now he is gone.”

If, when hearing that I have been stilled at last, they stand at the door,
Watching the full-starred heavens that winter sees,
Will this thought rise on those who will meet my face no more,
“He was one who had an eye for such mysteries”?

And will any say when my bell of quittance is heard in the gloom,
And a crossing breeze cuts a pause in its outrollings,
Till they rise again, as they were a new bell’s boom,
“He hears it not now, but used to notice such things”?

by Thomas Hardy (1840-1928).

 


by telescoper at May 03, 2016 03:45 PM

astrobites - astro-ph reader's digest

Diaries of a Dwarf Planet: What are Those Spots on Ceres?

Picture_Author

Lauren Sgro, PhD student at University of Georgia

Today we have a guest post from Lauren Sgro, who is a second year PhD student at the University of Georgia. Lauren studies the orbits of young, nearby binary star systems. Despite the all-consuming nature of graduate school, she enjoys doing yoga and occasionally hiking up a mountain.

Title: Sublimation in Bright Spots on (1) Ceres

Authors: A. Nathues, M. Hoffmann, M. Schaefer, L. Le Corre, V. Reddy, T. Platz, E. A. Cloutis, U. Christensen, T. Kneissl, J.-Y. Li, K. Mengel, N. Schmedemann, T. Schaefer, C. T. Russell, D. M. Applin, D. L. Buczkowski, M. R. M. Izawa, H. U. Keller, D. P. O’Brien, C. M. Pieters, C. A. Raymond, J. Ripken, P. M. Schenk, B. E. Schmidt, H. Sierks, M. V. Sykes, G. S. Thangjam, J.-B. Vincent.

First Author’s Institution: Max Planck Institute for Solar System Research, Göttingen, Germany.

Status: Accepted to Nature, 21 September 2015.

The largest body in the asteroid belt, Ceres, has been throwing humans for a loop ever since we first turned our telescopes in its direction. For instance, Herschel’s detection of water vapor on Ceres marked the first indisputable detection of water vapor on any object in the asteroid belt. Not to mention Ceres’ lone mountain, dubbed Ahuna Mons, which has scientists stumped trying to model its formation. Now, as NASA’s space probe Dawn completes its first year of observations, another mystery has materialized. What are those bright spots that pepper this dwarf planet?

The surface of Ceres is fairly dark, akin to freshly laid asphalt. However, a recent study counts 130 bright spots distributed over its surface that seem to be more similar in brightness to new concrete. Bright patches such as this hint that Ceres may have an underlying layer of ice. The relatively young Occator crater houses the brightest spot at its center, an area that is almost four times more reflective than any other feature on the alien world. Occator and the second lightest spot, designated ‘Feature A,’ are both labeled in Figure 1.

Figure 1. Enhanced color map of Ceres from the Dawn Framing Camera. Colors correspond to wavelengths as follows: red = 0.96m, green = 0.75m, blue = 0.44m. This composite image from today’s paper represents bright spots as more blue or white than the surrounding material. (Nathues et al. 2015)

So what is this shiny stuff?

Scientists used spectral analysis and absolute reflectance to narrow down the possibilities. Reflectance is a fractional measure of how much light that is hitting a surface is reflected back, which typically changes depending on the wavelength of light we are talking about. Most of Occator’s bright spots, referred to as secondary spots, have a wavelength of maximum reflectance (wavelength of light that is most reflected by a material) that is shorter than that for the average surface, as shown in Figure 2a. In fact this trend applies to most of the bright features found by this survey. But the central most spot inside Occator seems to be an exception to the rule. This region, the brightest source on Ceres, displays a spectrum (Figure 2b) that is entirely different than both the secondary spots and the average Ceres spectrum. Along with the confirmed presence of water vapor, this data suggests that the curious bright substance is either water ice, some sort of salt, or an iron-poor clay mineral.

A closer look at the brightest spot reveals that the most likely compound responsible for this phenomenon is really just dehydrated Epsom salt. As we move away from the central portion of Occator towards the secondary spots near the edge, the reflective material shows signs of alteration. The secondary spots could simply contain a salt that is less hydrated, indicating that the presence of water wanes moving outward from the center of the crater. Feature A’s spectrum supports this theory by matching well to a more dehydrated salt in Figure 2c.

Figure 2. a) The spectra of different features on Ceres, not including the central brightest Occator spot. The brighter spots exhibit more reflective spectra, which indicate brighter material than the average surface. b) Spectrum of the central brightest spot in Occator matched to a combination of the Ceres average and dehydrated salt spectra. c) Spectrum of Feature A matched to a combination of the Ceres average and an even less hydrated salt spectra. Discrepancies are likely due to differences in iron-rich minerals.

Figure 2. a) The spectra of different features on Ceres, not including the central brightest Occator spot. The brighter spots exhibit more reflective spectra, which indicate brighter material than the average surface. b) Spectrum of the central brightest spot in Occator matched to a combination of the Ceres average and dehydrated salt spectra. c) Spectrum of Feature A matched to a combination of the Ceres average and an even less hydrated salt spectra. Discrepancies are likely due to differences in iron-rich minerals. (Nathues et al. 2015)

More evidence of a watery world

Occator itself is a mini mystery machine (cue Scooby Doo). Dawn caught a glimpse of a low altitude haze loitering inside the crater while looking for plumes on Ceres. In this context, a plume is a column of water vapor that erupts from the surface of a planet and suggests the existence of a subterranean ocean, much like that expected to exist beneath the crust of Europa or Enceladus. The failure to detect plumes signifies that any water hidden within Ceres is probably not liquid, ruling out various sorts of geological activity. However, the detection of haze tells its own story. Scientists speculate that this haze may be comprised of water-ice particles and dust, which is an idea supported by Herschel’s confirmation of a relatively nearby water vapor source. The looming fog exhibited diurnal patterns, being distinct around local noon but disappearing near sundown. The haze forms due to the sun’s heat, which warms the surface and causes sublimation of the present water-ice. Rising vapor brings along dust from the surface to create the observed haze, leaving behind salt deposits that manifest as bright stains. Without the presence of the sun, sublimation ceases and the haze disintegrates at dusk.

This hazy and fleeting feature isn’t found everywhere on Ceres. It is likely that in the case of Occator or even Feature A, where the same process is expected to occur, some underground reserve of ice became exposed and thus subject to sublimation. But how could the fresh ice have been revealed? Perhaps an impact was able to permeate the dark crust of Ceres and uncover the reflective ice and salt. This simple theory fits well with the observation that most of the bright spots are associated with impact craters. The less reflective spots scattered across Ceres can then be thought of as places where sublimation has exhausted the uncovered supply of briny water-ice.  

With all this talk of sublimation, it seems that Ceres has more cometary characteristics than expected for an object residing in the asteroid belt. As the largest object that lies between Jupiter and Mars, Ceres gives us clues about the formation of the Solar System. This study suggests that our Solar System may have evolved with less of a distinction between asteroids and comets than previously thought, challenging the long-held belief that asteroids and comets formed entirely separately. Dawn will remain at Ceres for the rest of its mission and indefinitely afterwards, continuing to gather more exciting details about this strange and far-away world. 

by Guest at May 03, 2016 02:28 PM

May 02, 2016

Christian P. Robert - xi'an's og

contemporary issues in hypothesis testing

hipocontemptNext Fall, on 15-16 September, I will take part in a CRiSM workshop on hypothesis testing. In our department in Warwick. The registration is now open [until Sept 2] with a moderate registration free of £40 and a call for posters. Jim Berger and Joris Mulder will both deliver a plenary talk there, while Andrew Gelman will alas give a remote talk from New York. (A terrific poster by the way!)


Filed under: pictures, Statistics, Travel, University life Tagged: Andrew Gelman, Bayes factors, Bayesian foundations, Bayesian statistics, Coventry, CRiSM, England, Fall, hypothesis testing, Jim Berger, Joris Mulder, statistical tests, University of Warwick, workshop

by xi'an at May 02, 2016 10:16 PM

Emily Lakdawalla - The Planetary Society Blog

A Moon for Makemake
The solar system beyond Neptune is full of worlds hosting moons. Now we know that the dwarf planet Makemake has one of its very own.

May 02, 2016 03:54 PM

Peter Coles - In the Dark

Flowers in Bute Park

On  my way back to Brighton after a weekend in Cardiff. I would have lingered for more of this bank holiday Monday but the trains are running to a weird timetable and I didn’t want to get back too late.

Anyway, in lieu of a proper post here’s a picture I took of some of the spring  flowers in Bute Park on Saturday.

image


by telescoper at May 02, 2016 03:11 PM

ZapperZ - Physics and Physicists

Walter Kohn
Walter Kohn, who won the Nobel Prize in Chemistry, has passed away on April 19.

He is considered as the father of Density Functional Theory (DFT). If you have done any computational chemistry or band structure calculation in solid state physics, you will have seen DFT in one form or another. It has become an indispensable technique to be able to accurately arrive at a theoretical description of many systems.

Zz.

by ZapperZ (noreply@blogger.com) at May 02, 2016 03:11 PM

ZapperZ - Physics and Physicists

ITER Is Getting More Expensive And More Delayed
This news report details the cost overruns and the more-and-a-decade delay of ITER.

ITER chief Bernard Bigot said the experimental fusion reactor under construction in Cadarache, France, would not see the first test of its super-heated plasma before 2025 and its first full-power fusion not before 2035.

The biggest lesson from this is how NOT to run a major international collaboration. Any more large science projects like this, and the politicians and the public will understandably be reluctant to support science projects of that scale. The rest of us will suffer for it.

Zz.

by ZapperZ (noreply@blogger.com) at May 02, 2016 02:34 PM

astrobites - astro-ph reader's digest

Mapping Gravity in Stellar Nurseries

Paper TitleGravitational Acceleration and Edge Effects in Molecular Clouds

Authors: G-X Li , A. Burkert, T. Megeath, and F. Wyrowski

First Author’s Institution: University Observatory Munich

Paper Status: Submitted to Astronomy and Astrophysics

Stars form when the densest parts of molecular clouds collapse under their own gravity. Molecular clouds are shaped by the interaction of gravity, turbulencemagnetic fields, and feedback from the energy and pressure exerted by stars. Turbulence is observed by measuring the statistical properties of the gas, magnetic fields can be observed using polarized light, and feedback can be identified by searching for protostellar outflows and other features in the cloud. But ultimately, gravitational collapse forms stars. Today’s paper proposes a method to map gravitational acceleration in molecular clouds.

The Method

Gravity comes from mass (thanks Isaac). To map the gravitational acceleration, we need a map of the mass in a molecular cloud. Molecular clouds are made up of gas and dust. The gas can be mapped using emission lines. The dust can be mapped by its effect on background starlight. The more dust in a cloud the dimmer and redder a background star appears. A map of the dust can be turned into the total column density by assuming an average factor between dust and gas mass. The gravitational acceleration is calculated for each pixel of a molecular cloud by adding the contributions from the mass in all other pixels.

The gravitational acceleration is a three-dimensional property, but observations only tell us the column density on the plane of the sky. We don’t know how the mass is distributed along the line of sight. The authors avoid this problem by assuming that all clouds have a uniform thickness of 0.3 pc. How bad is this assumption? It depends on the geometry of the cloud. A spherically symmetric cloud will have equal mass in front and behind the center of the cloud, contributing equal and opposite gravitational acceleration, so the assumption of uniform thickness is not bad. However, a filamentary cloud may be oriented at different angles to our line of sight, and could be much longer than it appears projected on the sky. These clouds appear more compact than they are, causing us to overestimate the acceleration within them. The authors stress that the acceleration maps are only accurate to an order of magnitude, and should be interpreted qualitatively.

Gravity On the Edge

Gravitational acceleration depends strongly on the shape of the cloud. Figure 1 shows the gravitational acceleration map of a simple disk. This cloud experiences the strongest gravitational acceleration in a ring around the edge of the disk. Figure 2 shows the acceleration in a simple filament. The strongest acceleration in this case is at the short ends of the cloud. Because of these edge effects, gravitational acceleration may enhance star formation in the edges of clouds.

89B9AB0C-22CC-461F-980E-41DB5717EAC4

Figure 1. Uniform disk model of a molecular cloud. (Left) Cloud density is indicated in blue. Red arrows indicate the direction and magnitude of the gravitational acceleration in the cloud. (Right) The magnitude of acceleration is indicated in red. The acceleration is strongest in a ring around the edge of the disk. This effect can be seen in the Perseus molecular cloud (Figure 4).

BB4E4D77-7B21-40F1-9D3D-23B603299250

Figure 2. Filament model of a molecular cloud. The colors and arrows are the same as in Figure 1. In this case, the acceleration is strongest at the short ends of the filament. This effect can be seen in the Pipe Nebula (Figure 3).

Accelerating Star Formation

The authors make gravitational acceleration maps of several real  molecular clouds. The Pipe Nebula, shown in Figure 3, is a filamentary cloud. The acceleration is highest at the ends of the filament, like the model in Figure 2. The authors also show the positions of young stars in the Pipe Nebula. These young stars are mainly found in the areas of highest gravitational acceleration. The Perseus molecular cloud is shown in Figure 4. This cloud resembles the uniform disk model in Figure 1. Again, the gravitational acceleration is highest at the edges of this cloud. Young stars are found preferentially along the edges of the cloud. Both of these examples show that large acceleration may induce star formation. This could be because gas is flowing onto the clouds in these areas, enhancing the density enough to trigger collapse into stars.

Figure 3. The Pipe Nebula. The colors and arrows are the same as in previous figures. The stars show the position of young stars in the cloud. The young stars tend to be found in areas of high acceleration, suggesting that star formation is triggered by gas accreting onto the cloud in these areas. Compare to the filament model in Figure 2.

Figure 3. The Pipe Nebula. The colors and arrows are the same as in previous figures. The stars show the position of young stars in the cloud. The young stars tend to be found in areas of high acceleration, suggesting that star formation is triggered by gas accreting onto the cloud in these areas. Compare to the filament model in Figure 2.

 

Figure 4. Perseus molecular cloud. The colors and arrows are the same as previous figures. The red contours show the region where dense cores of gas - the precursors of stars - are found. The star formation is preferentially occurring on the edge of the cloud, where the acceleration is highest. This effect is also seen in the simple uniform disk model in Figure 1.

Figure 4. Perseus molecular cloud. The colors and arrows are the same as previous figures. The red contours show the region where dense cores of gas – the precursors of stars – are found. The star formation is preferentially occurring on the edge of the cloud, where the acceleration is highest. This effect is also seen in the simple uniform disk model in Figure 1.

The effects of gravitational acceleration may enhance star formation in areas of molecular clouds. But gravity is not the only force at work here. The effect of turbulence and magnetic fields need to be considered to judge the relative importance of gravity. The method presented here is complicated by theuncertainty in the three-dimensional structure of clouds, and the conversion between dust mass and total mass. By mapping the acceleration in simulated clouds, future studies may be able to quantify these uncertainties. Gravity is ultimately responsible for turning gas into stars, so understanding the role that gravity plays in molecular clouds is critical to a complete picture of star formation.

by Jesse Feddersen at May 02, 2016 04:07 AM

May 01, 2016

Christian P. Robert - xi'an's og

auxiliary likelihood-based approximate Bayesian computation in state-space models

With Gael Martin, Brendan McCabe, David T. Frazier, and Worapree Maneesoonthorn, we arXived (and submitted) a strongly revised version of our earlier paper. We begin by demonstrating that reduction to a set of sufficient statistics of reduced dimension relative to the sample size is infeasible for most state-space models, hence calling for the use of partial posteriors in such settings. Then we give conditions [like parameter identification] under which ABC methods are Bayesian consistent, when using an auxiliary model to produce summaries, either as MLEs or [more efficiently] scores. Indeed, for the order of accuracy required by the ABC perspective, scores are equivalent to MLEs but are computed much faster than MLEs. Those conditions happen to to be weaker than those found in the recent papers of Li and Fearnhead (2016) and Creel et al.  (2015).  In particular as we make no assumption about the limiting distributions of the summary statistics. We also tackle the dimensionality curse that plagues ABC techniques by numerically exhibiting the improved accuracy brought by looking at marginal rather than joint modes. That is, by matching individual parameters via the corresponding scalar score of the integrated auxiliary likelihood rather than matching on the multi-dimensional score statistics. The approach is illustrated on realistically complex models, namely a (latent) Ornstein-Ulenbeck process with a discrete time linear Gaussian approximation is adopted and a Kalman filter auxiliary likelihood. And a square root volatility process with an auxiliary likelihood associated with a Euler discretisation and the augmented unscented Kalman filter.  In our experiments, we compared our auxiliary based  technique to the two-step approach of Fearnhead and Prangle (in the Read Paper of 2012), exhibiting improvement for the examples analysed therein. Somewhat predictably, an important challenge in this approach that is common with the related techniques of indirect inference and efficient methods of moments, is the choice of a computationally efficient and accurate auxiliary model. But most of the current ABC literature discusses the role and choice of the summary statistics, which amounts to the same challenge, while missing the regularity provided by score functions of our auxiliary models.


Filed under: Books, pictures, Statistics, University life Tagged: ABC, auxiliary model, consistency, Kalman filter, Melbourne, Monash University, score function, summary statistics

by xi'an at May 01, 2016 10:16 PM

Peter Coles - In the Dark

R.I.P. Harry Kroto (1939-2016)

image

I heard earlier this afternoon of the death at the age of 76 of the distinguished chemist Sir Harry Kroto.

Along with Robert Curl and Richard Smalley,  Harry Kroto was awarded the Nobel Prize for Chemistry in 1996 for the discovery of the C60 structure that became known as Buckminsterfullerene (or the “Buckyball” for short).

Harry had a long association with the University of Sussex and was a regular visitor to the Falmer campus even after he moved to the USA.

I remember first meeting him in the 1988 when, as a new postdoc fresh out of my PhD, I had just taken over organising the Friday seminars for the Astronomy Centre. One speaker called off his talk just an hour before it was due to start so I asked if anyone could suggest someone on campus who might stand in. Someone suggested Harry, whose office was  nearby in the School of Molecular Sciences (now the Chichester Building). I was very nervous as I knocked on his door – Harry was already famous then – and held out very little hope that such a busy man would agree to give a talk with less than an hour’s notice. In fact he accepted immediately and with good grace gave a fine impromptu talk about the possibility that C60 might be a major component of interstellar dust. If only all distinguished people were so approachable and helpful!

I met him in campus more recently a couple of years ago when we met to talk about some work he had been doing on a range of things to do with widening participation in STEM subjects. I remember I had booked  an hour in my calendar but we talked for at least three. He was brimming with ideas and energy then. It’s hard to believe he is no more.

Harry Kroto was a man of very strong views  and he was not shy in expressing them. He cared passionately about science and was a powerful advocate for it. He will be greatly missed.

Rest in peace, Harry Kroto (1939-2016)


by telescoper at May 01, 2016 05:08 PM

April 30, 2016

Clifford V. Johnson - Asymptotia

Wild Thing

wild_thing_april_2016The wildflower patch continues to produce surprises. You never know exactly what's going to come up, and in what quantities. I've been fascinated by this particular flower, for example, which seems to be constructed out of several smaller flowers! What a wonder, and of course, there's just one example of its parent plant in the entire patch, so once it is gone, it's gone.

-cvj Click to continue reading this post

The post Wild Thing appeared first on Asymptotia.

by Clifford at April 30, 2016 03:34 PM

The n-Category Cafe

Relative Endomorphisms

Let <semantics>(M,)<annotation encoding="application/x-tex">(M, \otimes)</annotation></semantics> be a monoidal category and let <semantics>C<annotation encoding="application/x-tex">C</annotation></semantics> be a left module category over <semantics>M<annotation encoding="application/x-tex">M</annotation></semantics>, with action map also denoted by <semantics><annotation encoding="application/x-tex">\otimes</annotation></semantics>. If <semantics>mM<annotation encoding="application/x-tex">m \in M</annotation></semantics> is a monoid and <semantics>cC<annotation encoding="application/x-tex">c \in C</annotation></semantics> is an object, then we can talk about an action of <semantics>m<annotation encoding="application/x-tex">m</annotation></semantics> on <semantics>c<annotation encoding="application/x-tex">c</annotation></semantics>: it’s just a map

<semantics>α:mcc<annotation encoding="application/x-tex">\alpha : m \otimes c \to c</annotation></semantics>

satisfying the usual associativity and unit axioms. (The fact that all we need is an action of <semantics>M<annotation encoding="application/x-tex">M</annotation></semantics> on <semantics>C<annotation encoding="application/x-tex">C</annotation></semantics> to define an action of <semantics>m<annotation encoding="application/x-tex">m</annotation></semantics> on <semantics>c<annotation encoding="application/x-tex">c</annotation></semantics> is a cute instance of the microcosm principle.)

This is a very general definition of monoid acting on an object which includes, as special cases (at least if enough colimits exist),

  • actions of monoids in <semantics>Set<annotation encoding="application/x-tex">\text{Set}</annotation></semantics> on objects in ordinary categories,
  • actions of monoids in <semantics>Vect<annotation encoding="application/x-tex">\text{Vect}</annotation></semantics> (that is, algebras) on objects in <semantics>Vect<annotation encoding="application/x-tex">\text{Vect}</annotation></semantics>-enriched categories,
  • actions of monads (letting <semantics>M=End(C)<annotation encoding="application/x-tex">M = \text{End}(C)</annotation></semantics>), and
  • actions of operads (letting <semantics>C<annotation encoding="application/x-tex">C</annotation></semantics> be a symmetric monoidal category and <semantics>M<annotation encoding="application/x-tex">M</annotation></semantics> be the monoidal category of symmetric sequences under the composition product)

This definition can be used, among other things, to straightforwardly motivate the definition of a monad (as I did here): actions of a monoidal category <semantics>M<annotation encoding="application/x-tex">M</annotation></semantics> on a category <semantics>C<annotation encoding="application/x-tex">C</annotation></semantics> correspond to monoidal functors <semantics>MEnd(C)<annotation encoding="application/x-tex">M \to \text{End}(C)</annotation></semantics>, so every action in the above sense is equivalent to an action of a monad, namely the image of the monoid <semantics>m<annotation encoding="application/x-tex">m</annotation></semantics> under such a monoidal functor. In other words, monads on <semantics>C<annotation encoding="application/x-tex">C</annotation></semantics> are the universal monoids which act on objects <semantics>cC<annotation encoding="application/x-tex">c \in C</annotation></semantics> in the above sense.

Corresponding to this notion of action is a notion of endomorphism object. Say that the relative endomorphism object <semantics>End M(c)<annotation encoding="application/x-tex">\text{End}_M(c)</annotation></semantics>, if it exists, is the universal monoid in <semantics>M<annotation encoding="application/x-tex">M</annotation></semantics> acting on <semantics>c<annotation encoding="application/x-tex">c</annotation></semantics>: that is, it’s a monoid acting on <semantics>c<annotation encoding="application/x-tex">c</annotation></semantics>, and the action of any other monoid on <semantics>c<annotation encoding="application/x-tex">c</annotation></semantics> uniquely factors through it.

This is again a very general definition which includes, as special cases (again if enough colimits exist),

  • the endomorphism monoid in <semantics>Set<annotation encoding="application/x-tex">\text{Set}</annotation></semantics> of an object in an ordinary category,
  • the endomorphism algebra of an object in a <semantics>Vect<annotation encoding="application/x-tex">\text{Vect}</annotation></semantics>-enriched category,
  • the endomorphism monad of an object in an ordinary category, and
  • the endomorphism operad of an object in a symmetric monoidal category.

If the action of <semantics>M<annotation encoding="application/x-tex">M</annotation></semantics> on <semantics>C<annotation encoding="application/x-tex">C</annotation></semantics> has a compatible enrichment <semantics>[,]:C op×CM<annotation encoding="application/x-tex">[-, -] : C^{op} \times C \to M</annotation></semantics> in the sense that we have natural isomorphisms

<semantics>Hom C(mc 1,c 2)Hom M(m,[c 1,c 2])<annotation encoding="application/x-tex">\text{Hom}_C(m \otimes c_1, c_2) \cong \text{Hom}_M(m, [c_1, c_2])</annotation></semantics>

then <semantics>End M(c)<annotation encoding="application/x-tex">\text{End}_M(c)</annotation></semantics> is just the endomorphism monoid <semantics>[c,c]<annotation encoding="application/x-tex">[c, c]</annotation></semantics>, and in fact the above discussion could have been done in the context of enrichments only, but in the examples I have in mind the actions are easier to notice than the enrichments. (Has anyone ever told you that symmetric monoidal categories are canonically enriched over symmetric sequences? Nobody told me, anyway.)

Here’s another example where the action is easier to notice than the enrichment. If <semantics>D,C<annotation encoding="application/x-tex">D, C</annotation></semantics> are two categories, then the monoidal category <semantics>End(C)=[C,C]<annotation encoding="application/x-tex">\text{End}(C) = [C, C]</annotation></semantics> has a natural left action on the category <semantics>[D,C]<annotation encoding="application/x-tex">[D, C]</annotation></semantics> of functors <semantics>DC<annotation encoding="application/x-tex">D \to C</annotation></semantics>. If <semantics>G:DC<annotation encoding="application/x-tex">G : D \to C</annotation></semantics> is a functor, then the relative endomorphism object <semantics>End End(C)(G)<annotation encoding="application/x-tex">\text{End}_{\text{End}(C)}(G)</annotation></semantics>, if it exists, turns out to be the codensity monad of <semantics>G<annotation encoding="application/x-tex">G</annotation></semantics>!

This actually follows from the construction of an enrichment: the category <semantics>[D,C]<annotation encoding="application/x-tex">[D, C]</annotation></semantics> of functors <semantics>DC<annotation encoding="application/x-tex">D \to C</annotation></semantics> is (if enough limits exist) enriched over <semantics>End(C)<annotation encoding="application/x-tex">\text{End}(C)</annotation></semantics> in a way compatible with the natural left action. This enrichment takes the following form (by a straightforward verification of universal properties): if <semantics>G 1,G 2[D,C]<annotation encoding="application/x-tex">G_1, G_2 \in [D, C]</annotation></semantics> are two functors <semantics>DC<annotation encoding="application/x-tex">D \to C</annotation></semantics>, then their hom object

<semantics>[G 1,G 2]=Ran G 1(G 2)End(C)<annotation encoding="application/x-tex">[G_1, G_2] = \text{Ran}_{G_1}(G_2) \in \text{End}(C)</annotation></semantics>

is, if it exists, the right Kan extension of <semantics>G 2<annotation encoding="application/x-tex">G_2</annotation></semantics> along <semantics>G 1<annotation encoding="application/x-tex">G_1</annotation></semantics>. When <semantics>G 1=G 2<annotation encoding="application/x-tex">G_1 = G_2</annotation></semantics> this recovers the definition of the codensity monad of a functor <semantics>G:DC<annotation encoding="application/x-tex">G : D \to C</annotation></semantics> as the right Kan extension of <semantics>G<annotation encoding="application/x-tex">G</annotation></semantics> along itself, and neatly explains why it’s a monad: it’s an endomorphism object.

Question: Has anyone seen this definition of relative endomorphisms before?

It seems pretty natural, but I tried guessing what it would be called on the nLab and failed. It also seems that “relative endomorphisms” is used to mean something else in operad theory.

by qchu (qchu@math.berkeley.edu) at April 30, 2016 01:29 AM

April 29, 2016

ZapperZ - Physics and Physicists

LHC Knocked Out By A Weasel?
You can't make these things up!

CERN's Large Hadron Collider, the world's biggest particle accelerator located near Geneva, Switzerland, lost power Friday. Engineers who were investigating the outage made a grisly discovery -- the charred remains of a weasel, CERN spokesman Arnaud Marsollier told CNN.
If you are a weasel kind, be forewarned! Don't mess around at CERN!

Zz.

by ZapperZ (noreply@blogger.com) at April 29, 2016 08:49 PM

astrobites - astro-ph reader's digest

A PeVatron at the Galactic Center

Title: Acceleration of petaelectronvolt protons in the Galactic Centre
Authors: The HESS Collaboration
Status: Published in Nature

In the past, we’ve talked on this website a bit about the mysteries of galactic cosmic rays, or charged particles from outer space that are mainly made up of protons.  These particles can reach PeV energies and beyond, but the shocks of supernova remnants (the origin of most galactic cosmic rays) cannot accelerate particles to these high energies.  The HESS Collaboration analyzed 10 years of gamma-ray observations and have seen evidence of a PeVatron (PeV accelerator) in the center of our galaxy.  If confirmed, this would be the first PeVatron in our galaxy.

As mentioned above, the HESS Collaboration used observations of gamma rays from their array of telescopes to do this analysis.  Gamma rays are often used to probe the nature of cosmic ray accelerators; this is because they are associated with these sites, but unlike the charged cosmic rays, they are electrically neutral and therefore don’t bend in magnetic fields on their way to Earth (i.e. they point back to the source).

Figure 1: HESS's very high energy gamma ray map of the Galactic Center region. The color scale shows the number of gamma rays per pixel, while the white contour lines illustrate the distribution of molecular gas. Their correlation points to a hadronic origin of gamma ray emission. The right panel is simply a zoomed view of the inner portion.

Figure 1: HESS’s very high energy gamma ray map of the Galactic Center region. The color scale shows the number of gamma rays per pixel, while the white contour lines illustrate the distribution of molecular gas. Their correlation points to a hadronic origin of gamma ray emission.  The right panel is simply a zoomed view of the inner portion.  (Source: Figure 1 from the paper)

Figure 2: The red shaded area shows the 1 sigma confidence band of the measured gamma-ray spectrum of the diffuse emission in the region of interest. The red lines show different models, assuming that the gamma rays are coming from neutral pion decay after the pions have been produced in proton-proton interactions. Note the lack of cutoff at high energies, indicating that the parent protons have energies in the PeV range. The blue data points refer to another gamma-ray source in the region, HESS J1745-290. The link between these two objects is currently unknown.

Figure 2: The red shaded area shows the 1 sigma confidence band of the measured gamma-ray spectrum of the diffuse emission in the region of interest. The red lines show different models, assuming that the gamma rays are coming from neutral pion decay after the pions have been produced in proton-proton interactions. Note the lack of cutoff at high energies, indicating that the parent protons have energies in the PeV range.
The blue data points refer to another gamma-ray source in the region, HESS J1745-290. The link between these two objects is currently unknown.

The area they studied is known as the Central Molecular Zone, which surrounds the Galactic Center.  They found that the distribution of gamma rays mirrored the distribution of the gas-rich areas, which points to a hadronic (coming from proton interactions) origin of the gamma rays.  From the gamma-ray luminosity and amount of gases in the area, it can be shown that there must be at least one cosmic ray accelerator in the region.  Additionally, the energy spectrum of the diffuse gamma-ray emission from the region around Sagittarius A* (the location of the black hole at at the Galactic Center) does not have an observed cutoff or a break in the TeV energy range.  This means that the parent proton population that created these gamma rays should have energies of ~1 PeV (the PeVatron).  Just to refresh everyone’s memory, a TeV is 10^12 electronvolts, while a PeV is 10^15 electronvolts.  A few TeV is about the limit of what can be produced in particle laboratories on Earth (the LHC reaches 14 TeV).  A PeV is roughly 1000 times that!

What is the source of these protons?  The typical explanation for Galactic cosmic rays, supernova remnants, is unlikely here: in order to match the data and inject enough cosmic rays into the Central Molecular Zone, the authors estimate that we would need more than 10 supernova events over 1000 years.  This is a very high rate that is improbable.

Instead, they hypothesize that Sgr A* is the source of these protons.  They could either be accelerate in the accretion flow immediately outside the black hole, or further away where the outflow terminates.  They do note that the required acceleration rate is a few orders of magnitude above the current luminosity, but that the black hole may have been much more active in the past, leading to higher production rates of the protons and other nuclei.  If this is true, it could solve one of the most puzzling mysteries in cosmic ray physics: the origin of the higher energy galactic cosmic rays.

by Kelly Malone at April 29, 2016 08:09 PM

Emily Lakdawalla - The Planetary Society Blog

Future High-Resolution Imaging of Mars: Super-Res to the Rescue?
HiRISE Principal Investigator Alfred McEwen explains an imaging technique known as Super-Resolution Restoration (SRR), and how it could come in handy for high-resolution imaging of the Red Planet.

April 29, 2016 07:27 PM

April 28, 2016

Emily Lakdawalla - The Planetary Society Blog

What NASA Can Learn from SpaceX
SpaceX's announcement that it will send Dragon capsules to Mars demonstrates the advantage of having a clear plan to explore the red planet. NASA should take note.

April 28, 2016 04:42 PM

Emily Lakdawalla - The Planetary Society Blog

The phases of the far side of the Moon
Serbian artist Ivica Stošić used Clementine and Kaguya data to give a glimpse of the phases of the lunar farside.

April 28, 2016 03:50 PM

Symmetrybreaking - Fermilab/SLAC

A GUT feeling about physics

Scientists want to connect the fundamental forces of nature in one Grand Unified Theory.

The 1970s were a heady time in particle physics. New accelerators in the United States and Europe turned up unexpected particles that theorists tried to explain, and theorists in turn predicted new particles for experiments to hunt. The result was the Standard Model of particles and interactions, a theory that is essentially a catalog of the fundamental bits of matter and the forces governing them.

While that Standard Model is a very good description of the subatomic world, some important aspects—such as particle masses—come out of experiments rather than theory.

“If you write down the Standard Model, quite frankly it's a mess,” says John Ellis, a particle physicist at King’s College London. “You've got a whole bunch of parameters, and they all look arbitrary. You can't convince me that's the final theory!”

The hunt was on to create a grand unified theory, or GUT, that would elegantly explain how the universe works by linking three of the four known forces together. Physicists first linked the electromagnetic force, which dictates the structure of atoms and the behavior of light, and the weak nuclear force, which underlies how particles decay.

But they didn’t want to stop there. Scientists began working to link this electroweak theory with the strong force, which binds quarks together into things like the protons and neutrons in our atoms. (The fourth force that we know, gravity, doesn’t have a complete working quantum theory, so it's relegated to the realm of Theories of Everything, or ToEs.)

Linking the different forces into a single theory isn’t easy, since each behaves a different way. Electromagnetism is long-ranged, the weak force is short-ranged, and the strong force is weak in high-energy environments such as the early universe and strong where energy is low. To unify these three forces, scientists have to explain how they can be aspects of a single thing and yet manifest in radically different ways in the real world.

The electroweak theory unified the electromagnetic and weak forces by proposing they were aspects of a single interaction that is present only at very high energies, as in a particle accelerator or the very early universe. Above a certain threshold known as the electroweak scale, there is no difference between the two forces, but that unity is broken when the energy drops below a certain point.

The GUTs developed in the mid-1970s to incorporate the strong force predicted new particles, just as the electroweak theory had before. In fact, the very first GUT showed a relationship between particle masses that allowed physicists to make predictions about the second-heaviest particle before it was detected experimentally.

“We calculated the mass of the bottom quark before it was discovered,” says Mary Gaillard, a particle physicist at University of California, Berkeley. Scientists at Fermilab would go on to find the particle in 1977.

GUTs also predicted that protons should decay into lighter particles. There was just one problem: Experiments didn’t see that decay.

Artwork by Sandbox Studio, Chicago

The problem with protons

GUTs predicted that all quarks could potentially change into lighter particles, including the quarks making up protons. In fact, GUTs predicted that protons are unstable, albeit with a lifetime much longer than the age of the universe. To maximize the chances of seeing that rare proton decay, physicists needed to build detectors with a lot of atoms.
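
To get a feel for why size matters, here is a minimal back-of-the-envelope sketch (illustrative round numbers, not the experiments' official figures): even for lifetimes of 10^33-10^34 years, a water detector of tens of kilotons contains so many protons that a handful of decays per year would be expected.

N_A = 6.022e23               # Avogadro's number
PROTONS_PER_H2O = 10         # 2 hydrogen protons + 8 protons in the oxygen nucleus

def expected_decays_per_year(mass_kton, lifetime_years):
    """Expected proton decays per year in a water detector of the given mass,
    treating each proton as decaying independently with the given lifetime."""
    protons = mass_kton * 1e9 / 18.0 * N_A * PROTONS_PER_H2O
    return protons / lifetime_years

# Example: a ~22.5-kiloton fiducial volume (roughly Super-Kamiokande-sized)
for tau in (1e32, 1e33, 1e34):
    print(f"lifetime {tau:.0e} yr -> ~{expected_decays_per_year(22.5, tau):.1f} decays/yr")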

However, the first Kamiokande experiment in Japan didn't detect any proton decays, which meant a proton lifetime longer than that predicted by the simplest GUT theory. More complicated GUTs emerged with longer predicted proton lifetimes – and more complicated interactions and additional particles.

Most modern GUTs mix in supersymmetry (SUSY), a way of thinking about the structure of space-time that has profound implications for particle physics. SUSY uses extra interactions to adjust the strength of the three forces in the Standard Model so that they meet at a very high energy known as the GUT scale.
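
A minimal numerical sketch of that statement, using the textbook one-loop running of the inverse couplings and rough input values at the Z mass (the coefficients and starting values below are standard approximations, not a precision fit): with only the Standard Model particle content the three lines miss each other, while adding superpartners brings them close to a common value near 10^16 GeV.

import math

MZ = 91.2                             # GeV
ALPHA_INV_MZ = (59.0, 29.6, 8.5)      # rough 1/alpha_1, 1/alpha_2, 1/alpha_3 at MZ
B_SM = (41 / 10, -19 / 6, -7)         # one-loop beta coefficients, Standard Model
B_MSSM = (33 / 5, 1, -3)              # same, with supersymmetric partners included

def alpha_inv(mu_GeV, coeffs):
    """One-loop inverse couplings at the scale mu_GeV."""
    t = math.log(mu_GeV / MZ)
    return [a0 - b / (2 * math.pi) * t for a0, b in zip(ALPHA_INV_MZ, coeffs)]

mu_gut = 2e16  # GeV, roughly where the supersymmetric couplings cross
print("SM   at 2e16 GeV:", [round(a, 1) for a in alpha_inv(mu_gut, B_SM)])
print("MSSM at 2e16 GeV:", [round(a, 1) for a in alpha_inv(mu_gut, B_MSSM)])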

“Supersymmetry gives more particles that are involved via virtual quantum effects in the decay of the proton,” says JoAnne Hewett, a physicist at the Department of Energy’s SLAC National Accelerator Laboratory. That extends the predicted lifetime of the proton beyond what previous experiments were able to test. Yet SUSY-based GUTs also have some problems.

“They're kinda messy,” Gaillard says. Particularly, these theories predict more Higgs-like particles and different ways the Higgs boson from the Standard Model should behave. For that reason, Gaillard and other physicists are less enamored of GUTs than they were in the 1970s and '80s. To make matters worse, no supersymmetric particles have been found yet. But the hunt is still on.

“The basic philosophical impulse for grand unification is still there, just as important as ever,” Ellis says. “I still love SUSY, and I also am enamored of GUTs.”

Hewett agrees that GUTs aren't dead yet.

“I firmly believe that an observation of proton decay would affect how every person would think about the world,” she says. “Everybody can understand that we're made out of protons and ‘Oh wow! They decay.’”

Upcoming experiments like the proposed Hyper-K in Japan and the Deep Underground Neutrino Experiment in the United States will probe proton decay to greater precision than ever. Seeing a proton decay will tell us something about the unification of the forces of nature and whether we ultimately can trust our GUTs.

by Matthew R. Francis at April 28, 2016 02:37 PM

astrobites - astro-ph reader's digest

Studying the First Stars with Gravitational Waves

Title: Gravitational Waves from the Remnants of the First Stars

Authors: Tilman Hartwig et al.

First Author’s Institution: Sorbonne University

Status: Submitted to MNRAS Letters

The detection of gravitational waves (neatly summarized in this excellent astrobites post) provided astronomers with an entirely new way of understanding the cosmos. With the notable exception of a handful of neutrinos from SN 1987A, all of our information about the universe outside of our galaxy had previously come in the form of little packets of electromagnetic radiation known as photons. Gravitational waves, on the other hand, are ripples in spacetime–a totally different phenomenon. They cause the distances between objects in spacetime to change as they pass through.

The Advanced Laser Interferometer Gravitational-Wave Observatory’s (aLIGO) detection of GW150914 (so named because it was detected on September 14, 2015) was the result of the inspiral and merger of two stellar black holes (BHs). The larger of the two black holes was about 36 solar masses and the smaller one about 29 solar masses. This first detection of gravitational waves has already provided enough information for scientists to start inferring rates of black hole-black hole (BH-BH) mergers.
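
As a small aside, the mass combination that the waveform pins down most directly is the "chirp mass"; applying the textbook formula to the component masses quoted above gives roughly 28 solar masses (a sketch, not the LIGO analysis itself).

# Textbook chirp mass formula applied to the rough GW150914 component masses
# quoted above (a sketch, not the LIGO pipeline).
def chirp_mass(m1, m2):
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

print(f"chirp mass ~ {chirp_mass(36, 29):.1f} solar masses")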

Hartwig et al's Figure 2

Figure 1 (Figure 2 from the paper) shows the intrinsic merger rate densities for their models (this includes NS-NS, NS-BH, and BH-BH mergers) compared to the literature.  The red shaded area shows the variance for the fiducial IMF. The model from Kinugawa et al. (2014) is reproduced with both the original star formation rate (SFR) and with the SFR rescaled to match this paper’s. The original SFR from the Kinugawa paper (which does not agree with the 2015 Planck results) produces a much higher merger rate density. The Dominik et al. (2013) curve shows the merger rate for later generations of stars (Pop I/II) that have a tenth of the solar metallicity. GW150914’s point shows the estimated merger rate density inferred from the detection, with error bars. The merger rate of primordial black holes in the local universe is much lower than predicted in previous studies, meaning that most of the mergers that we detect should come from stellar remnants of Pop I/II.

The authors of today’s paper estimate the probability of detecting gravitational waves from the very first generation of stars, known as Population III (Pop III), stars with aLIGO. Pop III stars, which are made from the pristine (read: zero-metallicity) gas leftover from big-bang nucleosynthesis, are thought to be more massive than later generations of stars. They are relatively rare in the local universe, but should produce strong signals in gravitational waves thanks to their high masses. Detection rates for their BH-BH mergers should therefore be high. 

The authors begin by modeling the formation of dark matter halos from z=50 to z=6 (here z means redshift and translates to about 0.2-1.0 billion years after the Big Bang), when they expect that no more Pop III stars will form. The authors populate their simulated halos with stars of various masses determined by a logarithmically-flat IMF (initial mass function). Since the exact IMF of Pop III stars is unknown, they consider three cases: their fiducial IMF of 3-300 solar masses, a low-mass case where the stars range from 1-100 solar masses, and a high-mass case of 10-1000 solar masses. The total stellar mass in each halo is then determined by the star-formation efficiency, which they make sure is consistent with the results from Planck.
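
A logarithmically flat IMF just means that masses are drawn uniformly in log(M). The sketch below (my own illustration, not the authors' code) draws samples for the three mass ranges they consider; note how the typical mass shifts with the chosen range.

import numpy as np

rng = np.random.default_rng(0)

def sample_logflat_imf(m_min, m_max, n):
    """Draw n stellar masses (in solar masses) uniformly in log M."""
    return np.exp(rng.uniform(np.log(m_min), np.log(m_max), size=n))

for m_min, m_max, label in [(3, 300, "fiducial"), (1, 100, "low-mass"), (10, 1000, "high-mass")]:
    masses = sample_logflat_imf(m_min, m_max, 100_000)
    print(f"{label:9s} IMF ({m_min}-{m_max} Msun): median ~ {np.median(masses):6.1f} Msun")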

Figure 2

Figure 2 (Figure 3 from the paper) shows the expected number of BH-BH merger detections as a function of the total mass of the binary system. The top panel shows the current sensitivity of aLIGO and the bottom panel shows the final sensitivity. The gray bar indicates where GW150914 falls. The de Mink & Mandel (2016) histogram indicates the estimated rate of Pop I/II star detections. As we can see, mergers from remnants of Pop I/II stars are expected to dominate in the mass range between 30 and 100 solar masses. However, if aLIGO detects enough events that result from binary systems with approximately 300 solar masses and above (which must come from Pop III stars), it will be able to discriminate between the three Pop III IMFs discussed in this paper.

Like the tango, it takes two to make gravitational waves, so they use previous studies of Pop III stars to get an estimate for the fraction of the stars in binary systems. The probability for any single star to be in a binary is 50% in their model, but they note that it is easy to scale their results with different fractions of binary stars. The authors also take into account the fact that stars in binary systems usually have similar mass and that stars in the 140-260 solar mass range are expected to blow apart completely as pair-instability supernovae, leaving no compact remnants (i.e. black holes) behind. They then calculate the signal-to-noise ratio of a single aLIGO detector for each of their mergers to determine whether it counts as a detection.
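
A toy continuation of the sketch above (again my own illustration, heavily simplified compared to the paper's actual treatment of binaries and remnant masses): remove the stars in the 140-260 solar mass pair-instability window, which leave no black hole, and pair up half of the survivors.

import numpy as np

rng = np.random.default_rng(1)
masses = np.exp(rng.uniform(np.log(3), np.log(300), size=100_000))  # fiducial IMF draw

leaves_bh = (masses < 140) | (masses > 260)        # pair-instability SNe leave no remnant
bh_progenitors = masses[leaves_bh]

in_binary = rng.random(bh_progenitors.size) < 0.5  # 50% binary probability, as in the model
n_binary_systems = int(in_binary.sum()) // 2       # two stars per binary system

print(f"fraction surviving the pair-instability cut: {leaves_bh.mean():.1%}")
print(f"rough number of BH-progenitor binary systems: {n_binary_systems}")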

The result? Compared to previous studies which do not match the results of the Planck paper quite as well, their star formation rates are lower, as are their merger rate densities. They also find that the number of mergers they expect is largely dominated by the maximum mass in each distribution, since less massive stars produce neutron stars instead of black holes, which have a lower merger rate. On the other hand, the merger rate density decreases as the masses of the stars get higher since fewer high-mass binary pairs are formed in any given location. Figure 1 shows the merger rate densities that they obtained for each of their mass distributions.

Even with all of this information, how would we know that what we have detected is of primordial origin (and not something that formed more recently)? Since Pop III stars are more massive than their later counterparts, any remnants with masses greater than 42 solar masses must have come from the earliest stars (even stars with a tenth of the metallicity of our Sun would not leave remnants more massive than this). Thus the contribution to the BH-BH merger rate from Pop III stars varies as a function of the mass of the mergers, which is demonstrated in Figure 2. Though the primordial BH-BH merger rate that we could detect would be small, the massiveness of the black holes will make their gravitational wave signal stronger and thus easier to detect. This means that given our detection rate of BH-BH mergers, we might be able to rule out or put bounds on our Pop III mass distribution.

They conclude that GW150914 had about a 1% probability of being of primordial origin and estimate that aLIGO will probably be able to detect about 5 primordial BH-BH mergers a year. By looking at the rate of BH-BH mergers that we actually do detect, we’ll also be able to learn about the stars that these remnants originated from.

by Caroline Huang at April 28, 2016 02:37 PM

Tommaso Dorigo - Scientificblogging

The Number Of My Publications Has Four Digits
While tediously compiling a list of scientific publications that chance to have my name in the authors list (I have to apply for a career advancement and apparently the committee will scrutinize the hundred-page-long lists of that kind that all candidates submit), I discovered today that I just passed the mark of 1000 published articles. This happened on February 18th 2016 with the appearance in print of a paper on dijet resonance searches by CMS. Yay! And 7 more have been added to the list since then.

read more

by Tommaso Dorigo at April 28, 2016 12:30 PM

April 27, 2016

Tommaso Dorigo - Scientificblogging

35% Off World Scientific Titles
I think this might be interesting to the few of you left out there who still read paper books (I do too). World Scientific offers, until April 29th, a 35% reduction in the cover price of its books, if you purchase two of them.
This might be a good time to pre-order my book, "Anomaly!", if you have not done so yet. Plus maybe get one of the other many excellent titles in the collection of WS.

You can see the offer at the site of my book (that's where I got the info from!).

by Tommaso Dorigo at April 27, 2016 05:27 PM

astrobites - astro-ph reader's digest

The Geology of Pluto and Charon

Title:  The Geology of Pluto and Charon Through the Eyes of New Horizons
Authors: Jeffrey M. Moore, William B. McKinnon, John R. Spencer, et al., including the New Horizons team
First Author’s Institution: NASA Ames Research Center
Status: Published in Science

The New Horizons mission to Pluto and Charon, launched when Pluto was still officially a planet, gave us the best images of the dwarf planet and its largest moon that we might ever see in our lifetime.  Less than a year after its July 14, 2015 fly-by, the New Horizons team have published a preliminary geologic examination of the two bodies.  Predicted to have a rather boring landscape unchanged for billions of years, the surfaces of Pluto and Charon have been discovered to be surprisingly complex and, for Pluto, still geologically active!

Pluto:

Figure 1: An annotated map of Pluto. Credit.

The image of Pluto’s “heart” proved that it still reciprocated our love for it despite its demotion from planetary status.  Officially named the “Tombaugh Regio” for Pluto’s discoverer, Clyde Tombaugh, this region actually comprises multiple regions that just have similar coloration and albedo (reflectivity).  The left half of the heart is known as the Sputnik Planum (SP), a region the size of Texas and Oklahoma (or, alternatively, France and the United Kingdom).   It’s a giant block of solid nitrogen, carbon monoxide, and methane ice that sits about 3-4 km below the surrounding highlands.  Amazingly, despite the fact that Pluto was thought to be geologically dead, the surface of the SP is likely less than 10 million years old, based on the fact that no craters have been discovered in the region (see Figure 2).  The appearance of cells in the northern half of the SP implies the existence of solid-state convection that may be responsible for resurfacing the SP.
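
The logic behind "no craters, therefore a young surface" is a simple Poisson argument: observing zero craters caps the expected number at roughly three (at 95% confidence), which translates into an age limit once a crater production rate is assumed. The sketch below uses made-up placeholder numbers purely to show the shape of the calculation; the rate is not a value from the paper.

AREA_KM2 = 9.0e5                  # roughly a Texas-plus-Oklahoma-sized region
RATE_PER_KM2_PER_GYR = 3e-4       # HYPOTHETICAL crater production rate (placeholder)

# If the expected count were above ~3, seeing zero craters would happen
# less than ~5% of the time (exp(-3) ~ 0.05), so take 3 as the 95% cap.
LAMBDA_MAX = 3.0

age_limit_gyr = LAMBDA_MAX / (RATE_PER_KM2_PER_GYR * AREA_KM2)
print(f"age upper limit ~ {age_limit_gyr * 1000:.0f} million years")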

Figure 2: The distribution of craters on the surface of Pluto. The Sputnik Planum (left half of the heart) contains no craters, implying a surface age of < 10 million years. Credit.

The western edge of the SP is marked by multiple, heterogeneous mountain ranges.  These mountain ranges have been spectroscopically confirmed to be composed of water ice.  These mountains appear to be chunks of a pre-existing surface that have been fractured, moved, and rotated, giving them high elevations and steep slopes.  The authors do not know why the mountain ranges exist only on the western edge of the SP.

The portion of the Tombaugh Regio east of the SP (the right half of the heart) comprises flat plains and pitted uplands.  These pits are typically a few kilometers across and roughly 1 kilometer deep.  At the border between the pitted uplands and the SP, potential glaciers could be breaking off the pitted uplands and flowing into the SP.  To the north and northwest of the SP, there are a variety of different terrains, including a washboard (i.e., ridged) terrain, likely formed via erosion, and terrain dissected by networks of deep, wide valleys.  To the southwest of the SP are a pair of huge mounds, only one of which, Wright Mons, was imaged with proper lighting.  Wright Mons is 3-4 km high, 150 km across, and has a hole right in the center at least 5 km deep.  The other mound, Piccard Mons, appears to be even larger. These two mounds are possible cryovolcanoes, water-ice volcanoes.

The surface of Pluto allows its history to be partially reconstructed.  The distribution of crater sizes implies that most of the surface can be dated back to the Late Heavy Bombardment (LHB) about 3.9 billion years ago.  A portion of the Cthulhu Regio, which wins the prize for most creative region name, may even pre-date the LHB.  The SP, on the other hand, is < 10 million years old, although it sits in a much older basin. Its half-ring of mountains to the west suggests that the entire SP might be a giant impact crater.

Charon:

Figure 3:  An enhanced color map of Charon.  Credit.

Whereas Pluto’s surface is dominated by nitrogen, carbon monoxide, and methane ices, Charon’s surface is mostly water-ice.  Charon is rent in half by a pair of ancient, gigantic canyons 50-200 km wide and 5-7 km deep, which split the moon (or maybe binary dwarf planet) into two general regions.  The northern half displays a network of large troughs 3-6 km deep.  A massive depression nearly 10 km deep (seen in profile as the bumpy edge against the black of space in Figure 3) lies near the Mordor Macula (MM), the reddish cap on the north pole.  Meanwhile, the Dorothy Gale crater located to the bottom and right of the MM is 230 km wide and 6 km deep.  The overall crater population distribution implies that the northern half of Charon is about 4 billion years old.

By contrast, the southern half of Charon is smoother and slightly younger, by perhaps a few hundred million years.  A lower density of craters and fields of small hills point to past cryovolcanism.  A feature unique to Charon is the existence of mountains rising 3-4 km above encircling moats. The moats themselves are 1-2 km below the surrounding surface, potentially due to the mountains pressing the surrounding region deeper into the moon.

The Future of New Horizons:

This is only the beginning.  It’s been just 9 months since New Horizons careened past Pluto and Charon.  A data transfer rate of just 1-2 kb/s means we won’t even have all the data for another 7 months!  New Horizons isn’t finished yet.  It plans to study several Kuiper Belt objects from afar and will even perform a close fly-by of one, 2014 MU69, on January 1, 2019, for NASA’s own version of a New Year’s fireworks display.
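
As a rough sanity check on that downlink time (a back-of-the-envelope sketch; the ~50-gigabit total data volume is an assumed round number, not a figure from this post, and the spacecraft is not transmitting around the clock):

TOTAL_BITS = 50e9        # assumed total science data volume, in bits
SECONDS_PER_DAY = 86400

for rate_bps in (1000, 2000):   # the quoted 1-2 kb/s downlink rate
    days = TOTAL_BITS / rate_bps / SECONDS_PER_DAY
    print(f"at {rate_bps / 1000:.0f} kb/s: ~{days:.0f} days of continuous downlink")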

by Joseph Schmitt at April 27, 2016 04:40 PM

Axel Maas - Looking Inside the Standard Model

Some small changes in the schedule
As you may have noticed, I have not written a new entry for some time.

The reasons have been twofold.

One is that being a professor is a little more strenuous than being a postdoc. Though not unexpected, at some point it takes a toll.

The other is that in the past I tried to keep a regular schedule. However, that often required me to think hard about a topic when there was no natural candidate. At other times, I had a number of possible topics, which were then stretched out rather than written when they were timely.

As a consequence, I think it is more appropriate to write entries when something happens that is interesting to write about. That will be at least any time we put out a new paper, so I will still update you on our research. I will also write something whenever somebody new starts in the group or we begin a new project. Also, some of my students want to contribute, and I will be very happy to give them the opportunity to do so. Once in a while, I will also write some background entries, so that I can offer some context for the research we are doing.

So stay tuned. It may be in a different rhythm, but I will keep on writing about our (and my) research.

by Axel Maas (noreply@blogger.com) at April 27, 2016 12:54 PM

CERN Bulletin

The CERN Accelerator School
Introduction to accelerator physics
The CERN Accelerator School: Introduction to Accelerator Physics, which should have taken place in Istanbul, Turkey, later this year, has now been relocated to Budapest, Hungary. Further details regarding the new hotel and dates will be made available as soon as possible on a new Indico site at the end of May.

April 27, 2016 08:49 AM

April 26, 2016

Alexey Petrov - Symmetry factor

30 years of Chernobyl disaster

30 years ago, on 26 April 1986, the biggest nuclear accident happened at the Chernobyl nuclear power station.

[Photo: my 8th grade class, 1986]

The picture above is of my 8th grade class (I am in the front row) on a trip from Leningrad to Kiev. We wanted to make sure that we’d spend May 1st (Labor Day in the Soviet Union) in Kiev! We took that picture in Gomel, which is about 80 miles away from Chernobyl, where our train made a regular stop. We were instructed to bury some pieces of clothing and shoes after coming back to Leningrad due to the excess of radioactive dust on them…

 


by apetrov at April 26, 2016 06:16 PM

Lubos Motl - string vacua and pheno

When anti-CO2, junk food pseudosciences team up
Among other things, a Czech-Swedish man showed me an article in the April 9th issue of Nude Socialist
Reaping what we sow (pages 18-19)
written by Irakli Loladze (Google Scholar), a professor of junk food science at a college I've never heard of. He told us that he wanted to get lots of money and Barack Obama (whose relationship to science is accurately described by his being a painful footnote in the curved constitutional space) finally gave Loladze some big bucks for the excellent "research" that Loladze already wanted to do in 2002.



What is the result of the research? It's a simple combination of the pseudosciences about the "evil junk food" and about the "evil CO2". It says that CO2 turns our food into junk food. I kid you not. The one-page article in Nude Socialist contains basically nothing beyond the previous sentence written in bold face.




This "new interdisciplinary pseudoscience" is quite a hybrid because the pseudosciences about the "junk food" and about the "evil CO2" are probably two worst examples of pseudosciences in the contemporary era, two examples featuring the most arrogant pseudo-experts as well as the largest number of ordinary people who have been brainwashed by these junk sciences.




Let's start with these two things separately. The would-be scientific memes about the "junk food" are what a sensible left-wing blogger called the pseudoscience of McDonald's hate. The term "junk food" is meant to denote a food with a lot of sugar, fat, and perhaps salt; and the deficit of all other things – proteins, fibers, minerals, and vitamins. Almost by definition, we're supposed to think that it's "what we get at McDonald's" and other fast food chains; and it's "bad".

Except that none of these statements is defensible.

First, it is not true that the fast food chains significantly differ from other restaurants and homes in the percentage of the nutrients. In fact, you can get lots of salads with assorted vegetables in McDonald's and many people actually eat these things – which trump the "healthy food" image of the things you can eat elsewhere. A majority of the people still eat the traditional things that contain beef or chicken and carbohydrates and fat, e.g. the buns and French fries. But the substance of this food doesn't really differ from what you eat in non-chain restaurants with hamburgers, or any other restaurants, for that matter.



Feynman mentioned the pseudoscience on healthy food as an example of the pseudosciences encouraged by the success of the actual science. Just try to appreciate how much these things have grown since the times when Feynman recorded the program above.

People self-evidently single out the hamburger chains because of their intense anti-corporate bigotry and dishonesty. As the guy whom I just linked to has said, there is no evidence that McDonald's contains special addictive chemicals or something entirely different than other places to eat; and its containing some chemicals is a tautology because all of our bodies and life are composed of numerous chemicals and all organisms are giant chemical factories.

Because Ray Kroc who turned McDonald's into a big company was a Czech American (his father was born 10 km from my home), I could also argue that the attacks on McDonald's are manifestations of nationalist or racist prejudices against my nation. Long before I knew about Ray Kroc, I considered McDonald's to be one of the symbols of the advanced civilization and when I learned about the founder, my pride about my nation's contributions to the civilization went up. But be sure, it doesn't influence my honest attitude. I have exactly the same opinions about the Burger King and Wendy's.

Second, it's not true that "sugars and fats and salt are bad". Sugars and fats are clearly the most important compounds that our bodies look for in the food; they're the essence and the main reasons why we eat at all. Other things are "cherries on a pie" in comparison. We mainly need energy – which may be quantified in calories – and when it comes to food, it is stored almost entirely in carbohydrates and fats.

Animals and humans have always looked for plants that were rich in sugars; and they loved to grow animals that had enough fat. It's no coincidence that civilized nations (except for Jews) eat lots of pork and pigs are fat before they are killed. It's really the point that they have the fat plus some proteins that are found in every meat.

You can always eat lots of things – fruits, vegetables, roots – that contain almost no calories but they have lots of all the other things. But people just don't need so much of the other things (although you need to refill several of them sort of regularly). People and animals primarily need some daily intake of food that has calories in it – this food is the essential part of food.

And what about salt? Even if it were right that McDonald's gives you more salt than other restaurants or food prepared by your wife, there is simply no problem with salt. Junk food evangelists keep on brainwashing most of mankind with fairy-tales about the scientific evidence for the increased cardiovascular problems caused by excess salt. The actual truth is that science produces no evidence of this sort – and it has so far produced some weak evidence in the opposite direction. You surely need some salt and more than the minimum amount of salt seems to be good for you, especially if cardiovascular problems put you at risk.

So far the largest study of the influence of salt on the mortality was done in 2011,
Reduced dietary salt for the prevention of cardiovascular disease: a meta-analysis of randomized controlled trials (Cochrane review, PDF full text),
by Rod Taylor and four collaborators. As the main link shows, the paper has 217 citations so far. See a review, Now Salt Is Safe To Eat, in the Express (2011). They found that the increased salt intake has led to
  1. the increased salt in urine which is hardly surprising LOL
  2. the increased blood pressure which is not surprising either because a higher blood pressure allows the body to get rid of the salt more quickly, but the increased pressure means nothing because there were:
  3. no hints of benefits or a change of all-cause mortality
  4. salt restrictions have actually increased all-cause mortality in those with heart failure!
So the restrictions of salt intake have led to no statistically significant change in the main variables – and they shortened the life of those at cardiovascular risk.

According to the actual scientific research, the effects of salt seem too weak to be clearly discernible and if you think that by removing a gram of salt from a food, you are significantly helping your health, you are absolutely fooling yourself – just like if you believe the horoscope on the same page of your favorite newspapers as the "healthy food science". Some hints that extra salt could be good for you exists; but this influence isn't extremely strong, either. It just doesn't matter how much salt you eat if you have at least the minimum amount.

Salt superstitions are just a particular example. We're bombarded by hundreds of similar things and most of us don't have the time – and mostly even expertise – needed to find the actual answers in the scientific literature to the question whether these memes are right or wrong. But you're invited to look at the paper above and search for other papers and decide whether you still believe that the scientific research justifies the idea that it's good for your health to halve the intake of NaCl. There's no evidence for such a claim. It's really easy for the body to get rid of the extra salt and if the body increases the blood pressure to do so quickly, it doesn't signify any immediate problem. The very meme that "a higher blood pressure equals a problem" is another myth, too. There is a correlation between cardiovascular problems and a higher blood pressure (be sure that a low pressure may also reduce the quality of people's lives) but it's simply not true that the blood pressure always ruins the body or shows a problem – the blood pressure may get increased by the body's mechanisms safely and for sensible reasons, too.

On the other hand, spasms may be caused by shortage of salt. If you experience such things, you surely need to add more salt to your food.

Loladze, the junk food "scholar" who wrote the article in Nude Socialist that I started with, claims that CO2 is a "similar kind of junk food" for the plants as carbohydrates and fats are for humans. Right, I totally agree with this analogy. The only problem is that his "junk food" claims are equally idiotic in both cases. CO2 is the "main food" of plants in the exactly analogous way in which sugars and fats are the main food for us (proteins come third). Organisms just can't survive without this main part of the food! And they naturally look for the food that has a high percentage of these compounds. This strategy is in no way suicidal; it has been a strategy to survive in the previous billions of years (in the case of plants) or hundreds of millions of years (in the case of animals). In the developed world, we have enough food in calories and some people eat a lot of them and despite lots of superstitions about some mysterious detailed other reasons, that's the main reason why they may get fat. But that's just like children who are spoiled because their parents are too rich. Being rich in money or fats or sugar is primarily a good thing. Every good thing may turn into a bad thing in some situations but it's still true that people promoting a primarily good thing as a bad thing are liars and scammers.

Loladze did this "research" – and it was paid for – with an obvious purpose: to try to counter the fact that a higher concentration of CO2 is good for plants. Some of them may afford to have fewer pores (the holes through which they get CO2) because fewer pores are enough to get the same CO2 if its concentration is higher. But it's good for the plants because water vapor is evaporating from the pores. So the "modern, high-CO2-using" plant with fewer pores is a much better water manager. It doesn't lose too much water, which is why it can grow in less humid environments, and why it can grow larger, too. Serious articles showing the beneficial effect of CO2 on plants appear every other day. Yesterday, Nature published another one about the global greening of the Earth, 70% of which was attributed to higher CO2.

Will Happer of Princeton knows much more about the processes involving CO2 in the plants etc. and he sort of wanted me to learn most of these details as well but I am simply not interested in too many details of this science. I feel confident that I know more than enough to be certain that a higher CO2 concentration in the air may be said to be good for plants.

Now, a question is what are the changes of the concentration of various nutrients in plants induced by the changed concentration of CO2. Quite generally, one may expect the increase of the compounds with lots of carbon – because carbon became more accessible. At the same moment, the change probably won't be too big. DNA still needs some – the same amount – of phosphorus and other things, the conditions for the "optimum material from which the leaves and other parts of the plant" should be composed are basically unchanged.

So I think that the actual answer is that the percentages of other nutrients don't change much but it may be expected that some relative concentration of non-carbon nutrients (and perhaps some organic nutrients as well) will go down. But that doesn't mean that people get unhealthy from the food grown in a higher CO2. If the food has too much sugar or fat in it, people will feel that the taste is boring or bland and they will add more vegetables on it, or spices, or eat more fruits and nuts and other things, whatever. They will do so because they feel that they're missing something.

Just like the plants try to manage their nutrients and resources optimally, people are trying to do the same thing. The shortage of CO2 for plants and fats+sugars for humans is clearly a problem causing starvation. The increase of their availability makes starvation much less likely and it basically leads to positive consequences only – and no negative ones. Well, you may say that if you have too much money, it's also bad. But the point is that you may always ignore it. If you know that it's bad for you to spend too much money, you can leave the funds in the safebox. In the same way, people may just leave the sugar in the McDonald's or Wendy's restaurants. Plants may leave the CO2 in the air. In all cases, people and plants generally try to get fats, sugars, CO2, and money because they believe it's good for them and in a vast majority of cases, they're obviously right.

I think that people like Loladze know very well that they're just dishonest corrupt aßholes. They must know that the higher CO2 is making the life of plants better and, as a consequence of our dependence on plants, it's making our lives better as well. They know that according to all objective criteria that people knew before CO2 was politicized, it may be shown that a higher CO2 concentration increases the quality of wine and pretty much every other thing related to food and beverages. But they just construct convoluted pseudoscientific theories and memes suggesting the opposite because they get paid for this deception.

These junk scientists are immoral and if there's no God to do the job of sending a lightning upon their heads, they have to be punished otherwise.

by Luboš Motl (noreply@blogger.com) at April 26, 2016 02:32 PM

Symmetrybreaking - Fermilab/SLAC

The hottest job in physics?

Accelerator scientists are in demand at labs and beyond.

While the supply of accelerator physicists in the United States has grown modestly over the last decade, it hasn’t been able to catch up with demand fueled by industry interest in medical particle accelerators and growing collaborations at the national labs. 

About 15 PhDs in accelerator physics are granted by US universities each year. That’s up from around 12 per year, a rate that held relatively steady from 1985 to 2005. But accelerator physicists often come to the field without a specialized degree. For people like Yunhai Cai of the US Department of Energy's SLAC National Accelerator Laboratory, this has been a blessing and a curse. A blessing because high demand meant Cai found a ready job after his postdoctoral studies, even though his expertise was in particle theory and he had never worked on accelerators. A curse because now, despite the growth, his field is still in need of more experts.

“Eleven of DOE’s seventeen national laboratories use large particle accelerators as one of their primary scientific instruments,” says Eric Colby, senior technical advisor for the Office of High Energy Physics at DOE. That means plenty of job opportunities for those coming out of special training programs or eager to transfer from another field. “These are major projects that will require hundreds of accelerator physicists and engineers to successfully complete.”

Transition mettle

Cai, now a senior staff scientist at SLAC and head of the Free-Electron Laser and Beam Physics Department, is one of many scientists recruited from other fields. The transition is intensive, and Cai considers himself fortunate that his academic background taught him the mathematical principles behind his first job. 

Notwithstanding, “the most valuable help was the trust of many leaders in the field of accelerators,” Cai says. “They offered me a position knowing I had no experience in the field.”

Training specialists from other fields is a common and successful practice, says Lia Merminga, associate lab director for accelerators at SLAC. A planned upgrade to SLAC's Linac Coherent Light Source is creating a high demand for specialized accelerator experts, such as cryogenics engineers and superconducting radio frequency (SRF) physicists and engineers.

“Instead of hiring trained cryogenics engineers who are in short supply, we hire mechanical engineers and train them in cryogenics either by providing for hands-on experience or with coursework,” Merminga says. 

New funds catalyze university research

The National Science Foundation has recently provided a boost to university research, which could help produce more accelerator scientists. In 2014 NSF launched their Accelerator Science program, distributing a total of $18.8 million in research funds, divided among approximately 30 awards in 2014 and 2015. The grants seed and support fundamental accelerator science at universities independent of government projects. Additionally, the program aims to entice students to accelerator science by challenging recipients to develop potentially disruptive technologies and ideas that could lead to breakthrough discoveries, as well as by supporting student travel to major accelerator science conferences.

“We are looking for high-risk, transformational ideas cross-cutting with other academic disciplines, with the goal of attracting the best students and postdocs,” says Vyacheslav Lukin, accelerator science program director at NSF. “Such students tend to gravitate toward the truly challenging problems with potential for novel solutions.”

Significantly, the NSF program recognizes accelerator science as a distinct field, which many institutions have been slow to do. 

“There are few universities offering disciplines in the field of accelerators,” Cai says. “Most importantly, many people think it is [only] an engineering field.” Similar concerns were raised in responses to a 2015 Request for Information posted by DOE on the issue of too few accelerator physicists. Multiple respondents pointed out that many research awards don’t include work with accelerators.

Others believe solutions lie in outreach. SLAC has instituted programs to introduce undergraduates to research opportunities in accelerator science and plans to extend partnerships and internships to more schools and industries. Some respondents have pushed even further, supporting K-12 outreach as well.

Colby says that DOE will be implementing some of the suggestions over the next few years to strengthen its decades-long tradition of sponsoring accelerator science that supports its mission.

Illustration by Sandbox Studio, Chicago with Ana Kova

DOE labs partnering with universities

The NSF funding is not the only effort to foster growth. An adequate accelerator for students to train on can be an enormous boon to a university, so DOE has historically supported university programs by granting access to beams at national labs. 

Northern Illinois University has supported its Northern Illinois Center for Accelerator and Detector Development this way for fifteen years. NICADD fosters development of a new generation of accelerator and detector technologies. Faculty and students at NICADD also have access to Fermilab and Argonne National Laboratory facilities for research and instruction. The labs, in turn, work with the jointly appointed faculty on major projects such as Muon g-2, Mu2e and DUNE at Fermilab or CERN’s ATLAS experiment through Argonne. The program has also collaborated on international experiments such as CERN’s AWAKE and ALPHA in its own right. University and labs may share the costs of hiring new faculty, enabling the parties to develop a world-class accelerator research enterprise and generate significant research income.

NICADD “has done quite well recruiting graduate students in accelerator physics,” says David Hedin, acting director. “We attribute this to the scarcity of graduate programs in the subfield and to our close connection to Fermilab.”

NIU has granted eight PhDs in accelerator physics since 2009, all without an accelerator on its campus.

A similar partnership formed between Old Dominion University and Thomas Jefferson National Accelerator Facility in 2008. The Facility for Rare Isotope Beams, a joint project between DOE and Michigan State University, promises to boost an already strong NSF-supported program at the school, and Brookhaven National Laboratory partnering with Stony Brook University has formed the Center for Accelerator Studies and Education. SLAC partners primarily with Stanford University, but also works with other schools, including the University of California, Los Angeles.

“Labs such as SLAC, with a broad accelerator research portfolio, guidance from world-renowned accelerator physicists, leading test facilities where students can get hands-on training, and connections to Stanford and Silicon Valley, offer an ideal environment for student training in accelerator science,” Merminga says.

USPAS expands the traditional classroom

University programs don’t have faculty dedicated to every topic that falls under the umbrella of accelerator science: particle sources, accelerating structures, cryogenics, superconducting radio frequency cavities, magnets, beam dynamics, and instrumentation and controls, to name a few.

DOE fills those gaps with the US Particle Accelerator School (USPAS). The semi-annual, traveling two-week session of rigorous courses trains students and professionals alike in both general and specialized courses.

“US accelerator school provides a critical service to schools that do have PhD programs in accelerator physics by essentially providing all the advanced courses,” says Bill Barletta, who directs the program. Universities give their students credit for coursework completed through USPAS that often is not offered at their own institution. 

Barletta says roughly a third of participants are non-accelerator specialists transitioning into accelerator roles. Cai, who is familiar with that path from his own career change, has taught at USPAS twice – offering his mentorship in special topics such as charged particle optics and beam dynamics.

An industry perspective

Creating more accelerator scientists is valuable for both academia and industry, where particle accelerators are used for work in energy and medicine. The value of the accelerator science industry is estimated to be growing by approximately 10 percent each year. 

“The real increase has been in medical accelerators, with a number of new companies getting into the proton therapy business,” says Robert Hamm, CEO of R&M Technical Enterprises, an accelerator consulting group. “This has been the most significant factor in the industrial demand for accelerator physicists.”

Most private companies only have the training resources to specialize new hires on their products. Thus, most companies want to recruit individuals trained at universities or national labs. Industry can, however, partner with these institutions through internships and collaborations to commercialize technology.

“Accelerator [science] has many applications, ranging from high energy physics, nuclear physics, and material and medical sciences,” Cai says. Both within the field of high-energy physics and beyond, the high demand illustrates the immense value of accelerator scientists and of the institutions helping to train them.

by Troy Rummler at April 26, 2016 01:16 PM

CERN Bulletin

CERN Bulletin Issue No. 15-16/2016
Link to e-Bulletin Issue No. 15-16/2016Link to all articles in this issue No.

April 26, 2016 07:32 AM

April 25, 2016

Sean Carroll - Preposterous Universe

Youthful Brilliance

A couple of weeks ago I visited the Texas A&M Physics and Engineering Festival. It was a busy trip — I gave a physics colloquium and a philosophy colloquium as well as a public talk — but the highlight for me was an hourlong chat with the Davidson Young Scholars, who had traveled from across the country to attend the festival.

The Davidson Young Scholars program is an innovative effort to help nurture kids who are at the very top of intellectual achievement in their age group. Every age and ability group poses special challenges to educators, and deserves attention and curricula that are adjusted for their individual needs. That includes the most high-achieving ones, who easily become bored and distracted when plopped down in an average classroom. Many of them end up being home-schooled, simply because school systems aren’t equipped to handle them. So the DYS program offers special services, including most importantly a chance to meet other students like themselves, and occasionally go out into the world and get the kind of stimulation that is otherwise hard to find.

[Photo: chatting with the Davidson Young Scholars at the board]

These kids were awesome. I chatted just very briefly, telling them a little about what I do and what it means to be a theoretical physicist, and then we had a free-flowing discussion. At some point I mentioned “wormholes” and it was all over. These folks love wormholes and time travel, and many of them had theories of their own, which they were eager to come to the board and explain to all of us. It was a rollicking, stimulating, delightful experience.

You can see from the board that I ended up talking about Einstein’s equation. Not that I was going to go through all of the mathematical details or provide a careful derivation, but I figured that was something they wouldn’t typically be exposed to by either their schoolwork or popular science, and it would be fun to give them a glimpse of what lies ahead if they study physics. Everyone’s life is improved by a bit of exposure to Einstein’s equation.

The kids are all right. If we old people don’t ruin them, the world will be in good hands.

by Sean Carroll at April 25, 2016 07:33 PM

Lubos Motl - string vacua and pheno

5,000 cernettes with \(750\GeV\) may be found in the LHC trash bins each year
The excess of the diphoton events whose invariant mass is apparently \(750\GeV\), the decay products of the hypothetical new "cernette" particles, is arguably the most convincing or most tantalizing existing experimental hint of the Beyond the Standard Model physics at the LHC right now. I estimate the probability that a new particle (or new particles) exists in that region to be 50%.

Nude Socialist just posted an interesting story
Hacking the LHC to sift trash could help find a mystery particle
about a possibly clever idea to dramatically increase the sensitivity of the LHC to the "cernettes" that was reported in a fresh hep-ex preprint
Turning the LHC Ring into a New Physics Search Machine
by 4 physicists from Iowa, Helsinki, and CERN that include Risto Orava. Orava is a cute region in the Northwestern Slovak countryside (pix) where Elon Musk just built Tesla Orava, a company producing some incredibly hot futuristic high-tech products including the Color Oravan TV that I can already/still offer you. Tesla Czechoslovakia just succeeded in selling 100,000 new vinyl record players to Japan. (Everyone laughs now.) The Japanese bought them as carousels for puppet shows.




Orava was also the destination of the man who caught Joey from the Swamps (0:08). OK, I have no idea whether this Finnish physicist has Slovak roots but I should return to more serious matters.




Because the confidence was around 4 sigma per experiment and, in 2016, the total number of collisions should be greater than in 2015 by a factor of 6-10, we could get up to \(4\times \sqrt{10}\sim 12\) sigma per experiment. But it's still just dozens of cernettes and we need to wait until the late summer to exceed 5 sigma.
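
That estimate is just the usual scaling of a counting-experiment significance with the square root of the data set (same signal and background rates assumed), as the little sketch below spells out.

import math

sigma_2015 = 4.0                 # roughly the per-experiment excess quoted above
for factor in (6, 10):           # expected increase in 2016 collisions
    print(f"x{factor} data: ~{sigma_2015 * math.sqrt(factor):.1f} sigma per experiment")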

In their paper written in Microsoft Word (which is hopefully not as stupid as most preprints written in Word rather than \(\rm\LaTeX\)), Orava and pals propose to use the protons from the trash bins to find thousands of cernettes per year. How is it possible?

When protons deviate from the right trajectory because they interact and repel in the "almost forward direction", they must be safely removed from the ring because they would pose a threat to the integrity of the collider.

There are 4,000 BLM (Beam Loss Monitoring) trash bins around the ring, equipped with detectors. And Orava and pals say that if a reaction
\[
p+p\to p + p + X
\]
took place in a major detector, and the final two protons were going in the almost forward direction, they may still be measured in the BLM system, matched to the other partner that was thrown to another BLM trash bin, and a measurement of their outgoing energies is sufficient to tell us the information about the mass of \(X\) – possibly the cernette (or other new particles) – without caring about the decay channel of \(X\).
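
The kinematics behind that last sentence is the usual missing-mass relation used in forward-proton tagging (a schematic sketch, not something taken from the Orava et al. preprint): if the two surviving protons lose fractions \(\xi_1\) and \(\xi_2\) of the beam energy, the central system has mass \(M_X\approx\sqrt{\xi_1\xi_2 s}\).

import math

SQRT_S = 13000.0   # GeV, the 2016 LHC collision energy

def missing_mass(xi1, xi2):
    """Central system mass from the fractional energy losses of the two protons."""
    return math.sqrt(xi1 * xi2) * SQRT_S

def symmetric_xi(m_x):
    """Fractional loss per proton if both protons lose the same fraction."""
    return m_x / SQRT_S

print(f"equal losses for a 750 GeV state: xi ~ {symmetric_xi(750.0):.3f}")
print(f"cross-check: xi1 = xi2 = 0.058 gives M_X ~ {missing_mass(0.058, 0.058):.0f} GeV")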

It sounds very exciting. If the idea is right and may be realized quickly in practice, we could have a discovery of a new particle of the same mass within a month or two.

If you know that this paper is rubbish and you can explain why, please do so quickly! ;-) In particular, I don't even understand why the cross section for the forward-produced cernette is supposed to be so much higher than the normal production already detected by ATLAS and CMS. Can you explain the enhancement to me?

by Luboš Motl (noreply@blogger.com) at April 25, 2016 05:09 PM

April 24, 2016

The n-Category Cafe

Polygonal Decompositions of Surfaces

If you tell me you’re going to take a compact smooth 2-dimensional manifold and subdivide it into polygons, I know what you mean. You mean something like this picture by Norton Starr:

or this picture by Greg Egan:

(Click on the images for details.) But what’s the usual term for this concept, and the precise definition? I’m writing a paper that uses this concept, and I don’t want to spend my time proving basic stuff. I want to just refer to something.

I’m worried that CW decompositions of surfaces might include some things I don’t like, though I’m not sure.

Maybe I want PLCW decompositions, which seem to come in at least two versions: the old version discussed in C. Hog-Angeloni, W. Metzler, and A. Sieradski’s book Two-dimensional Homotopy and Combinatorial Group Theory, and a new version due to Kirillov. But I don’t even know if these two versions are the same!

One big question is whether one wants polygons to include ‘bigons’ or even ‘unigons’. For what I’m doing right now, I don’t much care. But I want to know what I’m including!

Another question is whether we can glue one edge of a polygon to another edge of that same polygon.

Surely there’s some standard, widely used notion here…

by john (baez@math.ucr.edu) at April 24, 2016 10:52 PM

April 22, 2016

Lubos Motl - string vacua and pheno

Lawyer John Dixon bastardizes SUSY in two new papers
CERN: CMS releases the open data from 2.5/fb of 2011 collisions. I hope that you have 100 spare TB on your hard disks. ;-) Also, Christopher Nolan's brain melted 45 minutes after he began to talk to Kip Thorne, Time Magazine reported while praising Thorne's contribution to LIGO. Let me emphasize that Nolan's brain meltdown wasn't due to any global warming.

Two of the new hep-th papers today were written by John Dixon who offers Gmail as his affiliation (well, so would I right now, but many more people would know who I am):
Canonical Transformations can Dramatically Simplify Supersymmetry

Squarks and Sleptons are not needed for the SSM. They can be, and they should be, transformed away
While the titles are longer than they should be, they're pretty bold and simple claims. You can – and you should – completely erase all scalar partners of known fermions in supersymmetric theories. And that's desirable because no squarks and sleptons have been discovered yet. Well, there is a slight problem: These claims are self-evident rubbish.

If SUSY can be talked about at all, the operators \(Q_\alpha\) with a spinor index have to exist and no one can prevent you from asking what is \(Q\ket{\mu}\) where \(\ket\mu\) is a state with one muon, for example. You simply have to get a bosonic result. The result of the action of some SUSY generators has to be nonzero because the anticommutator of \(Q\)'s contains the momentum which is nonzero. So the action has to be a state with one bosonic particle with the same momentum, perhaps dressed into some stuff related to the SUSY-breaking sector, or this state may be in a different superselection sector (but these issues have to go away if SUSY is restored, e.g. at high energies).

So if the action of all these supercharges were zero, the action of the momentum on \(\ket\mu\) would have to vanish as well – but it clearly doesn't.
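
For readers who want the textbook version of this argument spelled out (schematically, in the usual two-component conventions; this is just the standard SUSY algebra, nothing specific to Dixon's papers): the algebra
\[
\{Q_\alpha, \bar Q_{\dot\beta}\} = 2\sigma^\mu_{\alpha\dot\beta} P_\mu,
\qquad
\sum_{\alpha=1,2}\{Q_\alpha, Q_\alpha^\dagger\} = 2\,{\rm tr}(\sigma^\mu)\, P_\mu = 4P^0 = 4H,
\]
sandwiched in the one-muon state, gives
\[
\sum_\alpha\left( \left\| Q_\alpha^\dagger \ket\mu \right\|^2 + \left\| Q_\alpha \ket\mu \right\|^2 \right) = 4E_\mu \langle\mu|\mu\rangle > 0,
\]
so at least one of the states \(Q_\alpha\ket\mu\), \(Q_\alpha^\dagger\ket\mu\) must be nonzero whenever the energy is.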




Fine. So one reads the titles and sees that it looks totally stupid. But no one would really have written two complicated 20-page papers full of formalism if they were actually dedicated to an idea that an average physics student may see to be childishly wrong in several seconds, right? There must be something I am not getting, right?




So I keep on reading the two papers. A loophole invalidating the obvious conclusion that the claim is self-evidently wrong must be written somewhere on the first pages, right? What is the ingenious new possibility that invalidates the simple argument above? I kept on reading and found nothing.

The papers look just like physicists' papers. Many crackpots may be rather safely identified because they write their papers in Microsoft Word or something of this sort. But in this case, they are written in \(\rm\TeX\). There are lots of indices so that the "genre" looks indistinguishable from a textbook of supersymmetry, e.g. Wess and Bagger's book. But something is missing because it just doesn't make any sense.

After some time, you learn that the ingenious option that Dixon must have found has something to do with the "replacement of fields by zinns", whatever the latter (zinns) is supposed to be; with the "replacement of chiral multiplets with un-chiral multiplets", whatever un-chiral is supposed to be (some partly ghostly fields?); and with some ingenious action by some "canonical transformations" (those should be just transformations... something that doesn't qualitatively change physics but Dixon seems to conclude otherwise). One may also see that the author seems attached to the classical concept of a Poisson bracket. In a quantum theory, we ultimately need commutators, right? So why are the Poisson brackets everywhere?

All these pet concepts of Mr Dixon are related to some confusingly mixed status of the supercharges that are partly viewed as the BRST charge, it seems. So maybe the guy has basically discovered the topological field theories where the SUSY charges are reinterpreted as BRST charges or vice versa – or something similar but inequivalent and new. You expect some comprehensible explanation of these things but I think that you can never get it.

In the case of this lawyer, a layman who can only see the "formatting" would have no chance to see that these papers are completely nonsensical. Aside from the \(\rm\TeX\), there is also an impressive list of people who are thanked in the acknowledgements, including Duff, Hull, Ramond, and West. Some of these people had to give Dixon some "endorsement" to post papers to hep-th, I believe.

I vaguely remembered that I had seen this name in the past. And indeed, there are several TRF blog posts about him. In this 2008 text about CyberSUSY, it was indeed observed that the guy confuses BRST charges and supercharges (and their different roles), spinors with scalars, and the local symmetries with the global symmetries. But this John Dixon runs a blog, CyberSUSY.blogspot.com, cool. Sadly, it has no traffic.

In 2012, Alejandro Rivero pointed out that John Dixon gave wise advice to Paul Frampton concerning suitcases at the airport. More importantly, in 2013, John Dixon came here again to tell us that SUSY hasn't been relevant for physics. That's a bizarre starting point if you want to write long articles about SUSY, even new versions of it, which are clearly much less relevant for physics because it must be impossible to define e.g. the state \(Q\ket\mu\) in Dixon's scheme. Does he really believe that one may make meaningful let alone important contributions to the SUSY research if the first assumption he builds upon is that SUSY has been worthless? Understanding why SUSY is a fantastic structure is surely among the first prerequisites you need before you may do research on it.

By the way, you may check that Dixon's very similar 2008 paper on CyberSUSY has 4 citations according to Google Scholar. All of them are self-citations by newer papers by John Dixon. Quite an enthusiast if I avoid the term "vigorous masturbator". It seems that the same is almost entirely true for 20 or so papers that Dixon wrote in the new millennium.

OK, so I think that this guy has gotten some endorsement to post his papers on hep-th. I do believe that his endorsers know that the paper is almost certainly bullšit – or to say the least, they can't coherently explain the new correct idea or result that the papers contain. But they gave him the endorsement, anyway, perhaps to get rid of Dixon's annoying mails. Or out of compassion, because they think that it's cruel to tell him that he's wasting his time because he's not on the right track to do anything useful in physics.

I can understand these motivations and there may exist others. But whatever the reason is, this bogus endorsement is circumventing the role of the endorsement system. If you know that it's extremely unlikely that you could ever use a similar paper in your own research – and no one except for Dixon has used his results yet – you simply shouldn't endorse it.

There's no serious problem when one paper by a lawyer is occasionally posted to hep-th, and even two papers like this are fun – they only cost an average reader of arXiv.org abstracts a second to deal with. They may be a pleasant distraction. But if everyone behaved like the endorsers of John Dixon's papers, hep-th would be flooded by similar stuff. The density of meaningful papers could drop to a low enough value that people would start to be discouraged. They would feel that they're following viXra.org instead. They may need to look for new venues to share the information with other experts.

So I just don't think it's right to circumvent the endorsement system for the sake of compassion etc. I've had sort of similar comments about Free Harvard, Fair Harvard, some plans by Steve Hsu, Ralph Nader, and a few other comrades to abolish tuition at Harvard. You know, much of the special status of Harvard as the world's most famous university – or at least one that could have built the largest endowment – is the fact that it's hard to get there and in many cases, it requires both talent and some wealth or sponsors. It's not a bug, it's a virtue. Without these barriers, Harvard's status could drop to the level of good schools that nevertheless lack the X-factor, such as Charles University in Prague. That school also mostly picks the smartest students in the country. But it's still not Harvard, and that's partly due to its missing links to the "very successful, wealthy, and almost aristocratic, circles".

Feudalism belongs to the history books but some "natural environment for wealth, success, and research" is desirable, anyway. This environment is compatible with some "social mobility". But the goals to make the "social mobility" perfect or 100% are absolutely counterproductive plans of a radical anarchist or an ideologue of a similar sort. A healthy society simply doesn't work and cannot work like that.

by Luboš Motl (noreply@blogger.com) at April 22, 2016 05:03 PM

Clifford V. Johnson - Asymptotia

All Thumbs…

I'm not going to lie. If you're not in the mood, thumbnailing can be the most utterly tedious thing: (click for larger view)

[Image: thumbnail sketches in progress, 20 April 2016]
Yet, as the key precursor to getting more detailed page layout right, and [...] Click to continue reading this post

The post All Thumbs… appeared first on Asymptotia.

by Clifford at April 22, 2016 02:23 PM

Symmetrybreaking - Fermilab/SLAC

LHC data at your fingertips

The CMS collaboration has released 300 terabytes of research data.

Today the CMS collaboration at CERN released more than 300 terabytes (TB) of high-quality open data. These include more than 100 TB of data from proton collisions at 7 TeV, making up half the data collected at the LHC by the CMS detector in 2011. This release follows a previous one from November 2014, which made available around 27 TB of research data collected in 2010.

The data are available on the CERN Open Data Portal and come in two types. The primary datasets are in the same format used by the collaboration to perform research. The derived datasets, on the other hand, require a lot less computing power and can be readily analyzed by university or high school students.

CMS is also providing the simulated data generated with the same software version that should be used to analyze the primary datasets. Simulations play a crucial role in particle physics research. The data release is accompanied by analysis tools and code examples tailored to the datasets. A virtual machine image based on CernVM, which comes preloaded with the software environment needed to analyze the CMS data, can also be downloaded from the portal.

[Animation: exploring CMS data. Image: CERN]

“Once we’ve exhausted our exploration of the data, we see no reason not to make them available publicly,” says Kati Lassila-Perini, a CMS physicist who leads these data preservation efforts. “The benefits are numerous, from inspiring high school students to the training of the particle physicists of tomorrow. And personally, as CMS’s data preservation coordinator, this is a crucial part of ensuring the long-term availability of our research data.”

The scope of open LHC data has already been demonstrated with the previous release of research data. A group of theorists at MIT wanted to study the substructure of jets—showers of hadron clusters recorded in the CMS detector. Since CMS had not performed this particular research, the theorists got in touch with the CMS scientists for advice on how to proceed. This blossomed into a fruitful collaboration between the theorists and CMS.

“As scientists, we should take the release of data from publicly funded research very seriously,” says Salvatore Rappoccio, a CMS physicist who worked with the MIT theorists. “In addition to showing good stewardship of the funding we have received, it also provides a scientific benefit to our field as a whole. While it is a difficult and daunting task with much left to do, the release of CMS data is a giant step in the right direction.”

Further, a CMS physicist in Germany tasked two undergraduates with validating the CMS Open Data by reproducing key plots from some highly cited CMS papers that used data collected in 2010. Using openly available documentation about CMS’s analysis software and with some guidance from the physicist, the students were able to recreate plots that look nearly identical to those from CMS, demonstrating what can be achieved with these data.

“We are very pleased that we can make all these data publicly available,” adds Lassila-Perini. “We look forward to how they are utilized outside our collaboration, for research as well as for building educational tools.”

 

A version of this article was originally published on the CMS website.

by Achintya Rao at April 22, 2016 12:23 PM

Tommaso Dorigo - Scientificblogging

Magic In Particle Reactions: Exclusive Photoproduction Of Upsilon Mesons
Exclusive production processes at hadron colliders are something magical. You direct two trucks at 100 miles per hour one against the other, head-on, and the two just gently push each other sideways, continuing their trip perfectly unaffected, but leave behind a new entity (a cart?) produced with the energy of the glancing collision.

read more

by Tommaso Dorigo at April 22, 2016 10:44 AM

John Baez - Azimuth

Bleaching of the Great Barrier Reef

The chatter of gossip distracts us from the really big story, the Anthropocene: the new geological era we are bringing about. Here’s something that should be dominating the headlines: Most of the Great Barrier Reef, the world’s largest coral reef system, now looks like a ghostly graveyard.

Most corals are colonies of tiny genetically identical animals called polyps. Over centuries, their skeletons build up reefs, which are havens for many kinds of sea life. Some polyps catch their own food using stingers. But most get their food by symbiosis! They cooperate with single-celled organisms called zooxanthellae. Zooxanthellae get energy from the sun’s light. They actually live inside the polyps, and provide them with food. Most of the color of a coral reef comes from these zooxanthellae.

When a polyp is stressed, the zooxanthellae living inside it may decide to leave. This can happen when the sea water gets too hot. Without its zooxanthellae, the polyp is transparent and the coral’s white skeleton is revealed—as you see here. We say the coral is bleached.

After they bleach, the polyps begin to starve. If conditions return to normal fast enough, the zooxanthellae may come back. If they don’t, the coral will die.

The Great Barrier Reef, off the northeast coast of Australia, contains over 2,900 reefs and 900 islands. It’s huge: 2,300 kilometers long, with an area of about 340,000 square kilometers. It can be seen from outer space!

With global warming, this reef has been starting to bleach. Parts of it bleached in 1998 and again in 2002. But this year, with a big El Niño pushing world temperatures to new record highs, is the worst.

Scientists have been flying over the Great Barrier Reef to study the damage, and divers have looked at some of the reefs in detail. Of the 522 reefs surveyed in the northern sector, over 80% are severely bleached and less than 1% are not bleached at all. The damage is less further south where the water is cooler—but most of the reefs are in the north:

The top expert on coral reefs in Australia, Terry Hughes, wrote:

I showed the results of aerial surveys of bleaching on the Great Barrier Reef to my students. And then we wept.

Imagine devoting your life to studying and trying to protect coral reefs, and then seeing this.

Some of the bleached reefs may recover. But as oceans continue to warm, the prospects look bleak. The last big El Niño was in 1998. With a lot of hard followup work, scientists showed that in the end, 16% of the world’s corals died in that event.

This year is quite a bit hotter.

So, global warming is not a problem for the future: it’s a problem now. It’s not good enough to cut carbon emissions eventually. We’ve got to get serious now.

I need to recommit myself to this. For example, I need to stop flying around to conferences. I’ve cut back, but I need to do much better. Future generations, living in the damaged world we’re creating, will not have much sympathy for our excuses.


by John Baez at April 22, 2016 12:27 AM

April 21, 2016

Sean Carroll - Preposterous Universe

Being: Human

Anticipation is growing — in my own mind, if nowhere else — for the release of The Big Picture, which will be out on May 10. I’ve finally been able to hold the physical book in my hand, which is always a highlight of the book-writing process. And yes, there will be an audio book, which should come out the same time. I spent several days in March recording it, which taught me a valuable lesson: write shorter books.

There will also be something of a short book tour, hitting NYC, DC, Boston, Seattle, and the Bay Area, likely with a few other venues to be added later this summer. For details see my calendar page.

In many ways, the book is a celebration of naturalism in general, and poetic naturalism in particular. So to get you in the mood, here is a lovely short video from the Mothlight Creative, which celebrates naturalism in a more visual and visceral way. “I want to shiver with awe and wonder at the universe.”

Being: Human from Mothlight Creative on Vimeo.

by Sean Carroll at April 21, 2016 10:38 PM

Clifford V. Johnson - Asymptotia

Goodbye, and Thank You

May 27th 2011, at the Forum in Los Angeles. What a wonderful show. So generous - numerous encores and special guests well into the night. Thank you for the music, Prince (click for larger view):

[Image: Prince at the Forum, Los Angeles, 27 May 2011]

(Amy. Tina. Jason.)

-cvj Click to continue reading this post

The post Goodbye, and Thank You appeared first on Asymptotia.

by Clifford at April 21, 2016 07:23 PM

ZapperZ - Physics and Physicists

Online Students - Are They As Good?
This is essentially a follow-up to my post on Education Technology.

So, after doing this for a while and trying to put two-and-two together, I'm having a bit of skepticism about online learning and education. I know it is in fashion right now, and maybe in many other subjects this is effective. But I don't see it for physics.

I've mentioned earlier why students who undergo online learning via the online interface that they use often lack problem-solving techniques, which I consider as important as understanding the material itself. However, in this post, I also begin to question whether they actually know what we THINK they know. Let me explain.

My students do their homework assignment "online", as I've mentioned before. They have to complete this each week. I get to see how they perform, both individually, and as a group. I know what questions they got right, and what they got wrong. So I can follow up by going over questions that most students have problems with.

But here's the thing. Most students seem to be doing rather well if I simply base this on the online homework scores. In fact, just by looking at the HW statistics, they appear to understand 3/4 of the material rather well. But do they?

I decided to do some in-class evaluation. I give them short, basic questions that cover the material from the previous week, something they did in their homework. And the result is mind-boggling. Many of them can't answer the simplest, most basic question. And I let them open their text and notes to answer these questions. Remember, these are the topics that they had just answered in the HW the previous week that were way more difficult than my in-class questions.

For example, a HW question may ask for the magnitude and direction of the electric field at a particular location due to 2 or more charges located at some distance away. So for my in-class question, I have a charge Q sitting at the origin of a cartesian coordinate, and I ask for the E-field at a distance, say 3 cm away. And then I say that if I put a charge q at that location, what is the force acting on that charge? Simple, no? And they could look at their notes and text to solve this.
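For concreteness, the whole in-class calculation fits in a few lines (the numerical values of Q, q and the 3 cm distance below are just illustrative placeholders, not the ones from any actual assignment):

# a minimal worked version of the in-class question
# (Q, q and the distance are made-up illustrative values)
k = 8.99e9        # Coulomb constant, N m^2 / C^2
Q = 2e-6          # source charge sitting at the origin, C
q = 1e-9          # test charge placed at the field point, C
r = 0.03          # distance from the origin, m

E = k*Q/r**2      # field magnitude, N/C, pointing radially away from a positive Q
F = q*E           # force magnitude on q, N, along the same direction
print(E, F)       # roughly 2e7 N/C and 2e-2 N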

If the students could manage to solve the more difficult HW problem, the question I asked should be a breeze! So why did more than half of the class give me answers as if they had never seen this material before?

This happened consistently. I will ask a very basic question that is way simpler than one of their HW questions, and I get puzzling answers. There appears to be a huge disconnect between what they did in the online HW and their actual knowledge of the very same material that they should have used to solve those HW problems. Their performance in completing the online HW has no correlation with their understanding of the material.

All of this becomes painfully obvious during the final exam, where they have to sit for it in class, and write down the solution to the questions the old-fashioned way. The majority of the students crashed and burned. Even when the questions were similar to the very same ones they solved in their HW, some did not even know how to start! And yes, they were allowed to look at their notes, texts, and their old HW during the finals.

So what are the reasons for this? Why is there such a disconnect between their performance online, and what they actually can do? While there might be a number of reasons for this, the one that I find most plausible is that they had some form of assistance in completing their online work. This assistance may be in the form of (i) previously-done HW from another source and/or (ii) another person who is more knowledgeable or had taken the course before. The online performance that I see often does not accurately reflect the level of knowledge the students actually have.

So this led me into thinking about all these online courses that many schools are beginning to offer. Some even offer entire degrees that you can get via online courses. I am well aware of the conveniences of these forms of learning, and for the right students, this may be useful. However, I question the quality of knowledge of the students, on average, that went through an online course or degree. If my hunch is correct, how does one know that the work that has been done online was done purely by that student? Sure, you can randomize the questions and insert new things in there, but there is still the question of whether the student had external assistance, be it partial or total.

I asked on here a long time ago if anyone has had any experience with students in physics who went through an online program, either partially or for an entire degree program. I haven't had any responses, which might indicate that it is still not very common. I certainly haven't encountered any physics graduate students that went through an online program.

Like I said, maybe this type of learning works well in many different areas. But I don't see how it is effective for physics, or any STEM subject area. Anyone know how Arizona State University does it?

Zz.

by ZapperZ (noreply@blogger.com) at April 21, 2016 01:43 PM

The n-Category Cafe

Type Theory and Philosophy at Kent

I haven’t been around here much lately, but I would like to announce this workshop I’m running on 9-10 June, Type Theory and Philosophy. Following some of the links there will, I hope, show the scope of what may be possible.

One link is to the latest draft of an article I’m writing, Expressing ‘The Structure of’ in Homotopy Type Theory, which has evolved a little over the year since I posted The Structure of A.

by david (d.corfield@kent.ac.uk) at April 21, 2016 01:20 PM

Robert Helling - atdotde

The Quantum in Quantum Computing
I am sure, by now, all of you have seen Canada's prime minister  "explain" quantum computers at Perimeter. It's really great that politicians care about these things and he managed to say what is the standard explanation for the speed up of quantum computers compared to their classical cousins: It is because you can have superpositions of initial states and therefore "perform many operations in parallel".

Except, of course, that this is bullshit. This is not the reason for the speed up; you can do the same with a classical computer, at least with a probabilistic one: you can also, as step one, perform a random process (throw a coin, turn a Roulette wheel, whatever) to determine the initial state you start your computer with. Then, looking at it from the outside, the state of the classical computer is mixed and the further time evolution also "does all the computations in parallel". That just follows from the formalism of (classical) statistical mechanics.

Of course, that does not help much since the outcome is likely also probabilistic. But it has the same parallelism. And just as the state space of a qubit is the full Bloch sphere, the state space of a classical bit (allowing mixed states) is an interval, likewise allowing a continuum of intermediate states.

The difference between quantum and classical is elsewhere. And it has to do with non-commuting operators (as those are essential for quantum properties) and those allow for entanglement.

To be more specific, let us consider one of the most famous quantum algorithms, Grover's database lookup. There the problem (at least in its original form) is to figure out which of $N$ possible "boxes" contains the hidden coin. Classically, you cannot do better than opening one after the other (or possibly in a random pattern), which takes $O(N)$ steps (on average).

For the quantum version, you first have to say how to encode the problem. The lore is, that you start with an $N$-dimensional Hilbert space with a basis $|1\rangle\cdots|N\rangle$. The secret is that one of these basis vectors is picked. Let's call it $|\omega\rangle$ and it is given to you in terms of a projection operator $P=|\omega\rangle\langle\omega|$.

Furthermore, you have at your disposal a way to create the flat superposition $|s\rangle = \frac1{\sqrt N}\sum_{i=1}^N |i\rangle$ and a number operator $K$ that acts like $K|k\rangle= k|k\rangle$, i.e. is diagonal in the above basis and is able to distinguish the basis elements in terms of its eigenvalues.

Then, what you are supposed to do is the following: You form two unitary operators $U_\omega = 1 - 2P$  (this multiplies $|\omega\rangle$ by -1 while being the identity on the orthogonal subspace, i.e. is a reflection on the plane orthogonal to $|\omega\rangle$) and $U_s = 2|s\rangle\langle s| - 1$ which reflects the vectors orthogonal to $|s\rangle$.

It is not hard to see that both $U_s$ and $U_\omega$ map the two-dimensional plane spanned by $|s\rangle$ and $|\omega\rangle$ into itself. They are both reflections and thus their product is a rotation by twice the angle between the two mirror planes, which is given in terms of the scalar product $\langle s|\omega\rangle =1/\sqrt{N}$ as $\phi =\sin^{-1}\langle s|\omega\rangle$.

But obviously, using a rotation by $\cos^{-1}\langle s|\omega\rangle$, one can rotate $|s\rangle$ onto $|\omega\rangle$. So all we have to do is to apply the product $(U_sU_\omega)^k$ where $k$ is the ratio between these two angles, which is $O(\sqrt{N})$. (No need to worry that this is not an integer, the error is $O(1/N)$ and has no influence). Then you have turned your initial state $|s\rangle$ into $|\omega\rangle$ and by measuring the observable $K$ above you know which box contained the coin.

Since this took only $O(\sqrt{N})$ steps this is a quadratic speed up compared to the classical case.
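None of this is hard to check by brute force. Here is a minimal numpy sketch of the rotation picture described above – the dimension $N$, the position of the coin and the usual iteration count $\lfloor \pi\sqrt{N}/4\rfloor$ are of course just chosen for the demo:

import numpy as np

N = 1024                                   # number of "boxes"
omega = 3                                  # index of the hidden coin (arbitrary for the demo)

s = np.full(N, 1/np.sqrt(N))               # flat superposition |s>
U_omega = np.eye(N)
U_omega[omega, omega] = -1.0               # 1 - 2|omega><omega|
U_s = 2*np.outer(s, s) - np.eye(N)         # 2|s><s| - 1

k = int(round(np.pi/4*np.sqrt(N)))         # ~ (pi/4) sqrt(N) Grover iterations
psi = s.copy()
for _ in range(k):
    psi = U_s @ (U_omega @ psi)

print(k, int(np.argmax(psi**2)), float(psi[omega]**2))
# 25 iterations for N = 1024; essentially all the probability ends up on the marked index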

So how did we get this? As I said, it's not the superposition. Classically we could prepare the probabilistic state that opens each box with probability $1/N$. But we have to expect that we have to do that $O(N)$ times, so this is essentially as fast as systematically opening one box after the other.

To have a better unified classical-quantum language, let us say that we have a state space spanned by $N$ pure states $1,\ldots,N$. What we can do in the quantum case is to turn an initial state which had probability $1/N$ to be in each of these pure states into one that is deterministically in the sought after state.

Classically, this is impossible since no time evolution can turn a mixed state into a pure state. One way to see this is that the entropy of the probabilistic state is $\log(N)$ while it is 0 for the sought after state. If you like, classically we only have the observables given by the C*-algebra generated by $K$, i.e. we can only observe which box we are dealing with. Both $P$ and $U_\omega$ are also in this classical algebra (they are diagonal in the special basis) and the strict classical analogue would be that we are given a rank one projector in that algebra and we have to figure out which one.

But quantum mechanically, we have more, we also have $U_s$ which does not commute with $K$ and is thus not in the classical algebra. The trick really is that in this bigger quantum algebra generated by both $K$ and $U_s$, we can form a pure state that becomes the probabilistic state when restricted to the classical algebra. And as a pure state, we can come up with a time evolution that turns it into the pure state $|\omega\rangle$.

So, this is really where the non-commutativity and thus the quantumness comes in. And we shouldn't really expect Trudeau to be able to explain this in a two sentence statement.

PS: The actual speed up in the end comes of course from the fact that probabilities are amplitudes squared and the normalization in $|s\rangle$ is $1/\sqrt{N}$ which makes the angle to be rotated by proportional to $1/\sqrt{N}$.

by Robert Helling (noreply@blogger.com) at April 21, 2016 12:55 PM

Robert Helling - atdotde

One more resuscitation
This blog has been silent for almost two years for a number of reasons. First, I myself stopped reading blogs on a daily basis, as in opening Google Reader right after the arXiv and checking what's new. I had already stopped doing that due to time constraints before Reader was shut down by Google and I must say I don't miss anything. My focus shifted much more to Twitter and Facebook and from there, I am directed to the occasional blog post, but as I said, I don't check them systematically anymore. And I assume others do the same.

But from time to time I run into things that I would like to discuss on a blog. Where (as my old readers probably know) I am mainly interested in discussions. I don't write here to educate (others) but only myself. I write about something I found interesting and would like to have further input on.

Plus, this should be more permanent than a Facebook post (which is gone once scrolled out of the bottom of the screen) and more than the occasional 160 character remark on Twitter.

Assuming that others have adapted their reading habits in a similar way by the year 2016, I have set up If This Then That to announce new posts to FB and Twitter so others might have a chance to find them.

by Robert Helling (noreply@blogger.com) at April 21, 2016 11:56 AM

April 20, 2016

The n-Category Cafe

Coalgebraic Geometry

Hi everyone! As some of you may remember, some time ago I was invited to post on the Café, but regrettably I never got around to doing so until now. Mainly I thought that the posts I wanted to write would be old hat to Café veterans, and also I wasn’t used to the interface.

Recently I decided I could at least try occasionally linking to posts I’ve written over at Annoying Precision and seeing how that goes. So, I’ve written two posts on how to start thinking about cocommutative coalgebras as “distributions” on spaces of some sort:

For experts, there’s a background fact I’m dancing around but not stating explicitly, which is that over a field, the category of cocommutative coalgebras is equivalent to the opposite of the category of profinite commutative algebras, which we can interpret as a category of formal schemes. But these posts were already getting too long; I’m trying to say fewer things about each topic I write about so I can write about more topics.

by qchu (qchu@math.berkeley.edu) at April 20, 2016 09:55 PM

Cormac O’Raifeartaigh - Antimatter (Life in a puzzling universe)

End of the second semester

It’s hard to believe we have almost reached the end of the second teaching semester, but there it is. I’m always a bit sorry to see the end of classes but I think it’s important that students are given time to reflect on what they have learnt. With that in mind, I don’t understand why exams start in early May rather than June.

As regards research, I can now concentrate on a review paper that I have been unable to finish for months. Mind you, thanks to the open-plan layout of offices in our college, there will be more – not less – noise and distraction for the next few months as staff are no longer in class. Whoever thought open-plan offices are a good idea for academic staff?

On top of finishing off my various modules, I took it upon myself to give a research seminar this week. The general theory of relativity, Einstein’s greatest contribution to science, is 100 years old this month and I couldn’t resist the invitation to give a brief history of the theory, together with a synopsis of the observational evidence that has emerged supporting many strange predictions of the theory, from black holes to the expanding universe, from the ‘big bang’ to gravitational waves. It took a lot of prep, but I think the talk went well and it was a nice way to finish off the semester.

[Image: seminar poster]

The slides for the talk are here. Now the excitement is over and it’s back to the lonely business of writing research papers….


by cormac at April 20, 2016 08:27 PM

April 19, 2016

Symmetrybreaking - Fermilab/SLAC

Eight things you might not know about light

Light is all around us, but how much do you really know about the photons speeding past you?

There’s more to light than meets the eye. Here are eight enlightening facts about photons:

Illustration by Sandbox Studio, Chicago with Kimberly Boustead

1. Photons can produce shock waves in water or air, similar to sonic booms.

Nothing can travel faster than the speed of light in a vacuum. However, light slows down in air, water, glass and other materials as photons interact with atoms, which has some interesting consequences.

The highest-energy gamma rays from space hit Earth’s atmosphere moving faster than the speed of light in air. These photons produce shock waves in the air, much like a sonic boom, but the effect is to make more photons instead of sound. Observatories like VERITAS in Arizona look for those secondary photons, which are known as Cherenkov radiation. Nuclear reactors also exhibit Cherenkov light in the water surrounding the nuclear fuel.
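To put a number on "faster than light in the medium": for the reactor case, an electron in water (refractive index about 1.33) only emits Cherenkov light above a kinetic energy of roughly a quarter of an MeV. A quick estimate, with illustrative values:

import math

# Cherenkov threshold: the charged particle must satisfy v > c/n
n = 1.33                  # refractive index of water (approximate)
m_e = 0.511               # electron rest energy, MeV
beta = 1/n                # threshold speed, in units of c
gamma = 1/math.sqrt(1 - beta**2)
print((gamma - 1)*m_e)    # threshold kinetic energy, about 0.26 MeV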

 

Illustration by Sandbox Studio, Chicago with Kimberly Boustead

2. Most types of light are invisible to our eyes.

Colors are our brains’ way of interpreting the wavelength of light: how far the light travels before the wave pattern repeats itself. But the colors we see—called “visible” or “optical” light—are only a small sample of the total electromagnetic spectrum.

Red is the longest wavelength light we see, but stretch the waves more and you get infrared, microwaves (including the stuff you cook with) and radio waves. Wavelengths shorter than violet span ultraviolet, X-rays and gamma rays. Wavelength is also a stand-in for energy: The long wavelengths of radio light have low energy, and the short-wavelength gamma rays have the highest energy, a major reason they’re so dangerous to living tissue.
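The conversion is just E = hc/λ; a quick sketch with a few illustrative wavelengths (not taken from the article) shows how wide the range is:

h = 6.626e-34     # Planck constant, J s
c = 2.998e8       # speed of light, m/s
eV = 1.602e-19    # joules per electronvolt

# photon energy E = h c / lambda for a few example wavelengths
for name, lam in [("FM radio, 3 m", 3.0),
                  ("red light, 700 nm", 700e-9),
                  ("gamma ray, 1 pm", 1e-12)]:
    print(name, h*c/lam/eV, "eV")
# roughly 4e-7 eV, 1.8 eV and 1.2 MeV: about twelve orders of magnitude apart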

 

Illustration by Sandbox Studio, Chicago with Kimberly Boustead

3. Scientists can perform measurements on single photons.

Light is made of particles called photons, bundles of the electromagnetic field that carry a specific amount of energy. With sufficiently sensitive experiments, you can count photons or even perform measurements on a single one. Researchers have even frozen light temporarily.

But don’t think of photons like they are pool balls. They’re also wave-like: they can interfere with each other to produce patterns of light and darkness. The photon model was one of the first triumphs of quantum physics; later work showed that electrons and other particles of matter also have wave-like properties.

 

Illustration by Sandbox Studio, Chicago with Kimberly Boustead

4. Photons from particle accelerators are used in chemistry and biology.

Visible light’s wavelengths are larger than atoms and molecules, so we literally can’t see the components of matter. However, the short wavelengths of X-rays and ultraviolet light are suited to showing such small structure. With methods to see these high-energy types of light, scientists get a glimpse of the atomic world.

Particle accelerators can make photons of specific wavelengths by accelerating electrons using magnetic fields; this is called “synchrotron radiation.” Researchers use particle accelerators to make X-rays and ultraviolet light to study the structure of molecules and viruses and even make movies of chemical reactions.

 

Illustration by Sandbox Studio, Chicago with Kimberly Boustead

5. Light is the manifestation of one of the four fundamental forces of nature.

Photons carry the electromagnetic force, one of the four fundamental forces (along with the weak force, the strong force, and gravity). As an electron moves through space, other charged particles feel it thanks to electrical attraction or repulsion. Because the effect is limited by the speed of light, other particles actually react to where the electron was rather than where it actually is. Quantum physics explains this by describing empty space as a seething soup of virtual particles. Electrons kick up virtual photons, which travel at the speed of light and hit other particles, exchanging energy and momentum.

 

Illustration by Sandbox Studio, Chicago with Kimberly Boustead

6. Photons are easily created and destroyed.

Unlike matter, all sorts of things can make or destroy photons. If you’re reading this on a computer screen, the backlight is making photons that travel to your eye, where they are absorbed—and destroyed.

The movement of electrons is responsible for both the creation and destruction of the photons, and that’s the case for a lot of light production and absorption. An electron moving in a strong magnetic field will generate photons just from its acceleration.

Similarly, when a photon of the right wavelength strikes an atom, it disappears and imparts all its energy to kicking the electron into a new energy level. A new photon is created and emitted when the electron falls back to its original energy level. The absorption and emission are responsible for the unique spectrum of light each type of atom or molecule has, which is a major way chemists, physicists, and astronomers identify chemical substances.

 

Illustration by Sandbox Studio, Chicago with Kimberly Boustead

7. When matter and antimatter annihilate, light is a byproduct.

An electron and a positron have the same mass, but opposite quantum properties such as electric charge. When they meet, those opposites cancel each other, converting the masses of the particles into energy in the form of a pair of gamma ray photons.

 

Illustration by Sandbox Studio, Chicago with Kimberly Boustead

8. You can collide photons to make particles.

Photons are their own antiparticles. But here’s the fun bit: the laws of physics governing photons are symmetric in time. That means if we can collide an electron and a positron to get two gamma ray photons, we should be able to collide two photons of the right energy and get an electron-positron pair.
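For a head-on collision, "the right energy" has a simple threshold: the product of the two photon energies must exceed the square of the electron rest energy, E1·E2 ≥ (m_e c²)². A back-of-the-envelope sketch with illustrative partner photons:

m_e = 0.511e6     # electron rest energy, eV

# minimum energy E1 of one photon, given the energy E2 of its head-on partner
for label, E2 in [("partner is another gamma ray (511 keV)", 0.511e6),
                  ("partner is an optical photon (2 eV)", 2.0)]:
    print(label, "-> E1 of at least", m_e**2/E2, "eV")
# 511 keV against 511 keV is enough; against visible light you need roughly 130 GeV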

In practice that’s hard to do: successful experiments generally involve other particles than just light. However, inside the LHC, the sheer number of photons produced during collisions of protons means that some of them occasionally hit each other.

Some physicists are thinking about building a photon-photon collider, which would fire beams of photons into a cavity full of other photons to study the particles that come out of collisions.

by Matthew R. Francis at April 19, 2016 02:17 PM

April 18, 2016

John Baez - Azimuth

Statistical Laws of Darwinian Evolution

guest post by Matteo Smerlak

Biologists like Stephen J. Gould like to emphasize that evolution is unpredictable. They have a point: there is absolutely no way an alien visiting the Earth 400 million years ago could have said:

Hey, I know what’s gonna happen here. Some descendants of those ugly fish will grow wings and start flying in the air. Others will walk the surface of the Earth for a few million years, but they’ll get bored and they’ll eventually go back to the oceans; when they do, they’ll be able to chat across thousands of kilometers using ultrasound. Yet others will grow arms, legs, fur, they’ll climb trees and invent BBQ, and, sooner or later, they’ll start wondering “why all this?”.

Nor can we tell if, a week from now, the flu virus will mutate, become highly pathogenic and forever remove the furry creatures from the surface of the Earth.

Evolution isn’t gravity—we can’t tell in which directions things will fall down.

One reason we can’t predict the outcomes of evolution is that genomes evolve in a super-high dimensional combinatorial space, with a ginormous number of possible turns at every step. Another is that living organisms interact with one another in a massively non-linear way, with feedback loops, tipping points and all that jazz.

Life’s a mess, if you want my physicist’s opinion.

But that doesn’t mean that nothing can be predicted. Think of statistics. Nobody can predict who I’ll vote for in the next election, but it’s easy to tell what the distribution of votes in the country will be like. Thus, for continuous variables which arise as sums of large numbers of independent components, the central limit theorem tells us that the distribution will always be approximately normal. Or take extreme events: the max of N independent random variables is distributed according to a member of a one-parameter family of so-called “extreme value distributions”: this is the content of the famous Fisher–Tippett–Gnedenko theorem.

So this is the problem I want to think about in this blog post: is evolution ruled by statistical laws? Or, in physics terms: does it exhibit some form of universality?

Fitness distributions are the thing

One lesson from statistical physics is that, to uncover universality, you need to focus on relevant variables. In the case of evolution, it was Darwin’s main contribution to figure out the main relevant variable: the average number of viable offspring, aka fitness, of an organism. Other features—physical strength, metabolic efficiency, you name it—matter only insofar as they are correlated with fitness. If we further assume that fitness is (approximately) heritable, meaning that descendants have the same fitness as their ancestors, we get a simple yet powerful dynamical principle called natural selection: in a given population, the lineage with the highest fitness eventually dominates, i.e. its fraction goes to one over time. This principle is very general: it applies to genes and species, but also to non-living entities such as algorithms, firms or language. The general relevance of natural selection as an evolutionary force is sometimes referred to as “Universal Darwinism”.

The general idea of natural selection is pictured below (reproduced from this paper):

It’s not hard to write down an equation which expresses natural selection in general terms. Consider an infinite population in which each lineage grows with some rate x. (This rate is called the log-fitness or Malthusian fitness to contrast it with the number of viable offspring w=e^{x\Delta t} with \Delta t the lifetime of a generation. It’s more convenient to use x than w in what follows, so we’ll just call x “fitness”). Then the distribution of fitness at time t satisfies the equation

\displaystyle{ \frac{\partial p_t(x)}{\partial t} =\left(x-\int d y\, y\, p_t(y)\right)p_t(x) }

whose explicit solution in terms of the initial fitness distribution p_0(x):

\displaystyle{ p_t(x)=\frac{e^{x t}p_0(x)}{\int d y\, e^{y t}p_0(y)} }

is called the Cramér transform of p_0(x) in large deviations theory. That is, viewed as a flow in the space of probability distributions, natural selection is nothing but a time-dependent exponential tilt. (These equations and the results below can be generalized to include the effect of mutations, which are critical to maintain variation in the population, but we’ll skip this here to focus on pure natural selection. See my paper referenced below for more information.)

An immediate consequence of these equations is that the mean fitness \mu_t=\int dx\, x\, p_t(x) grows monotonically in time, with a rate of growth given by the variance \sigma_t^2=\int dx\, (x-\mu_t)^2\, p_t(x):

\displaystyle{ \frac{d\mu_t}{dt}=\sigma_t^2\geq 0 }
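This identity is exact for the pure-selection flow and easy to check numerically from the exponential-tilt solution above. A minimal sketch (the particular p_0 is one of the examples plotted at the end of this post; the grid is an arbitrary discretization):

import numpy as np

x = np.linspace(-10, 10, 4001)
p0 = np.exp(-x**2/2 - 0.01*x**4)          # initial fitness distribution (unnormalized)
p0 /= np.trapz(p0, x)

def moments(t):
    """Mean and variance of the tilted distribution p_t(x) proportional to exp(x t) p_0(x)."""
    pt = np.exp(x*t)*p0
    pt /= np.trapz(pt, x)
    mu = np.trapz(x*pt, x)
    return mu, np.trapz((x - mu)**2*pt, x)

t, dt = 1.0, 1e-4
mu_plus, _ = moments(t + dt)
mu_minus, _ = moments(t - dt)
_, var = moments(t)
print((mu_plus - mu_minus)/(2*dt), var)   # the two numbers coincide: dmu/dt = sigma^2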

The great geneticist Ronald Fisher (yes, the one in the extreme value theorem!) was very impressed with this relationship. He thought it amounted to a biological version of the second law of thermodynamics, writing in his 1930 monograph

Professor Eddington has recently remarked that “The law that entropy always increases—the second law of thermodynamics—holds, I think, the supreme position among the laws of nature”. It is not a little instructive that so similar a law should hold the supreme position among the biological sciences.

Unfortunately, this excitement hasn’t been shared by the biological community, notably because this Fisher “fundamental theorem of natural selection” isn’t predictive: the mean fitness \mu_t grows according to the fitness variance \sigma_t^2, but what determines the evolution of \sigma_t^2? I can’t use the identity above to predict the speed of evolution in any sense. Geneticists say it’s “dynamically insufficient”.

Two limit theorems

But the situation isn’t as bad as it looks. The evolution of p_t(x) may be decomposed into the evolution of its mean \mu_t, of its variance \sigma_t^2, and of its shape or type

\overline{p}_t(x)=\sigma_t p_t(\sigma_t x+\mu_t).

(We also call \overline{p}_t(x) the “standardized fitness distribution”.) With Ahmed Youssef we showed that:

• If p_0(x) is supported on the whole real line and decays at infinity as

-\ln\int_x^{\infty}p_0(y)d y\underset{x\to\infty}{\sim} x^{\alpha}

for some \alpha > 1, then \mu_t\sim t^{\overline{\alpha}-1}, \sigma_t^2\sim t^{\overline{\alpha}-2} and \overline{p}_t(x) converges to the standard normal distribution as t\to\infty. Here \overline{\alpha} is the conjugate exponent to \alpha, i.e. 1/\overline{\alpha}+1/\alpha=1.

• If p_0(x) has a finite right-end point x_+ with

p(x)\underset{x\to x_+}{\sim} (x_+-x)^\beta

for some \beta\geq0, then x_+-\mu_t\sim t^{-1}, \sigma_t^2\sim t^{-2} and \overline{p}_t(x) converges to the flipped gamma distribution

\displaystyle{ p^*_\beta(x)= \frac{(1+\beta)^{(1+\beta)/2}}{\Gamma(1+\beta)} \Theta[x-(1+\beta)^{1/2}] }

\displaystyle { e^{-(1+\beta)^{1/2}[(1+\beta)^{1/2}-x]}\Big[(1+\beta)^{1/2}-x\Big]^\beta }

Here and below the symbol \sim means “asymptotically equivalent up to a positive multiplicative constant”; \Theta(x) is the Heaviside step function. Note that p^*_\beta(x) becomes Gaussian in the limit \beta\to\infty, i.e. the attractors of cases 1 and 2 form a continuous line in the space of probability distributions; the other extreme case, \beta\to0, corresponds to a flipped exponential distribution.

The one-parameter family of attractors p_\beta^*(x) is plotted below:

These results achieve two things. First, they resolve the dynamical insufficiency of Fisher’s fundamental theorem by giving estimates of the speed of evolution in terms of the tail behavior of the initial fitness distribution. Second, they show that natural selection is indeed subject to a form of universality, whereby the relevant statistical structure turns out to be finite dimensional, with only a handful of “conserved quantities” (the \alpha and \beta exponents) controlling the late-time behavior of natural selection. This amounts to a large reduction in complexity and, concomitantly, an enhancement of predictive power.

(For the mathematically-oriented reader, the proof of the theorems above involves two steps: first, translate the selection equation into an equation for (cumulant) generating functions; second, use a suitable Tauberian theorem—the Kasahara theorem—to relate the behavior of generating functions at large values of their arguments to the tail behavior of p_0(x). Details in our paper.)

It’s useful to consider the convergence of fitness distributions to the attractors p_\beta^*(x) for 0\leq\beta\leq \infty in the skewness-kurtosis plane, i.e. in terms of the third and fourth cumulants of p_t(x).

The red curve is the family of attractors, with the normal at the bottom right and the flipped exponential at the top left, and the dots correspond to numerical simulations performed with the classical Wright–Fisher model and with a simple genetic algorithm solving a linear programming problem. The attractors attract!
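The same kind of brute-force check works for the limit theorem itself. Here is a small sketch of case 1, with p_0(x) proportional to exp(-x^4/4), i.e. α = 4 (an arbitrary grid discretization chosen just for illustration; the intermediate numbers depend on the grid, but the late-time values settle near the Gaussian ones):

import numpy as np

x = np.linspace(-14, 14, 280001)
logp0 = -x**4/4                            # alpha = 4 > 1, so case 1 of the theorem applies

for t in [0, 1, 10, 100, 1000]:
    logp = logp0 + x*t                     # exponential tilt, done in log space for stability
    pt = np.exp(logp - logp.max())
    pt /= np.trapz(pt, x)
    mu = np.trapz(x*pt, x)
    sig = np.sqrt(np.trapz((x - mu)**2*pt, x))
    z = (x - mu)/sig
    skew = np.trapz(z**3*pt, x)
    kurt = np.trapz(z**4*pt, x)
    print(t, round(float(skew), 3), round(float(kurt), 3))
# at late times the standardized third and fourth moments approach the Gaussian values (0, 3)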

Conclusion and a question

Statistics is useful because limit theorems (the central limit theorem, the extreme value theorem) exist. Without them, we wouldn’t be able to make any population-level prediction. Same with statistical physics: it’s only because matter consists of large numbers of atoms, and limit theorems hold (the H-theorem, the second law), that macroscopic physics is possible in the first place. I believe the same perspective is useful in evolutionary dynamics: it’s true that we can’t predict how many wings birds will have in ten million years, but we can tell what shape fitness distributions should have if natural selection is true.

I’ll close with an open question for you, the reader. In the central limit theorem as well as in the second law of thermodynamics, convergence is driven by a Lyapunov function, namely entropy. (In the case of the central limit theorem, it’s a relatively recent result by Artstein et al.: the entropy of the normalized sum of n i.i.d. random variables, when it’s finite, is a monotonically increasing function of n.) In the case of natural selection for unbounded fitness, it’s clear that entropy will also be eventually monotonically increasing—the normal is the distribution with largest entropy at fixed variance and mean.

Yet it turns out that, in our case, entropy isn’t monotonic at all times; in fact, the closer the initial distribution p_0(x) is to the normal distribution, the later the entropy of the standardized fitness distribution starts to increase. Or, equivalently, the closer the initial distribution p_0(x) is to the normal, the later its relative entropy with respect to the normal starts to decrease. Why is this? And what’s the actual Lyapunov function for this process (i.e., what functional of the standardized fitness distribution is monotonic at all times under natural selection)?

In the plots above the blue, orange and green lines correspond respectively to

\displaystyle{ p_0(x)\propto e^{-x^2/2-x^4}, \quad p_0(x)\propto e^{-x^2/2-.01x^4}, \quad p_0(x)\propto e^{-x^2/2-.001x^4} }

References

• S. J. Gould, Wonderful Life: The Burgess Shale and the Nature of History, W. W. Norton & Co., New York, 1989.

• M. Smerlak and A. Youssef, Limiting fitness distributions in evolutionary dynamics, 2015.

• R. A. Fisher, The Genetical Theory of Natural Selection, Oxford University Press, Oxford, 1930.

• S. Artstein, K. Ball, F. Barthe and A. Naor, Solution of Shannon’s problem on the monotonicity of entropy, J. Am. Math. Soc. 17 (2004), 975–982.


by John Baez at April 18, 2016 01:00 AM

April 15, 2016

Tommaso Dorigo - Scientificblogging

Another Bet Won - With My Student!
Okay, this one was not about the umpteenth statistical fluctuation, hopelessly believed by somebody to be the start of a new era in particle physics. It's gotten too easy to place and win bets like that - the chance that the Standard Model breaks down due to some unexpected, uncalled-for resonance is so tiny that any bet against it is a safe one. And indeed I have won three bets of that kind so far (and cashed 1200 dollars and a bottle of excellent wine); plus, a fourth (for $100) is going to be payable soon.

read more

by Tommaso Dorigo at April 15, 2016 08:33 PM

Tommaso Dorigo - Scientificblogging

New Physics In The Angular Distribution Of B Decays ?
After decades of theoretical studies and experimental measurements, forty years ago particle physicists managed to construct a very successful theory, one which describes with great accuracy the dynamics of subnuclear particles. This theory is now universally known as the Standard Model of particle physics. Since then, physicists have invested enormous efforts in the attempt of breaking it down.

It is not a contradiction: our understanding of the physical world progresses as we construct a progressively more refined mathematical representation of reality. Often this is done by adding more detail to an existing framework, but in some cases a complete overhaul is needed. And we appear to be in that situation with the Standard Model. 

read more

by Tommaso Dorigo at April 15, 2016 12:46 PM

April 14, 2016

ZapperZ - Physics and Physicists

Debunking Three Baseball Myths
A nice article on the debunking of 3 baseball myths using physics. I'm not that aware of the first two, but that last one, "Swing down on the ball to hit farther," has always been something I thought was unrealistic. Doing that makes it more difficult to make perfect contact, because the timing has to be just right.

This is no different from a serve in tennis, and it's why hitting the ball at its highest point during a serve gives you a better chance of hitting the racket's sweet spot.

Zz.

by ZapperZ (noreply@blogger.com) at April 14, 2016 04:44 PM

Symmetrybreaking - Fermilab/SLAC

Five fascinating facts about DUNE

One: The Deep Underground Neutrino Experiment will look for more than just neutrinos.

The Deep Underground Neutrino Experiment is a project of superlatives. It will use the world’s most intense neutrino beam and largest neutrino detector to study the weirdest and most abundant matter particles in the universe. More than 800 scientists from close to 30 countries are working on the project to crack some long-unanswered questions in physics. It’s part of a worldwide push to discover the missing pieces that could explain how the known particles and forces created the universe we live in. Here’s a two-minute animation that shows how the project will work:

[Embedded video: AYtKcZMJ_4c]

Here are a few more surprising facts about DUNE you might not know:

1. Engineers will use a mile-long fishing line to aim the neutrino beam from Illinois to South Dakota.

DUNE will aim a neutrino beam 800 miles (1300 kilometers) straight through the Earth from Fermilab to the Sanford Underground Research Facility—no tunnel necessary. Although the beam spreads as it travels, like a flashlight beam, it’s important to aim the center of the beam as precisely as possible at DUNE so that the maximum number of neutrinos can create a signal. Since neutrinos are electrically neutral, they can’t be steered by magnets after they’ve been created. Hence everything must be properly aligned—to within a fraction of a millimeter—when the neutrinos are made, emerging from the collisions of protons with carbon atoms.
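The "straight through the Earth" geometry is worth a back-of-the-envelope check: for a 1300-kilometer chord, the beam has to be tilted downward by about 6 degrees and passes roughly 30 kilometers below the surface at its deepest point. A rough sketch, assuming a simple spherical Earth:

import math

R = 6371.0        # mean Earth radius, km
L = 1300.0        # straight-line Fermilab-to-Sanford baseline quoted above, km

dip = math.degrees(math.asin(L/(2*R)))     # downward tilt of the beam at the source
depth = R - math.sqrt(R**2 - (L/2)**2)     # maximum depth of the chord below the surface
print(round(dip, 1), "degrees,", round(depth), "km deep at midpoint")
# about 5.9 degrees and about 33 km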

Properly aligning the neutrino beam means using the Global Positioning System (GPS) to relate Sanford Lab’s underground map to the coordinates of Fermilab’s geographic system, making sure everything speaks the same location language. Part of the process requires mapping points underground to points on the Earth’s surface. To do this, the alignment crew will drop what might be the longest plumb line in the world down the 4850-foot (1.5-kilometer) mineshaft. The current plan is to use very strong fishing line—a mile of it—attached to a heavy weight that is immersed in a barrel of oil to dampen the movement of the pendulum. A laser tracker will record the precise location of the line.

2. Mining crews will move enough rock for two Empire State Buildings up a 14-by-20-foot shaft.

To create caverns that are large enough to host the DUNE detectors, miners need to blast and remove more than 800,000 tons of rock from a mile underground. That’s the equivalent of 8 Nimitz-class aircraft carriers, a comparison often made by Chris Mossey, project director for the Long-Baseline Neutrino Facility (the name of the facility that will support DUNE). Mossey knows a thing or two about aircraft carriers: He happens to be a retired commander of the US Navy's Naval Facilities Engineering Command and oversaw the engineering, construction and maintenance services of US Navy facilities. But not everyone is that familiar with aircraft carriers, so alternatively you can impress your friends by saying that crews will move the weight equivalent of 2.2 Empire State Buildings, 80 Eiffel Towers, 4700 blue whales or 18 billion(ish) Twinkies.

3. The interior of the DUNE detectors will have about the same average temperature as Saturn’s atmosphere.

Argon, an element that makes up almost one percent of the air we breathe, will be the material of choice to fill the DUNE detectors, albeit in its liquid form. As trillions of neutrinos pass through the transparent argon, a handful will interact with an argon nucleus and produce other particles. Those, in turn, will create light and knock loose electrons. Both can be recorded and turned into data that show exactly when and how a neutrino interacted. To keep the argon liquid, the cryogenics system will have to maintain a temperature of around minus 300 degrees Fahrenheit, or minus 184 degrees Celsius. That’s slightly colder than the average temperature of the icy ammonia clouds on the upper layer of Saturn’s atmosphere.

4. The design of DUNE’s detector vessels is inspired by huge transport ships for gas.

DUNE’s set of four detectors will be the largest cryogenic instrument ever installed deep underground. You know who else needs to store and cool large volumes of liquid? The gas industry, which liquefies natural gas to transport it around the world using huge ships with powerful refrigerators. DUNE’s massive, insulated vessels will feature a membrane system that is similar to that used by liquid natural gas transport ships. A stainless steel frame sits inside an insulating layer, sandwiched between aluminum sheets. Multiple layers provide the strength to keep the liquid argon right where it should be—interacting with neutrinos.

5. DUNE will look for more than just neutrinos.

Then why did they name the experiment after the neutrino? Well, most of the experiment is designed to study neutrinos—how they change as they move through space, how they arrive from exploding stars, how neutrinos differ from their antimatter partners, and how they interact with other matter particles. At the same time, the large size of the DUNE detectors and their shielded location a mile underground also make them the perfect tool to continue the search for signs of proton decay. Some theories predict that protons (one of the building blocks that make up the atoms in your body) have a very long but finite lifetime. Eventually they will decay into other particles, creating a signal that DUNE hopes to discover. Fortunately for our atoms, the proton’s estimated lifespan is much longer than the time our universe has existed so far. Because proton decay is expected to be such a rare event, scientists need to monitor lots of protons to catch one in the act—and seventy thousand tons of argon atoms means around 10^34 protons (that’s a 1 with 34 zeroes after it), which isn’t too shabby.
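That 10^34 figure is easy to check on the back of an envelope (a rough sketch, taking argon-40 with 18 protons per nucleus):

N_A = 6.022e23             # Avogadro's number, atoms per mole
mass_g = 70e3*1e6          # seventy thousand metric tons of argon, in grams
molar_mass = 40.0          # argon-40, g/mol
Z = 18                     # protons per argon nucleus

protons = mass_g/molar_mass*N_A*Z
print(protons)             # about 1.9e34 protons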

by Lauren Biron at April 14, 2016 02:01 PM

April 13, 2016

Clifford V. Johnson - Asymptotia

Great Big Exchange

[Image: Great Big Story video still]

Here's a fun Great Big Story (CNN) video piece about the Science and Entertainment Exchange (and a bit about my work on Agent Carter). Click here for the piece.

(Yeah, the headline. Seems you can't have a story about science connecting with the rest of the culture without the word "nerd" being used somewhere...)

-cvj Click to continue reading this post

The post Great Big Exchange appeared first on Asymptotia.

by Clifford at April 13, 2016 07:53 PM

April 12, 2016

Symmetrybreaking - Fermilab/SLAC

Art draws out the beauty of physics

Labs around the world open their doors to aesthetic creation.

When it comes to quantum mechanics, it’s easier to show than tell.

That’s why artist residencies at particle physics labs play an important part in conveying their stories, according to CERN theorist Luis Alvarez-Gaume.

He recently spent some time demonstrating physics concepts to Semiconductor, a duo of visual artists from England known for exploring matter through the tools and processes of science. Their work spans short films and museum pieces and has been shown at festivals all over the world. In July they were awarded a CERN residency as part of the Collide@CERN Ars Electronica Award.

“I tried to show them how we develop an intuition for quantum mechanics by applying the principles and getting used to the way it functions,” Alvarez-Gaume says. “Because honestly, I cannot explain quantum mechanics even to a scientist.”

The physicist laughed when he made that statement, but the artists, Ruth Jarman and Joe Gerhardt, are comforted by the sentiment. They soaked up all they could during their two-month stay in late 2015 and are still processing interviews and materials they’ll use to develop a major work based on their experiences. 

“Particle physics is the most challenging subject we’ve ever worked with because it’s so difficult to create a tangible idea about it, and that’s kind of what we are all about,” Jarman says, adding that they are fully up for the challenge.

Besides speaking with theorists and experimentalists, the artists explored interesting spaces at CERN and filmed both the construction of a new generation of magnets and a workshop where scientists were developing prototypes of instruments.

“We also dug around a lot in the archives,” Gerhardt says. “It’s such an amazing place and we only really touched the surface.”

But they have a lot of faith in the process based on past experiences working in scientific settings.

A 2007 work called “Magnetic Movie” was based on a similar stay at NASA’s Space Sciences Laboratories at UC Berkeley, where the artists captured the "secret lives of invisible magnetic fields." In the film, brightly colored streams and blobs emanate from various rooms at the lab to the sounds of VLF (very low frequency) audio recordings and scientists talking.

“Are we observing a series of scientific experiments, the universe in flux or a documentary of a fictional world?” the artists ask on their website.

The piece won multiple awards at international film festivals. But, just as importantly to the artists, the scientists were excited about the way it celebrated their work, “even though it was removed from their context,” Jarman says.

Picturing the invisible

At the Department of Energy’s Fermilab, another group of artists has taken on the challenge of “visualizing the invisible.” Current artist-in-residence Ellen Sandor and her collaborative group (art)^n have been brushing up on neutrinos and the machines that study them.

Their goal is to put their own cutting-edge technologies to use in scientifically accurate and “transcendent” artworks that tell the story of Fermilab’s past, present and future, the artist says.

Sandor is known as a pioneer of virtual photography. In the 1980s she invented a new medium called PHSColograms, 3-D images that combine photography, holography, sculpture and computer graphics to create what she calls “immersive” experiences.

The group will use PHSColograms, sculpture, 3D printing, virtual reality and projection mapping in a body of work that will eventually be on display at the lab. 

“We want to tell the story with scientific visualization and also with abstraction,” Sandor says. “But all of the images will be exciting and artistic.”

The value of such rich digital visuals lies in what Sandor calls their “wow factor,” according to Sam Zeller, neutrino physicist and science advisor for the artist-in-residence program. 

“We scientists don’t always know how to hit that mark, but she does,” Zeller says. “These three-dimensional immersive images come closer to the video game environment. If we want to capture the imagination of school-age children, we can’t just stand in front of a poster and talk anymore.”

Zeller, co-spokesperson of the MicroBooNE experiment, and her team are collaborating with the artists on virtual reality visualizations of a new detector technology called a liquid-argon time projection chamber. The detector components, as well as the reactions the detector records, are sealed inside a stainless steel vessel, out of view.

“Because she strives for scientific accuracy, we can use Sandor’s art to help us explain how our detector works and demonstrate it to the public,” Zeller says.

Growing collaborations

According to Monica Bello, head of Arts@CERN, programs that combine art and science are a growing trend around the globe.

Organizations such as the Arts Catalyst Centre for Art, Science & Technology in London commission science-related art worldwide, and galleries like Kapelica Gallery in Ljubljana, Slovenia, present contemporary art focused largely on science and technology.

US nonprofit Leonardo, The International Society for the Arts, Sciences and Technology, supports cross-disciplinary research and international artist and scientist residencies and events. 

“However, programs of this kind founded within scientific institutions and with full support are still rare,” Bello says. Yet, many labs, including TRIUMF in Canada and INFN in Italy, host art exhibits, events or occasional artist residencies.

“While we don’t bring on full-time artists continually, TRIUMF offers a suite of initiatives that explore the intersection of art and science,” says Melissa Baluk, communications coordinator at TRIUMF.  “A great example is our ongoing partnership with artist Ingrid Koenig of Emily Carr University of Art + Design here in Vancouver. Koenig tailors some of her fine art classes to these intersections, for example, courses called ‘Black Holes and Other Transformations of Energy’ and ‘Quantum Entanglements: Manifestations in Practice.’” 

The collaboration invites physicists to Koenig’s studio and draws her students to the lab. “It’s a wonderful partnership that allows all involved to discover new ways of thinking about the interconnections between art, science, and culture on a scale that works for us,” Baluk says. 

Fermilab’s robust commitment to the arts reaches back to its founding director, the physicist and artist Robert Wilson. His sculptures are still exhibited around the lab, says Georgia Schwender, curator of the Fermilab Art Gallery.

Schwender finds that art-science programs attract the community through the unconventional pairing of subjects; events such as the international Art@CMS exhibit last year at Fermilab are very well received. 

“It’s not just a physics or an art class,” she says. “People who might be a little afraid of the art or a little afraid of the science are less intimidated when you bring them together.”

Fermilab recently complemented its tradition of cultural engagement with a new artist residency, which began in 2014 with mixed media artist Lindsay Olson.

Art-physics interactions

Science as a subject for art has grown since Sandor’s first PHSCologram of the AIDS virus bloomed into a career of art-science collaborations.  

“In the beginning it was almost practical. People were dying, and we wanted to bring everything to the surface and leave nothing hidden,” the artist says. “By the 1990s I realized that scientists were the rock stars of the future, and that’s even truer today.”

Sandor relishes being part of the scientific process. Drawing out the hidden beauty of particle physics to create something scientifically accurate and artistically stunning has been one of the most satisfying projects to date, she says.

Like Sandor, Semiconductor works with authentic scientific data, but they also emphasize how the language of science influences our experience of nature. 

“The data represents something we can’t actually see, feel or touch,” Jarman says. “We reference the tools and processes of science and encourage the noise and the artifact to constantly remind people that it is man observing nature, but not actually how it is.”

Both Zeller and Alvarez-Gaume have personal interests in art and find value in the similarities and differences between the fields.

“Our objectives are very different, but our paths are similar,” Alvarez-Gaume says. “We experience inspiration, passion and frustration. We work through trial and error, failing most of the time.”

Like art, science is abstract but enjoyable, he adds. “Theoretical physicists will tell you there is beauty in science—a sense of awe. Art helps bring this to the surface. People are not interested in the details: They want to get a vision, a picture about why we think particle physics is interesting or exciting.”

Zeller finds her own inspiration in art-science collaborations. 

“One of the things that surprised me the most in working with artists was the fact that they could articulate much better than I could what it is that my research achieves for humankind, and this reinvigorated me with excitement about my work,” she says.

Yet, one key difference between art and science speaks for the need to nurture their growing intersections, Alvarez-Gaume says. 

“Science is inevitable; art is fragile. Without Einstein it may have taken many, many years, and many people working on it, but we still would have come up with his theories. If Beethoven died at age 5, we would not have the sonatas; art is not repeatable.”

And a world without art is not a world he would like to imagine.

by Angela Anderson at April 12, 2016 02:10 PM

April 11, 2016

John Baez - Azimuth

Diamonds and Triamonds

The structure of a diamond crystal is fascinating. But there’s an equally fascinating form of carbon, called the triamond, that’s theoretically possible but never yet seen in nature. Here it is:

In the triamond, each carbon atom is bonded to three others at 120° angles, with one double bond and two single bonds. Its bonds lie in a plane, so we get a plane for each atom.

But here’s the tricky part: for any two neighboring atoms, these planes are different. In fact, if we draw the bond planes for all the atoms in the triamond, they come in four kinds, parallel to the faces of a regular tetrahedron!

If we discount the difference between single and double bonds, the triamond is highly symmetrical. There’s a symmetry carrying any atom and any of its bonds to any other atom and any of its bonds. However, the triamond has an inherent handedness, or chirality. It comes in two mirror-image forms.

A rather surprising thing about the triamond is that the smallest rings of atoms are 10-sided. Each atom lies in 15 of these 10-sided rings.

Some chemists have argued that the triamond should be ‘metastable’ at room temperature and pressure: that is, it should last for a while but eventually turn to graphite. Diamonds are also considered metastable, though I’ve never seen anyone pull an old diamond ring from their jewelry cabinet and discover to their shock that it’s turned to graphite. The big difference is that diamonds are formed naturally under high pressure—while triamonds, it seems, are not.

Nonetheless, the mathematics behind the triamond does find its way into nature. A while back I told you about a minimal surface called the ‘gyroid’, which is found in many places:

The physics of butterfly wings.

It turns out that the pattern of a gyroid is closely connected to the triamond! So, if you’re looking for a triamond-like pattern in nature, certain butterfly wings are your best bet:

• Matthias Weber, The gyroids: algorithmic geometry III, The Inner Frame, 23 October 2015.

Instead of trying to explain it here, I’ll refer you to the wonderful pictures at Weber’s blog.

Building the triamond

I want to tell you a way to build the triamond. I saw it here:

• Toshikazu Sunada, Crystals that nature might miss creating, Notices of the American Mathematical Society 55 (2008), 208–215.

This is the paper that got people excited about the triamond, though it was discovered much earlier by the crystallographer Fritz Laves back in 1932, and Coxeter named it the Laves graph.

To build the triamond, we can start with this graph:

It’s called \mathrm{K}_4, since it’s the complete graph on four vertices, meaning there’s one edge between each pair of vertices. The vertices correspond to four different kinds of atoms in the triamond: let’s call them red, green, yellow and blue. The edges of this graph have arrows on them, labelled with certain vectors

e_1, e_2, e_3, f_1, f_2, f_3 \in \mathbb{R}^3

Let’s not worry yet about what these vectors are. What really matters is this: to move from any atom in the triamond to any of its neighbors, you move along the vector labeling the edge between them… or its negative, if you’re moving against the arrow.

For example, suppose you’re at any red atom. It has 3 nearest neighbors, which are blue, green and yellow. To move to the blue neighbor you add f_1 to your position. To move to the green one you subtract e_2, since you’re moving against the arrow on the edge connecting red and green. Similarly, to go to the yellow neighbor you subtract the vector f_3 from your position.

Thus, any path along the bonds of the triamond determines a path in the graph \mathrm{K}_4.

Conversely, if you pick an atom of some color in the triamond, any path in \mathrm{K}_4 starting from the vertex of that color determines a path in the triamond! However, going around a loop in \mathrm{K}_4 may not get you back to the atom you started with in the triamond.

Mathematicians summarize these facts by saying the triamond is a ‘covering space’ of the graph \mathrm{K}_4.

Now let’s see if you can figure out those vectors.

Puzzle 1. Find vectors e_1, e_2, e_3, f_1, f_2, f_3 \in \mathbb{R}^3 such that:

A) All these vectors have the same length.

B) The three vectors coming out of any vertex lie in a plane at 120° angles to each other:

For example, f_1, -e_2 and -f_3 lie in a plane at 120° angles to each other. We put in two minus signs because two arrows are pointing into the red vertex.

C) The four planes we get this way, one for each vertex, are parallel to the faces of a regular tetrahedron.

If you want, you can even add another constraint:

D) All the components of the vectors e_1, e_2, e_3, f_1, f_2, f_3 are integers.

Diamonds and hyperdiamonds

That’s the triamond. Compare the diamond:

Here each atom of carbon is connected to four others. This pattern is found not just in carbon but also other elements in the same column of the periodic table: silicon, germanium, and tin. They all like to hook up with four neighbors.

The pattern of atoms in a diamond is called the diamond cubic. It’s elegant but a bit tricky. Look at it carefully!

To build it, we start by putting an atom at each corner of a cube. Then we put an atom in the middle of each face of the cube. If we stopped there, we would have a face-centered cubic. But there are also four more carbons inside the cube—one at the center of each tetrahedron we’ve created.

If you look really carefully, you can see that the full pattern consists of two interpenetrating face-centered cubic lattices, one offset relative to the other along the cube’s main diagonal.

The face-centered cubic is the 3-dimensional version of a pattern that exists in any dimension: the \mathrm{D}_n lattice. To build this, take an n-dimensional checkerboard and alternately color the hypercubes red and black. Then, put a point in the center of each black hypercube!

You can also get the \mathrm{D}_n lattice by taking all n-tuples of integers that sum to an even integer. Requiring that they sum to something even is a way to pick out the black hypercubes.

The diamond is also an example of a pattern that exists in any dimension! I’ll call this the hyperdiamond, but mathematicians call it \mathrm{D}_n^+, because it’s the union of two copies of the \mathrm{D}_n lattice. To build it, first take all n-tuples of integers that sum to an even integer. Then take all those points shifted by the vector (1/2, …, 1/2).

In any dimension, the volume of the unit cell of the hyperdiamond is 1, so mathematicians say it’s unimodular. But only in even dimensions is the sum or difference of any two points in the hyperdiamond again a point in the hyperdiamond. Mathematicians call a discrete set of points with this property a lattice.
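That parity claim is easy to test numerically. Here is a minimal Python sketch (my own check, not part of the original post): it samples random points of the hyperdiamond and asks whether their sums land back inside it.

from fractions import Fraction
from random import randint, random

HALF = Fraction(1, 2)

def in_D(v):
    # The D_n lattice: integer vectors whose coordinates sum to an even number.
    return all(x.denominator == 1 for x in v) and sum(v) % 2 == 0

def in_D_plus(v):
    # The hyperdiamond D_n^+: D_n together with D_n shifted by (1/2, ..., 1/2).
    return in_D(v) or in_D(tuple(x - HALF for x in v))

def random_point(n):
    w = [randint(-3, 3) for _ in range(n)]
    if sum(w) % 2:                         # force an even coordinate sum
        w[0] += 1
    v = tuple(Fraction(x) for x in w)
    if random() < 0.5:                     # sometimes use the shifted copy
        v = tuple(x + HALF for x in v)
    return v

def closed_under_addition(n, trials=2000):
    return all(in_D_plus(tuple(a + b for a, b in zip(random_point(n), random_point(n))))
               for _ in range(trials))

for n in (3, 4, 8):
    print(n, closed_under_addition(n))     # expect: 3 False, 4 True, 8 True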

If even dimensions are better than odd ones, how about dimensions that are multiples of 4? Then the hyperdiamond is better still: it’s an integral lattice, meaning that the dot product of any two vectors in the lattice is again an integer.

And in dimensions that are multiples of 8, the hyperdiamond is even better. It’s even, meaning that the dot product of any vector with itself is even.

In fact, even unimodular lattices are only possible in Euclidean space when the dimension is a multiple of 8. In 8 dimensions, the only even unimodular lattice is the 8-dimensional hyperdiamond, which is usually called the \mathrm{E}_8 lattice. The \mathrm{E}_8 lattice is one of my favorite entities, and I’ve written a lot about it in this series:

Integral octonions.

To me, the glittering beauty of diamonds is just a tiny hint of the overwhelming beauty of \mathrm{E}_8.

But let’s go back down to 3 dimensions. I’d like to describe the diamond rather explicitly, so we can see how a slight change produces the triamond.

It will be less stressful if we double the size of our diamond. So, let’s start with a face-centered cubic consisting of points whose coordinates are even integers summing to a multiple of 4. That consists of these points:

(0,0,0)   (2,2,0)   (2,0,2)   (0,2,2)

and all points obtained from these by adding multiples of 4 to any of the coordinates. To get the diamond, we take all these together with another face-centered cubic that’s been shifted by (1,1,1). That consists of these points:

(1,1,1)   (3,3,1)   (3,1,3)   (1,3,3)

and all points obtained by adding multiples of 4 to any of the coordinates.

The triamond is similar! Now we start with these points

(0,0,0)   (1,2,3)   (2,3,1)   (3,1,2)

and all the points obtained from these by adding multiples of 4 to any of the coordinates. To get the triamond, we take all these together with another copy of these points that’s been shifted by (2,2,2). That other copy consists of these points:

(2,2,2)   (3,0,1)   (0,1,3)   (1,3,0)

and all points obtained by adding multiples of 4 to any of the coordinates.

Unlike the diamond, the triamond has an inherent handedness, or chirality. You’ll note how we used the point (1,2,3) and took cyclic permutations of its coordinates to get more points. If we’d started with (3,2,1) we would have gotten the other, mirror-image version of the triamond.
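If you'd rather let a computer check this construction than do it by hand, here is a minimal Python sketch (mine, using the coordinates above): it tiles a few copies of the pattern and verifies that every atom has exactly three nearest neighbors, all at 120° from one another.

from itertools import product

BASE = [(0, 0, 0), (1, 2, 3), (2, 3, 1), (3, 1, 2)]
CELL = BASE + [tuple((c + 2) % 4 for c in p) for p in BASE]   # the copy shifted by (2,2,2)

def nearest_neighbours(atom):
    # All atoms, in copies of the cell translated by multiples of 4,
    # that sit at the minimal nonzero distance from the given atom.
    candidates = []
    for p in CELL:
        for shift in product((-4, 0, 4), repeat=3):
            q = tuple(c + s for c, s in zip(p, shift))
            if q != atom:
                d2 = sum((a - b) ** 2 for a, b in zip(atom, q))
                candidates.append((d2, q))
    dmin = min(d2 for d2, _ in candidates)
    return dmin, [q for d2, q in candidates if d2 == dmin]

for atom in CELL:
    d2, nbrs = nearest_neighbours(atom)
    assert d2 == 2 and len(nbrs) == 3                 # three bonds, each of length sqrt(2)
    vecs = [tuple(b - a for a, b in zip(atom, n)) for n in nbrs]
    for i in range(3):
        for j in range(i + 1, 3):
            # |v|^2 cos(120 degrees) = 2 * (-1/2) = -1 for each pair of bonds
            assert sum(x * y for x, y in zip(vecs[i], vecs[j])) == -1

print("every atom has exactly 3 neighbours, at 120 degree angles")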

Covering spaces

I mentioned that the triamond is a ‘covering space’ of the graph \mathrm{K}_4. More precisely, there’s a graph T whose vertices are the atoms of the triamond, and whose edges are the bonds of the triamond. There’s a map of graphs

p: T \to \mathrm{K}_4

This automatically means that every path in T is mapped to a path in \mathrm{K}_4. But what makes T a covering space of \mathrm{K}_4 is that any path in \mathrm{K}_4 can be lifted to a path in T, and the lift is unique once we choose its starting point.

If you’re a high-powered mathematician you might wonder if T is the universal covering space of \mathrm{K}_4. It’s not, but it’s the universal abelian covering space.

What does this mean? Any path in \mathrm{K}_4 gives a sequence of vectors e_1, e_2, e_3, f_1, f_2, f_3 and their negatives. If we pick a starting point in the triamond, this sequence describes a unique path in the triamond. When does this path get you back where you started? The answer, I believe, is this: if and only if you can take your sequence, rewrite it using the commutative law, and cancel like terms to get zero. This is related to how adding vectors in \mathbb{R}^3 is a commutative operation.

For example, there’s a loop in \mathrm{K}_4 that goes “red, blue, green, red”. This gives the sequence of vectors

f_1, -e_3, e_2

We can turn this into an expression

f_1 - e_3 + e_2

However, we can’t simplify this to zero using just the commutative law and cancelling like terms. So, if we start at some red atom in the triamond and take the unique path that goes “red, blue, green, red”, we do not get back where we started!

Note that in this simplification process, we’re not allowed to use what the vectors “really are”. It’s a purely formal manipulation.

Puzzle 2. Describe a loop of length 10 in the triamond using this method. Check that you can simplify the corresponding expression to zero using the rules I described.

A similar story works for the diamond, but starting with a different graph:

The graph formed by a diamond’s atoms and the edges between them is the universal abelian cover of this little graph! This graph has 2 vertices because there are 2 kinds of atom in the diamond. It has 4 edges because each atom has 4 nearest neighbors.

Puzzle 3. What vectors should we use to label the edges of this graph, so that the vectors coming out of any vertex describe how to move from that kind of atom in the diamond to its 4 nearest neighbors?

There’s also a similar story for graphene, which is a hexagonal array of carbon atoms in a plane:

Puzzle 4. What graph with edges labelled by vectors in \mathbb{R}^2 should we use to describe graphene?

I don’t know much about how this universal abelian cover trick generalizes to higher dimensions, though it’s easy to handle the case of a cubical lattice in any dimension.

Puzzle 5. I described higher-dimensional analogues of diamonds: are there higher-dimensional triamonds?

References

The Wikipedia article is good:

• Wikipedia, Laves graph.

They say this graph has many names: the K4 crystal, the (10,3)-a network, the srs net, the diamond twin, and of course the triamond. The name triamond is not very logical: while each carbon has 3 neighbors in the triamond, each carbon has not 2 but 4 neighbors in the diamond. So, perhaps the diamond should be called the ‘quadriamond’. In fact, the word ‘diamond’ has nothing to do with the prefix ‘di-‘ meaning ‘two’. It’s more closely related to the word ‘adamant’. Still, I like the word ‘triamond’.

This paper describes various attempts to find the Laves graph in chemistry:

• Stephen T. Hyde, Michael O’Keeffe, and Davide M. Proserpio, A short history of an elusive yet ubiquitous structure in chemistry, materials, and mathematics, Angew. Chem. Int. Ed. 47 (2008), 7996–8000.

This paper does some calculations arguing that the triamond is a metastable form of carbon:

• Masahiro Itoh et al, New metallic carbon crystal, Phys. Rev. Lett. 102 (2009), 055703.

Abstract. Recently, mathematical analysis clarified that sp2 hybridized carbon should have a three-dimensional crystal structure (\mathrm{K}_4) which can be regarded as a twin of the sp3 diamond crystal. In this study, various physical properties of the \mathrm{K}_4 carbon crystal, especially for the electronic properties, were evaluated by first principles calculations. Although the \mathrm{K}_4 crystal is in a metastable state, a possible pressure induced structural phase transition from graphite to \mathrm{K}_4 was suggested. Twisted π states across the Fermi level result in metallic properties in a new carbon crystal.

The picture of the \mathrm{K}_4 crystal was placed on Wikicommons by someone named ‘Workbit’, under a Creative Commons Attribution-Share Alike 4.0 International license. The picture of the tetrahedron was made using Robert Webb’s Stella software and placed on Wikicommons. The pictures of graphs come from Sunada’s paper, though I modified the picture of \mathrm{K}_4. The moving image of the diamond cubic was created by H.K.D.H. Bhadeshia and put into the public domain on Wikicommons. The picture of graphene was drawn by Dr. Thomas Szkopek and put into the public domain on Wikicommons.


by John Baez at April 11, 2016 08:43 PM

April 08, 2016

Jester - Resonaances

April Fools' 16: Was LIGO a hack?

This post is an April Fools' joke. LIGO's gravitational waves are for real. At least I hope so ;) 

We have had recently a few scientific embarrassments, where a big discovery announced with great fanfares was subsequently overturned by new evidence.  We still remember OPERA's faster than light neutrinos which turned out to be a loose cable, or BICEP's gravitational waves from inflation, which turned out to be galactic dust emission... It seems that another such embarrassment is coming our way: the recent LIGO's discovery of gravitational waves emitted in a black hole merger may share a similar fate. There are reasons to believe that the experiment was hacked, and the signal was injected by a prankster.

From the beginning, one reason to be skeptical about LIGO's discovery was that the signal  seemed too beautiful to be true. Indeed, the experimental curve looked as if taken out of a textbook on general relativity, with a clearly visible chirp signal from the inspiral phase, followed by a ringdown signal when the merged black hole relaxes to the Kerr state. The reason may be that it *is* taken out of a  textbook. This is at least what is strongly suggested by recent developments.

On EvilZone, a well-known hackers' forum, a hacker using the nickname Madhatter boasted that it was possible to tamper with scientific instruments, including the LHC, the Fermi satellite, and the LIGO interferometer. When challenged, he or she uploaded a piece of code that allows one to access LIGO computers. Apparently, the hacker took advantage of the same backdoor that allows selected members of the LIGO team to inject a fake signal in order to test the analysis chain. This was brought to the attention of the collaboration members, who decided to test the code. To everyone's bewilderment, the code reproduced exactly the same signal in the LIGO apparatus as the one observed in September last year!

Even though no traces of a hack can be found, there is little doubt now that foul play was involved. It is not clear what the hacker's motive was: was it just a prank, or an elaborate plan to discredit the scientists? What is even more worrying is that the same thing could happen in other experiments. The rumor is that the ATLAS and CMS collaborations are already checking whether the 750 GeV diphoton resonance signal could also have been injected by a hacker.

by Jester (noreply@blogger.com) at April 08, 2016 03:46 AM

April 07, 2016

Symmetrybreaking - Fermilab/SLAC

Physicists build ultra-powerful accelerator magnet

An international partnership to upgrade the LHC has yielded the strongest accelerator magnet ever created.

The next generation of cutting-edge accelerator magnets is no longer just an idea. Recent tests revealed that the United States and CERN have successfully co-created a prototype superconducting accelerator magnet that is much more powerful than those currently inside the Large Hadron Collider. Engineers will incorporate more than 20 magnets similar to this model into the next iteration of the LHC, which will take the stage in 2026 and increase the LHC’s luminosity by a factor of ten. That translates into a ten-fold increase in the data rate.

“Building this magnet prototype was truly an international effort,” says Lucio Rossi, the head of the High-Luminosity (HighLumi) LHC project at CERN. “Half the magnetic coils inside the prototype were produced at CERN, and half at laboratories in the United States.”

During the original construction of the Large Hadron Collider, US Department of Energy national laboratories foresaw the future need for stronger LHC magnets and created the LHC Accelerator Research Program (LARP): an R&D program committed to developing new accelerator technology for future LHC upgrades.

This 1.5-meter-long model, which is a fully functioning accelerator magnet, was developed by scientists and engineers at Fermilab, Brookhaven National Laboratory, Lawrence Berkeley National Laboratory, and CERN. The magnet recently underwent an intense testing program at Fermilab, which it passed in March with flying colors. It will now undergo a rigorous series of endurance and stress tests to simulate the arduous conditions inside a particle accelerator.

The 1.5-meter prototype magnet, the MQXF1 quadrupole, sits at Fermilab before testing.

G. Ambrosio (US-LARP and Fermilab), P. Ferracin and E. Todesco (CERN TE-MSC)

This new type of magnet will replace about 5 percent of the LHC’s focusing and steering magnets when the accelerator is converted into the High-Luminosity LHC, a planned upgrade which will increase the number and density of protons packed inside the accelerator. The HL-LHC upgrade will enable scientists to collect data at a much faster rate.

The LHC’s magnets are made by repeatedly winding a superconducting cable into long coils. These coils are then installed on all sides of the beam pipe and encased inside a superfluid helium cryogenic system. When cooled to 1.9 Kelvin, the coils can carry a huge amount of electrical current with zero electrical resistance. By modulating the amount of current running through the coils, engineers can manipulate the strength and quality of the resulting magnetic field and control the particles inside the accelerator.

The magnets currently inside the LHC are made from niobium titanium, a superconductor that can operate inside a magnetic field of up to 10 teslas before losing its superconducting properties. This new magnet is made from niobium-three tin (Nb3Sn), a superconductor capable of carrying current through a magnetic field of up to 20 teslas.

“We’re dealing with a new technology that can achieve far beyond what was possible when the LHC was first constructed,” says Giorgio Apollinari, Fermilab scientist and Director of US LARP. “This new magnet technology will make the HL-LHC project possible and empower physicists to think about future applications of this technology in the field of accelerators.”

A High-Luminosity LHC coil similar to those incorporated into the successful magnet prototype shows the collaboration between CERN and the LHC Accelerator Research Program, LARP.

Photo by Reidar Hahn, Fermilab

This technology is powerful and versatile—like upgrading from a moped to a motorcycle. But this new super material doesn’t come without its drawbacks.

“Niobium-three tin is much more complicated to work with than niobium titanium,” says Peter Wanderer, head of the Superconducting Magnet Division at Brookhaven National Lab. “It doesn’t become a superconductor until it is baked at 650 degrees Celsius. This heat-treatment changes the material’s atomic structure and it becomes almost as brittle as ceramic.”

Building a moose-sized magnet from a material more fragile than a teacup is not an easy endeavor. Scientists and engineers at the US national laboratories spent 10 years designing and perfecting a new and internationally reproducible process to wind, form, bake and stabilize the coils.

“The LARP-CERN collaboration works closely on all aspects of the design, fabrication and testing of the magnets,” says Soren Prestemon of the Berkeley Center for Magnet Technology at Berkeley Lab. “The success is a testament to the seamless nature of the collaboration, the level of expertise of the teams involved, and the ownership shown by the participating laboratories.”

This model is a huge success for the engineers and scientists involved. But it is only the first step toward building the next big supercollider.

“This test showed that it is possible,” Apollinari says. “The next step is to apply everything we’ve learned, moving from this prototype into bigger and bigger magnets.”

by Sarah Charley at April 07, 2016 06:41 PM

Lubos Motl - string vacua and pheno

\(P=NP\) and string landscape: 10 years later
The newborn Universe was a tiny computer but it still figured out amazing things
Two new papers: First, off-topic. PRL published a Feb 2016 LIGO paper claiming that the gravitational waves (Phys.Org says "gravity waves" and I find it OK) contribute much more to the background than previously believed. Also, Nature published a German-American-Canadian paper claiming that supermassive black holes are omnipresent because they saw one (with mass of 17 billions Suns) even in an isolated galaxy. Also, the LHC will open the gate to hell in 2016 again, due to the higher luminosity (or power, they don't care). CERN folks have recommended a psychiatrist to the conspiracy theorists such as Pope Francis. But God demands at most 4 inverse femtobarns a year because the fifth one is tickling Jesus' testicles.
Ten years ago, the research of the string landscape was a hotter topic than it is today. Because of some recent exchanges about \(P=NP\), I was led to recall the February 2006 paper by Michael Douglas and Frederik Denef,
Computational complexity of the landscape I
They wrote that the string landscape has lots of elements and that, invoking the computer scientists' widespread lore that \(P\neq NP\), it's probably going to be permanently impossible to find the right string vacuum even if the string landscape predicts one.

The authors have promised the second paper in the series,
[48] F. Denef and M. R. Douglas, “Computational Complexity of the Landscape II: Cosmological Considerations,” to appear
but now, more than 10 years later, we are still waiting for this companion paper to appear. ;-) What new ideas and views do I have 10 years later?




Douglas and Denef are extremely smart men. And they started – or Douglas started – to look at the landscape from a "global" or "computational" perspective sometime in 2004 or so (see e.g. the TRF blog posts mentioning the two names, starting in 2005). These two authors co-wrote 6 inequivalent papers and you may see that the paper on the complexity was the least cited one among them, although 56 citations is decent.




Papers trying to combine \(P\neq NP\) i.e. computer science with string phenomenology may be said to be "interdisciplinary". You may find people who are ready to equate "interdisciplinary" with "deep and amazing", for example Lee Smolin. However, the opinion of the actual, achieved physicists is different. They're mostly ambiguous about the "sign" of the adjective "interdisciplinary" and many of them are pretty seriously dismissive of papers with this adjective because they largely equate "interdisciplinary" with "written by cranks such as Lee Smolin".

So whether or not you view "interdisciplinary" things as things that are "automatically better than others" depends on whether or not you are a crackpot yourself. Don't get me wrong: I do acknowledge that there are interesting papers that could be classified as interdisciplinary. Sometimes, a new discipline ready to explode is born in that way. However, I do dismiss the people who use this "interdisciplinary spirit" as a cheap way to look smarter, broader, and deeper in the eyes of the ignorant masses. The content of these papers is rubbish almost as a matter of a general law. The "interdisciplinary" status is often used to claim that the papers don't have to obey the quality requirements of either "main" discipline.

But I want to mention more essential things here.

If you read the rather pedagogic Wikipedia article on \(P=NP\), the first example of an \(NP\) problem you will encounter is the "subset sum problem". It's in \(NP\) because a proposed solution can be verified in at most \(n^k\) steps for some fixed power \(k\); here \(n\) is the number of elements in the set.
For instance, does a subset of the set \(\{−2, −3, 15, 14, 7, −10\}\) add up to \(0\)? The answer "yes, because the subset \(\{−2, −3, −10, 15\}\) adds up to zero" can be quickly verified with three additions. There is no known algorithm to find such a subset in polynomial time (there is one, however, in exponential time, which consists of \(2^n-n-1\) tries), but such an algorithm exists if \(P = NP\); hence this problem is in \(NP\) (quickly checkable) but not necessarily in \(P\) (quickly solvable).
If the set has \(n\) elements, there are \(2^n-1\) ways to choose a non-empty subset because you have to pick the value of \(n\) bits. It's not hard to convince yourself that you should give up the search for the right subset of numbers – whose sum is zero – if \(n\) is too large and the numbers look hopelessly random. It looks like the manual verification of all possible \(2^n-1\) subsets is the only plausible way to approach the problem.
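To make this concrete, here is the brute-force strategy in a few lines of Python – a sketch of the exponential-time enumeration described above, nothing smarter:

from itertools import combinations

def zero_subset_brute_force(numbers):
    # Try every non-empty subset (2^n - 1 of them) and return the first
    # one whose elements sum to zero, or None if there is none.
    for size in range(1, len(numbers) + 1):
        for subset in combinations(numbers, size):
            if sum(subset) == 0:
                return subset
    return None

print(zero_subset_brute_force([-2, -3, 15, 14, 7, -10]))   # (-2, -3, 15, -10)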

But the fact that you don't know where to start or you have convinced yourself to give up doesn't mean that no one can ever find a faster solution to find the right subset. You may want an algorithm that doesn't "discriminate" against some sets of \(n\) numbers. And indeed, it seems plausible that no fast non-discriminatory method to find the right solution exists.

However, the faster solution to the "subset sum problem" may very well be discriminatory. It may be an algorithm that chooses and cleverly combines very different strategies depending on what the numbers actually are. Imagine that some numbers in the set are of order \(1\), others are of order \(1,000\), others are of order \(1,000,000\) etc. With some extra assumptions, the small numbers don't matter and the "larger" numbers must approximately cancel "first" before you consider the smaller numbers.

In that case, you start with the "largest" numbers of order one million, and solve the "much smaller" problem, and you know which of these largest numbers are in the right subset and which of them aren't. Their sum isn't exactly equal to zero – it differs by a difference of order \(1,000\) – and you may pick the right subset of the numbers of order \(1,000\), and so on. With this hierarchy, you ultimately find the right subset "a scale after scale".

This doesn't work when the numbers are of the same order. But it's conceivable that there exist other special limiting situations in which you may find a method that "dramatically speeds up the search", and the patches of these special conditions where a "dramatic speed-up is possible" may actually cover the whole set \(\mathbb{R}^n\). There are many clever tricks – dividing into many sub-problems, sorting, but also Fourier transforms, auxiliary zeta-like functions, whatever – that other people may come up with to get much further in solving the problem.
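One standard example of such cleverness – still exponential, so it settles nothing about \(P\) versus \(NP\), but a genuine improvement over naive enumeration – is the textbook "meet in the middle" trick: split the set into two halves, enumerate the roughly \(2^{n/2}\) subset sums of each half, sort one of the lists, and search for sums that cancel. A minimal sketch:

from itertools import combinations
from bisect import bisect_left

def zero_subset_mitm(numbers):
    # Roughly 2^(n/2) work instead of 2^n: enumerate subset sums of each half,
    # sort the right-hand sums, and binary-search for a cancelling partner.
    n = len(numbers)
    left, right = numbers[:n // 2], numbers[n // 2:]

    def all_subsets(items):
        for size in range(len(items) + 1):
            yield from combinations(items, size)

    right_sums = sorted((sum(sub), sub) for sub in all_subsets(right))
    sums_only = [s for s, _ in right_sums]
    for sub_l in all_subsets(left):
        target = -sum(sub_l)
        i = bisect_left(sums_only, target)
        while i < len(sums_only) and sums_only[i] == target:
            sub_r = right_sums[i][1]
            if sub_l or sub_r:                       # skip the empty + empty combination
                return sub_l + sub_r
            i += 1
    return None

print(zero_subset_mitm([-2, -3, 15, 14, 7, -10]))    # (-2, -3, 15, -10)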

There's simply no proof that a polynomially fast algorithm finding the "subset with the sum equal to zero" doesn't exist. There's no proof that the algorithm exists, either; obviously, no one knows such an algorithm (which would be even more ambitious because the proof that an algorithm exists may be existential).

OK, Douglas and Denef were piling a speculation upon another speculation so they were surely \(P\neq NP\) believers. But the new "floors of speculations" (more physical speculations) they added on top of \(P\neq NP\) was even more controversial. Their basic opinion – with defeatist consequences – was simple. The search for the right "flux vacuum" in the landscape is analogous to the "zero subset problem". And there is an exponentially large number of ways to assign the fluxes – just like there is an exponentially large number of subsets. It seems that "checking the vacua one by one" is, similarly to the subset, the only possible strategy. So if the number of vacua is \(10^{500}\) or so, we will never find the right one.

Maybe. But maybe, this statement will turn out to be completely wrong, too.

Even if one believes that \(P\neq NP\) and there's no "polynomially fast" solution to the "zero subset problem" (and to analogous but harder problems in the search for the right fluxes and the correct string vacuum), it doesn't imply that physicists will never be able to find the right vacuum. The reason is that to guarantee \(P\neq NP\), it's enough that no algorithm can quickly solve the "zero subset problem" for every set of \(n\) real numbers in the whole \(\mathbb{R}^n\), including the "most generic" ones.

However, the analogous problem in the string landscape is in no way proven – or should be expected – to be "generic" in this sense. The vacua may always exhibit e.g. the hierarchy that I previously mentioned as a feature that may dramatically speed up the search. Or, more likely and more deeply, the numbers may have some hidden patterns and dual descriptions that allow us to calculate the "right choice of the bits" by a very different calculation than by trying an exponentially large number of options, one by one.

Denef and Douglas basically assumed that "all the numbers in this business are just hopeless, disordered mess", and there is an exponentially large number of options, so there's no chance to succeed because "brute force" is the only tool that can deal with generic mess and the required amount of brute force is too large. But this is an additional (and vaguely and emotionally formulated) assumption, not a fact. All of science is full of examples where this "it is hopeless" assumption totally fails – in fact, it fails almost whenever scientists succeed. ;-)

I am constantly reminded about the likely flaws of the "defeatist" attitude when I talk to the laymen, e.g. relatives, who are absolutely convinced that almost nothing can be scientifically determined or calculated. We play cards and I mention that the probability that from a nicely shuffled pack of cards, someone gets at least 8 wild cards (among 14) is 1 in half a million. I get immediately screamed at. How could this be calculated? It's insane, it's an infinitely difficult problem. They're clearly assuming something like "playing zillions of games is a necessary condition to estimate the answer, and even that isn't enough". And I am like What? I could calculate all these things when I was 9 years old, it's just damn basic combinatorics. ;-) The Pythagoriad contest 32 years ago was full of similar – and harder – problems. Clearly, with some computer assistance, I can crack much more complex problems than that. And in physics, we can in principle calculate almost everything we observe.

Now, Denef, Douglas, and other particle physicists aren't laymen in this sense but they can be victims of the same "I don't know what to do so it must be impossible" fallacy. I am often tempted to surrender to this fallacy but I try not to. The claim that "something is impossible to calculate" requires much more solid evidence than the observation that "I don't know where to start".

Let me give you a simple example from quantum field theory. Calculate the tree-level scattering amplitude in the \(\mathcal{N}=4\) gauge theory with a large number \(n\) of external gluons. Choose their helicity to be "maximally helicity violating" (or higher than that). The number of Feynman diagrams contributing to the scattering amplitude may be huge – exponentially or factorially growing with \(n\), at least if you allow loops – but we know that the scattering amplitude is extremely simple (or zero, in the super-extreme cases) because of other considerations. This claim supports a part of the twistor/amplituhedron minirevolution and may be proven by recursive identities and similar tools.
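For concreteness – this formula isn't quoted by Denef and Douglas, it's just the standard illustration – the Parke-Taylor formula packages the color-ordered tree-level MHV amplitude for \(n\) gluons, two of which carry negative helicity, into a single term (up to coupling constants and the momentum-conserving delta function):

\[ A^{\rm tree}_{\rm MHV}(1^-,2^-,3^+,\dots,n^+) \;\propto\; \frac{\langle 1\,2\rangle^4}{\langle 1\,2\rangle \langle 2\,3\rangle \cdots \langle n\,1\rangle} \]

The number of Feynman diagrams one would naively have to sum, by contrast, grows roughly factorially with \(n\).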

So it's clearly not true that the brute force incorporation of all the diagrams is the only way or the fastest way to find the solution. There exist hidden structures in the theory that allow you to find the answer more effectively by transformations, twistors, or – and this theme is really deep and omnipresent in recent 20 years of theoretical physics – through dual descriptions (which are equivalent but the equivalence looks really shocking at the beginning).

It's plausible that even though one string vacuum is correct, humans have no chance to find it. But it's also plausible that they will find it. They may localize the viable subsets by applying many filters, or they may calculate the relevant parameters or the cosmological constant using an approximate or multi-stage scheme that allows one to pick tiny viable subsets and dramatically simplify the problem. Or our vacuum can simply be a special one in the landscape. It may be the "simplest one" in some topological sense; and/or the early cosmology may produce the probability for the right vacuum that vastly exceeds the probability of others, and so on. The loopholes are uncountable. The fact that you may write down a "story" whose message is that "the problem is hard" doesn't mean much. A "story" that implicitly assumes that you shall listen to no other stories is just a matter of a propaganda.

But I want to mention one point about the Douglas-Denef research direction that I have never paid much attention to. It's largely related to the second paper in the "complexity and the landscape" series that they promised but never delivered, one that was more focused on the early cosmology. Did the paper exist? Was it shown to be wrong? What was in it? The abstract of the first, published paper, tells us something about the second, non-existent paper:
In a companion paper, we apply this point of view to the question of how early cosmology might select a vacuum.
This sentence actually sketches a totally wonderful question that I had never articulated cleanly enough. It's possible that the only problem that this question created was that it made Douglas and Denef realize something that invalidates pretty much the basic assumptions or philosophy of their whole research direction (the basic assumption is basically defeatism and frustration). What do I mean?

It seems to me that these two men have realized something potentially far-reaching and it's this simple idea:
It seems that the newborn Universe was a rather small quantum mechanical system with a limited number of degrees of freedom – perhaps something comparable to a 10-qubit quantum computer to be produced in 2020. But if that's so, all the problems that this "cosmic computer" was expected to solve must be reasonably simple!
This looks like a potentially powerful – and perhaps more far-reaching than the Douglas-Denef \(P\neq NP\) defeatist – realization. You know, if the very young quantum computer had the task to find the right values of fluxes to produce a vacuum with a tiny cosmological constant, a task that may be considered similar to the "zero subset problem" for a large value of \(n\) – then it seems that the relatively modest "cosmic computer" must have had a way to solve the problem because we're here, surrounded by the cosmological constant of a tiny magnitude.

This realization sounds great and encouraging. However, it may be largely if not completely invalidated by the anthropic reasoning. The anthropic reasoning postulates that all the Universes with the totally wrong values of the cosmological constant and other parameters also exist, just like our hospitable Universe. They only differ by the absence of The Reference Frame weblog in them. So no serious physics blogger and his serious readers can discuss the question why the early cosmic computer decided to pick one value of the cosmological constant or another. If you embrace this anthropic component of the "dynamics", it is no longer necessary that the "right vacuum was quickly picked/calculated by a tiny cosmic quantum computer". Instead, the selection could have been made "anthropically" by a comparison of quattuoroctogintillions of Universes much later, after they have evolved for billions of years and became large. (The number is comparable to the income I currently get in Adventure Capitalist when I move the Android clock to 1970 and back to 2037 LOL. I haven't played it today. It's surely a good game to teach you not to be terrified by large but still "just exponential" integers.)

Now, the fifth section of the Douglas-Denef Complexity I paper is dedicated to the more advanced "quantum computing" issues. This section was arguably a "demo" of the Complexity II paper that has never appeared. The most important guy who is cited is Scott Aaronson. The first four references in the Denef-Douglas paper point to papers by Aaronson – your desire to throw up only weakens once you realize that the authors in the list of references are alphabetically sorted. ;-)

I believe that I have only known the name of Scott Aaronson since late 2006 when this Gentleman entered the "String Wars" by boasting that he didn't give a damn about the truth about physics, and as all other corrupt šitheads, he will parrot the opinions of the highest bidder and seek to maximize profit from all sides of the "String Wars". He immediately became a role model of the Academic corruption in my eyes.

So Douglas and Denef actually talk about lots of the complexity classes that Aaronson's texts (and his book) were always full of, \(NP\), co-\(NP\), \(NP\) to the power of co-\(NP\), \(DP\), \(PH\), \(PSPACE\), and so on.

But it seems sort of striking how they ignored a realization that could have been the most innovative one in their research:
Our brains shouldn't give up the search for the right vacuum defining the Universe around us because the newborn Universe was, in some sense, smaller than our brains but it managed to find the right answer, too.
Again, I am neither optimistic nor pessimistic when it comes to all these questions, e.g. the question whether the humans will ever find the correct stringy vacuum. I think it's OK and actually unavoidable when individual researchers have their favorite answers. But when it comes to important assumptions that haven't been settled in one way or another, it's absolutely critical for the whole thinking mankind to cover both (or all) possibilities.

It's essential that in the absence of genuine evidence (or, better, proofs), people in various "opinion minorities" are not being hunted, eliminated, or forbidden. The anthropic principle has been an example of that. The more you believe in some kind of a (strong enough) anthropic principle, the more you become convinced that the amount of fundamental progress in the future will be close (and closer) to zero.

But high-energy fundamental physicists are arguably smart and tolerant enough that they realize that no real (non-circular, not just "plausible scenario") evidence exists that would back the defeatist scenarios and many other opinions. That's why the people, even though you could probably divide them to "anthropic" and "non-anthropic" camps rather sharply, have largely stopped bloody arguments. They know that if a clear proof of their view existed, they could have already turned it into a full-fledged (and probably quantitative) theory that almost everyone must be able to check. Such a complete theory answering these big questions doesn't exist in the literature so people realize that despite the differences in their guesses, all of them are comparably ignorant.

Almost everyone has realized that the papers that exist simply aren't solid evidence that may rationally settle the big questions. It's important that some people keep on trying to isolate the right vacuum because there exists a possibility that they will succeed – but they must be allowed to try. I think that the dangerous herd instinct and the suppression of ad hoc random minorities is much more brutal in many other fields of science (and especially in fields that are "not quite science").

by Luboš Motl (noreply@blogger.com) at April 07, 2016 01:58 PM

April 05, 2016

Symmetrybreaking - Fermilab/SLAC

Six weighty facts about gravity

Perplexed by gravity? Don’t let it get you down.

Gravity: we barely ever think about it, at least until we slip on ice or stumble on the stairs. To many ancient thinkers, gravity wasn’t even a force—it was just the natural tendency of objects to sink toward the center of Earth, while planets were subject to other, unrelated laws.

Of course, we now know that gravity does far more than make things fall down. It governs the motion of planets around the Sun, holds galaxies together and determines the structure of the universe itself. We also recognize that gravity is one of the four fundamental forces of nature, along with electromagnetism, the weak force and the strong force. 

The modern theory of gravity—Einstein’s general theory of relativity—is one of the most successful theories we have. At the same time, we still don’t know everything about gravity, including the exact way it fits in with the other fundamental forces. But here are six weighty facts we do know about gravity.

 

Illustration by Sandbox Studio, Chicago with Ana Kova

1. Gravity is by far the weakest force we know.

Gravity only attracts—there’s no negative version of the force to push things apart. And while gravity is powerful enough to hold galaxies together, it is so weak that you overcome it every day. If you pick up a book, you’re counteracting the force of gravity from all of Earth.

For comparison, the electric force between an electron and a proton inside an atom is roughly 10^39 (that’s a one with 39 zeroes after it) times stronger than the gravitational attraction between them. In fact, gravity is so weak, we don’t know exactly how weak it is.
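If you want to check that number, the ratio of the electric to the gravitational pull between an electron and a proton doesn't depend on the distance between them, so it's a one-line calculation (a sketch with rounded constants):

k_e = 8.988e9        # Coulomb constant, N m^2 / C^2
G   = 6.674e-11      # Newton's gravitational constant, N m^2 / kg^2
e   = 1.602e-19      # elementary charge, C
m_e = 9.109e-31      # electron mass, kg
m_p = 1.673e-27      # proton mass, kg

ratio = (k_e * e**2) / (G * m_e * m_p)   # the 1/r^2 factors cancel
print(f"{ratio:.1e}")                    # about 2.3e+39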

 

Illustration by Sandbox Studio, Chicago with Ana Kova

2. Gravity and weight are not the same thing.

Astronauts on the space station float, and sometimes we lazily say they are in zero gravity. But that’s not true. The force of gravity on an astronaut is about 90 percent of the force they would experience on Earth. However, astronauts are weightless, since weight is the force the ground (or a chair or a bed or whatever) exerts back on them on Earth.

Take a bathroom scale onto an elevator in a big fancy hotel and stand on it while riding up and down, ignoring any skeptical looks you might receive. Your weight fluctuates, and you feel the elevator accelerating and decelerating, yet the gravitational force is the same. In orbit, on the other hand, astronauts move along with the space station. There is nothing to push them against the side of the spaceship to make weight. Einstein turned this idea, along with his special theory of relativity, into general relativity.

 

Illustration by Sandbox Studio, Chicago with Ana Kova

3. Gravity makes waves that move at light speed.

General relativity predicts gravitational waves. If you have two stars or white dwarfs or black holes locked in mutual orbit, they slowly get closer as gravitational waves carry energy away. In fact, Earth also emits gravitational waves as it orbits the sun, but the energy loss is too tiny to notice.
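How tiny? The standard quadrupole formula for two bodies on a circular orbit gives the power carried away, and plugging in the Earth and the Sun (a rough sketch with rounded values) yields a couple of hundred watts, about the output of a few light bulbs, for the entire orbit:

# P = (32/5) * G^4 / c^5 * (m1*m2)^2 * (m1+m2) / r^5  for a circular orbit
G = 6.674e-11        # m^3 / (kg s^2)
c = 2.998e8          # m / s
m_sun   = 1.989e30   # kg
m_earth = 5.972e24   # kg
r = 1.496e11         # Earth-Sun distance, m

P = (32 / 5) * G**4 / c**5 * (m_sun * m_earth)**2 * (m_sun + m_earth) / r**5
print(f"{P:.0f} watts")                  # about 200 watts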

We’ve had indirect evidence for gravitational waves for 40 years, but the Laser Interferometer Gravitational-wave Observatory (LIGO) only confirmed the phenomenon this year. The detectors picked up a burst of gravitational waves produced by the collision of two black holes more than a billion light-years away.

One consequence of relativity is that nothing can travel faster than the speed of light in vacuum. That goes for gravity, too: If something drastic happened to the sun, the gravitational effect would reach us at the same time as the light from the event.

 

Illustration by Sandbox Studio, Chicago with Ana Kova

4. Explaining the microscopic behavior of gravity has thrown researchers for a loop.

The other three fundamental forces of nature are described by quantum theories at the smallest of scales— specifically, the Standard Model. However, we still don’t have a fully working quantum theory of gravity, though researchers are trying.

One avenue of research is called loop quantum gravity, which uses techniques from quantum physics to describe the structure of space-time. It proposes that space-time is particle-like on the tiniest scales, the same way matter is made of particles. Matter would be restricted to hopping from one point to another on a flexible, mesh-like structure. This allows loop quantum gravity to describe the effect of gravity on a scale far smaller than the nucleus of an atom.

A more famous approach is string theory, where particles—including gravitons—are considered to be vibrations of strings that are coiled up in dimensions too small for experiments to reach. Neither loop quantum gravity nor string theory, nor any other theory is currently able to provide testable details about the microscopic behavior of gravity.

 

 
Illustration by Sandbox Studio, Chicago with Ana Kova

5. Gravity might be carried by massless particles called gravitons.

In the Standard Model, particles interact with each other via other force-carrying particles. For example, the photon is the carrier of the electromagnetic force. The hypothetical particles for quantum gravity are gravitons, and we have some ideas of how they should work from general relativity. Like photons, gravitons are likely massless. If they had mass, experiments should have seen something—but it doesn’t rule out a ridiculously tiny mass.

 

 
Illustration by Sandbox Studio, Chicago with Ana Kova

6. Quantum gravity appears at the smallest length anything can be.

Gravity is very weak, but the closer together two objects are, the stronger it becomes. Ultimately, it reaches the strength of the other forces at a very tiny distance known as the Planck length, roughly 20 orders of magnitude smaller than the nucleus of an atom.
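
The Planck length itself comes from combining Newton’s gravitational constant, the reduced Planck constant, and the speed of light; here is a quick Python sketch with rounded values of the constants:

# Planck length from fundamental constants (rounded SI values).
import math
hbar = 1.055e-34      # reduced Planck constant
G = 6.674e-11         # gravitational constant
c = 2.998e8           # speed of light
l_planck = math.sqrt(hbar * G / c**3)
print(f"Planck length = {l_planck:.2e} m")   # about 1.6e-35 m; a nucleus is ~1e-15 m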

That’s where quantum gravity’s effects will be strong enough to measure, but it’s far too small for any experiment to probe. Some people have proposed theories that would let quantum gravity show up at close to the millimeter scale, but so far we haven’t seen those effects. Others have looked at creative ways to magnify quantum gravity effects, using vibrations in a large metal bar or collections of atoms kept at ultracold temperatures.

It seems that, from the smallest scale to the largest, gravity keeps attracting scientists’ attention. Perhaps that’ll be some solace the next time you take a tumble, when gravity grabs your attention too.

by Matthew R. Francis at April 05, 2016 02:02 PM

Lubos Motl - string vacua and pheno

Sonified raw Higgs data sound like Beethoven, Wagner
Update: the credibility of all the information below is impaired by the date, April 1st

LIGO’s gravitational waves sound like music of a sort. Particle physicists at CERN have finally, and carefully, transformed the Higgs-producing proton-proton collision data into musical patterns, and the result was somewhat surprising.



The spectrum almost exactly resembles Ludwig van Beethoven's Fateful Fifth Symphony (for Japanese readers, it is the Anthem of Asagohan Breakfasts; don't forget that if you need a Japanese loan, tomate natotata). You can see that the accuracy is overwhelming – famous CERN professors such as Rebeca Einstein were literally dancing to the tune when the data were sonified.




The finding may be said to be a fruit of the 2014 research project in which CERN physicists began to stick random things to the collider. They have also placed the Royal Albert Hall in the path of the LHC beam.

But the CERN press release makes it obvious that the CERN researchers aren't fully aware of all the far-reaching implications of their discovery. The second part of the spectrum actually isn't Beethoven's; it is Richard Wagner's Ride of the Valkyries.




You don’t need a physics PhD to figure out that this finding means there are at least two Higgs bosons. The appearance of the two musical compositions more or less proves supersymmetry. The LHC teams missed this lesson, which wasn’t hiding too well in the data, but they haven’t missed the big picture. They believe that when they look for music in the Higgs and gauge boson sector, the tunes will prove string theory.

LIGO has sonified the yet-to-be-announced GW151226 gravitational waves. What was hiding inside was Johann Strauss' Blue Danube Waltz. We're apparently entering a new epoch of science, the era of unification of mind, matter, and music (three more explanations of the term M-theory).

Some true visionaries, e.g. the Nobel prize winner Brian Josephson, have anticipated this unification of mind, matter, and music for years. Many of us were laughing at them and calling them psychiatrically ill crackpots, but now we are being shown that Josephson and his colleagues were sane all along and should finally be released from the psychiatric asylums.

The particular theoretical predictions suggesting the German tunes were originally calculated by folks at Fermilab. George W. Bush has famously corrected a 25% error resulting from a miscalculation of the tau neutrino branching ratio.



By the way, after self-driving cars, Google Holland has finally revealed a product they should have produced earlier: self-driving bikes. Meanwhile, Seznam.cz, Google's main Czech nation-wide competitor, has finally presented a simplified version of its maps service. I must admit that the map of Czechia looks simpler now, indeed.

Fidorka cookies will now beat the competition with their magnetic wrapping – you can attach the round chocolate bar to your friend or your car, etc. ;-)

Pastebin has switched to CERN's official font, Comic Sans.

YouTube allows you to watch existing videos in 360°, thanks to Snoopavision. Only Google has disabled its revolutionary Mic Drop feature in Gmail, a button that allowed users to end a conversation. The reason for the retirement of that great feature was just a few million lost jobs.

Czech carmaker Škoda Auto has finally introduced a brand new version of the Superb for the U.K. market. The new model differs from the 3-month-old one by an extra dog umbrella.

by Luboš Motl (noreply@blogger.com) at April 05, 2016 06:15 AM

Subscriptions

Feeds

[RSS 2.0 Feed] [Atom Feed]


Last updated:
May 04, 2016 01:21 PM
All times are UTC.

Suggest a blog:
planet@teilchen.at