Particle Physics Planet


February 13, 2016

Christian P. Robert - xi'an's og

Glen Coe Salomon SkyRace [sept. 16-18, 2016]

After pondering a (little) while about whether or not to join the Skyline Scotland races, I decided to register for the Ring of Steall skyrace! It is a 25km trail run starting from Kinlochleven and going over the five Munros of the Ring of Steall facing Ben Nevis, as well as down to Glen Nevis in the middle. Not as impressive as the Glen Coe skyrace the day after, with its 52km and 4600m of positive elevation gain!

I climbed this ridge in winter with Jérôme Accardo (twice) and Peter Green (once), on the most beautiful day I ever had when mountaineering in Scotland. The route should be easier in September with (hopefully!) no ice or snow… although one never knows!

Binnein Beag, Ring of Steall, February 2003, with J. Accardo and P. Green, the most exhilarating and sunniest Scottish day ever!


Filed under: Mountains, pictures, Running, Travel Tagged: Ben Nevis, Glen Nevis, Glencoe, Kinlochleven, munroes, Ring of Steal, Salomon, Scotland, skyrace

by xi'an at February 13, 2016 11:16 PM

Peter Coles - In the Dark

Lessons from LIGO

At the end of a very exciting week I had the pleasure last night of toasting LIGO and the future of gravitational wave astronomy with champagne at the RAS Club in London. Two members of the LIGO collaboration were there, Alberto Vecchio and Mike Cruise (both from Birmingham); Alberto had delivered a very nice talk earlier in the day summarising the LIGO discovery while Mike made a short speech at the club.

This morning I found this very nice video produced by California Institute of Technology (CalTech) which discusses the history of the LIGO experiment:

It has taken over 40 years of determination and hard work to get this far. You can see pictures of some of the protagonists from Thursday’s press conference, such as Kip Thorne, when they were much younger. I bet there were times during the past four decades when they must have doubted that they would ever get there, but they kept the faith and now can enjoy the well-deserved celebrations. They certainly will all be glad they stuck with gravitational waves now, and all must be mighty proud!

Mike Cruise made two points in his speech that I think are worth repeating here. One is that we think of the LIGO discovery as a triumph of physics. It is that, of course. But the LIGO consortium of over a thousand people comprises not only physicists, but also various kinds of engineers, designers, technicians and software specialists. Moreover, the membership of LIGO is international. It’s wonderful that people from all over the world can join forces, blend their skills and expertise, and achieve something remarkable. There’s a lesson right there for those who would seek to lead us into small-minded isolationism.

The other point was that the LIGO discovery provides a powerful testament for university research. LIGO was a high-risk experiment that took decades to yield a result. It’s impossible to imagine any commercial company undertaking such an endeavour, so this could only have happened in an institution (or, more correctly, a network of institutions) committed to “blue skies” science. This is research done for its own sake, not to create a short-term profit but to enrich our understanding of the Universe. Asking  profound questions and trying to answer them is one of the things that makes us human. It’s a pity we are so obsessed with wealth and property that we need to be reminded of this, but clearly we do.

The current system of Research Assessment in the UK requires university research to generate “impact” outside the world of academia in a relatively short timescale. That pressure is completely at odds with experiments like LIGO. Who would start an experiment now that would take 40 years to deliver?  I’ve said it time and time again to my bosses at the University of Sussex that if you’re serious about supporting physics you have to play a long game because it requires substantial initial investment and generates results only very slowly.  I worry what future lies in store for physics if the fixation on market-driven research continues much longer.

Finally, I couldn’t resist making a comment about another modern fixation – bibliometrics. The LIGO discovery paper in Physical Review Letters has 1,004 authors. By any standard this is an extraordinarily significant article, but because it has over a thousand authors it stands to be entirely excluded by the Times Higher when they compile the next World University Rankings.  Whatever the science community or the general public thinks about the discovery of gravitational waves, the bean-counters deem it worthless. We need to take a stand against this sort of nonsense.


by telescoper at February 13, 2016 02:45 PM

John Baez - Azimuth

The Quagga

The quagga was a subspecies of zebra found only in South Africa’s Western Cape region. After the Dutch invaded, they hunted the quagga to extinction. While some were taken to zoos in Europe, breeding programs failed. The last wild quagga died in 1878, and the very last quagga died in an Amsterdam zoo in 1883.

Only one was ever photographed—the mare shown above, in London. Only 23 stuffed and mounted quagga specimens exist. There was one more, but it was destroyed in Königsberg, Germany, during World War II. There is also a mounted head and neck, a foot, 7 complete skeletons, and samples of various tissues.

The quagga was the first extinct animal to have its DNA analyzed. It used to be thought that the quagga was a distinct species from the zebra. After some argument, a genetic study published in 2005 convinced most people that the quagga is a subspecies of the zebra. It showed that the quagga diverged from the other zebra subspecies only between 120,000 and 290,000 years ago, during the Pleistocene.

In 1987, a natural historian named Reinhold Rau started the Quagga Project. His goal was to breed zebras into quaggas by selecting for quagga-like traits, most notably the lack of stripes on the back half of the body.

The founding population consisted of 19 zebras from Namibia and South Africa, chosen because they had reduced striping on the rear body and legs. The first foal was born in 1988.

By now, members of the Quagga Project believe they have recreated the quagga. Here they are:

Rau-quagga (zebra subspecies)

The new quaggas are called ‘rau–quaggas’ to distinguish them from the original ones. Do they look the same as the originals? It’s hard for me to decide. Old paintings show quite a bit of variability:

This is an 1804 illustration by Samuel Daniell, which served as the basis of a claimed subspecies of quagga, Equus quagga danielli. Perhaps they just have variable coloring.

Why try to resurrect the quagga? Rau is no longer alive, but Eric Harley, a retired professor of chemical pathology at the University of Cape Town, had this to say:

It’s an attempt to try and repair ecological damage that was done a long time ago in some sort of small way. It is also to try and get a representation back of a charismatic animal that used to live in South Africa.

We don’t do genetic engineering, we aren’t cloning, we aren’t doing any particularly clever sort of embryo transfers—it is a very simple project of selective breeding. If it had been a different species the whole project would have been unjustifiable.

The current Quagga Project chairman, Mike Gregor, has this to say:

I think there is controversy with all programmes like this. There is no way that all scientists are going to agree that this is the right way to go. We are a bunch of enthusiastic people trying to do something to replace something that we messed up many years ago.

What we’re not doing is selecting some fancy funny colour variety of zebra, as is taking place in other areas, where funny mutations have taken place with strange colouring which may look amusing but is rather frowned upon in conservation circles.

What we are trying to do is get sufficient animals—ideally get a herd of up to 50 full-blown rau-quaggas in one locality, breeding together, and then we would have a herd we could say at the very least represents the original quagga.

We obviously want to keep them separate from other populations of plains zebra otherwise we simply mix them up again and lose the characteristic appearance.

The quotes are from here:

• Lawrence Bartlett, South Africa revives ‘extinct’ zebra subspecies, Phys.org, 12 February 2016.

This project is an example of ‘resurrection biology’, or ‘de-extinction’:

• Wikipedia, De-extinction.

Needless to say, it’s a controversial idea.


by John Baez at February 13, 2016 03:13 AM

February 12, 2016

Christian P. Robert - xi'an's og

sweet red bean paste [あん]

I am just back from watching this Japanese movie by Naomi Kawase that came out last year and won the Un Certain Regard award at the Cannes festival. It is indeed a movie with a most unusual “regard” and as such did not convince many critics. For instance, one Guardian critic summed up his view with the verdict that this “preposterous and overly sentimental opener to this year’s Un Certain Regard serves up major disappointment”. (As a counterpoint, the fine review in Les Cahiers du Cinéma catches the very motives I saw in the movie.) And of course one can watch the movie as a grossly stereotypical and unreservedly sentimental lemon if one clings to realism. For me, who first and mistakenly went to see it as an ode to Japanese food (in the same vein as Tampopo!), it unrolled as a wonderful tale that gained deeper and deeper consistency, just like the red bean jam thickening over the fire. There is clearly nothing realistic in the three characters and in the way they behave, from the unnaturally cheerful and wise old woman Tokue to the overly mature high-school student looking after the introspective cook. That no one seemed aware of a sanatorium of lepers at the centre of town, that the customers move from ecstatic about the taste of the bean jam made by Tokue to scared by her (former) leprosy, and that the awful owner of the shop where Sentaro cooks can so obviously pressure him: none of this works for a realistic story, but it fits perfectly the philosophical tale that An is and the reflection it raises. While I am always bemused by the depth and wholeness in the preparation of Japanese food, the creation of a brilliant red bean jam is itself tangential to the tale (and I do not feel like seeking dorayaki when exiting the cinema), which is more about discovering one’s inner core and seeking harmony through one’s realisations. (I know this definitely sounds like cheap philosophy, but I still feel somewhat and temporarily enlightened from following the revolutions of those three characters towards higher spheres over the past two hours!)


Filed under: Books, Kids, pictures, Travel Tagged: あん, Cannes film festival, Japanese cuisine, Japanese translation, leprosy, Naomi Kawase, red beans, Tokyo

by xi'an at February 12, 2016 11:16 PM

Symmetrybreaking - Fermilab/SLAC

Daya Bay discovers a mismatch

The latest measurements from the Daya Bay neutrino experiment in China don’t align with predictions from nuclear theory.

A new result from the Daya Bay experiment has revealed a possible flaw in predictions from nuclear theory. 

“Nobody expected that from neutrino physics,” says Anna Hayes, a nuclear theorist at Los Alamos National Laboratory. “They uncovered something that nuclear physics was unaware of for 40 years.”

Neutrinos are produced in a variety of processes, including the explosion of stars and nuclear fusion in the sun. Closer to home, they’re created in nuclear reactors. The Daya Bay experiment studies neutrinos—specifically, electron antineutrinos—streaming from a set of nuclear reactors located about 30 miles northeast of Hong Kong.

In a paper published this week in Physical Review Letters, Daya Bay scientists provided the most precise measurement ever of the neutrino spectrum—that is, the number of neutrinos produced at different energies—at nuclear reactors. The experiment also precisely measured the flux, the total number of neutrinos emitted.

Neither of these measurements agreed with predictions from established models, causing scientists to scramble for answers from both theory and experiment.

Counting neutrinos

To make the record-breaking measurement, Daya Bay scientists amassed the world’s largest sample of reactor antineutrinos—more than 300,000 collected over the course of 217 days. They used six detectors, each filled with 20 tons of gadolinium-doped liquid scintillator. They were able to measure the particles’ energy to better than 1 percent precision. The experiment is supported by several institutions around the world, including the US Department of Energy and the National Science Foundation.

The Daya Bay scientists found that, overall, the reactors they study produced 6 percent fewer antineutrinos than predicted. This is consistent with past measurements by other experiments. The discrepancy has been called the “reactor antineutrino anomaly.”

This isn’t the first time neutrinos have gone missing. During the Davis experiment, which ran in the 1960s in Homestake Mine in South Dakota, physicists found that the majority of the solar neutrinos they were looking for—fully two-thirds of them—simply weren’t there.

With some help from the SNO experiment in Canada, physicists later discovered the problem: Neutrinos come in three types, and the detector at Homestake could see only one of them. A large fraction of the solar neutrinos they expected to see were changing into the other two types as they traveled to the Earth. The Super-Kamiokande experiment in Japan later discovered oscillations in atmospheric neutrinos as well.

Scientists have wondered whether something similar could explain Daya Bay’s missing 6 percent.

Theorists have predicted the existence of a fourth type of neutrino called a sterile neutrino, which might interact with other matter only through gravity. It could be that the missing neutrinos at Daya Bay are actually transforming away into undetectable sterile neutrinos.
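To make the oscillation logic concrete, disappearance into a sterile state would be modeled with the standard two-flavor survival probability, P = 1 − sin²2θ · sin²(1.267 Δm² L/E). Here is a minimal R sketch (the parameter values below are illustrative placeholders, not Daya Bay measurements):

# Two-flavor survival probability, with dm2 in eV^2, L in km and E in GeV
survival <- function(L, E, dm2, sin2_2theta) {
  1 - sin2_2theta * sin(1.267 * dm2 * L / E)^2
}

# Illustrative placeholders: a ~1 eV^2 sterile state, a 500 m baseline,
# and a typical 4 MeV reactor antineutrino
survival(L = 0.5, E = 0.004, dm2 = 1, sin2_2theta = 0.1)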

Hitting a bump

However, the other half of today’s Daya Bay result could throw cold water on that idea.

In combining their two measurements—the flux and the spectra—Daya Bay scientists found an unexpected bump, an excess of the particles at around 5 million electronvolts. This represents a deviation from theoretical predictions of about 10 percent. 

“Experimentally, this is a tour de force, to show that this bump is not an artifact of their detectors,” says theorist Alexander Friedland of SLAC National Accelerator Laboratory. But, he says, “the need to invoke sterile neutrinos is now in question.”

That’s because the large discrepancy suggests a different story: The neutrinos might not be missing after all; the predictions from nuclear theory could just be incomplete.

“These results do not rule out the sterile neutrino possibility,” Friedland says. “But the foundation on which the original sterile neutrino claims were based has been shaken.”

As Daya Bay co-spokesperson Kam-Biu Luk of the University of California at Berkeley and Lawrence Berkeley National Laboratory said in a press release, “this unexpected disagreement between our observation and predictions strongly suggested that the current calculations would need some refinement.”

What comes next

To investigate further, some scientists have proposed building new detectors near smaller reactors with more refined fuel sources—to cut out ambiguity as to which decay processes are producing the neutrinos.

Others have proposed placing detectors closer to the neutrino source—to avoid giving the particles the chance to escape by oscillating into different types. The Short-Baseline Neutrino Program, currently under construction at Fermi National Accelerator Laboratory, will do just that.

Whatever the cause of the mismatches between experiment and theory, these latest measurements will certainly be useful in interpreting results from future experiments, said Daya Bay co-spokesperson Jun Cao, of the Institute of High Energy Physics in China, in the press release.

“These improved measurements will be essential for next-generation reactor neutrino experiments.”

by Kathryn Jepsen at February 12, 2016 09:37 PM

Jester - Resonaances

LIGO: what's in it for us?
I mean us theoretical particle physicists. With this constraint, the short answer is not much.  Of course, every human being must experience shock and awe when picturing the phenomenon observed by LIGO. Two black holes spiraling into each other and merging in a cataclysmic event which releases energy equivalent to 3 solar masses within a fraction of a second.... Besides, one can only  admire the ingenuity that allows us to detect here on Earth a disturbance of the gravitational field created in a galaxy 1.3 billion light years away. In more practical terms, the LIGO announcement starts the era of gravitational wave astronomy and thus opens a new window on the universe. In particular, LIGO's discovery is a first ever observation of a black hole binary, and we should soon learn more about the ubiquity of astrophysical systems containing one or more black holes. Furthermore, it is possible that we will discover completely new objects whose existence we don't even suspect. Still, all of the above is what I fondly call dirty astrophysics on this blog,  and it does not touch upon any fundamental issues. What are the prospects for learning something new about those?

In the long run, I think we can be cautiously optimistic. While we haven't learned anything unexpected from today's LIGO announcement, progress in gravitational wave astronomy should eventually teach us something about fundamental physics. First of all, advances in astronomy, inevitably brought by this new experimental technique,  will allow us to better measure the basic parameters of the universe. This in turn will provide us information about aspects of fundamental physics that can affect the entire universe, such as e.g. the dark energy. Moreover, by observing phenomena occurring in strong gravitational fields and of which signals propagate over large distances, we can place constraints on modifications of Einstein gravity such as the graviton mass (on the downside,  often there is no consistent alternative theory that can be constrained).

Closer to our hearts, one potential source of gravitational waves is a strongly first-order phase transition. Such an event may have occurred as the early universe was cooling down. Below a certain critical temperature, a symmetric phase of the high-energy theory may no longer be energetically preferred, and the universe enters a new phase where the symmetry is broken. If the transition is violent (strongly first-order in the physics jargon), bubbles of the new phase emerge, expand, and collide, until they fill the entire visible universe. Such a dramatic event produces gravitational waves with an amplitude that may be observable by future experiments. Two examples of phase transitions we suspect to have occurred are the QCD phase transition around T=100 MeV, and the electroweak phase transition around T=100 GeV. The Standard Model predicts that neither is first order; however, new physics beyond the Standard Model may change that conclusion. Many examples of such new physics have been proposed to modify the electroweak phase transition, for example models with additional Higgs scalars, or with warped extra dimensions. Moreover, the phase transition could be related to symmetry breaking in a hidden sector that is very weakly or not at all coupled (except via gravity) to ordinary matter. Therefore, by observing or putting limits on phase transitions in the early universe we will obtain complementary information about the fundamental theory at high energies.

Gravitational waves from phase transitions are typically predicted to peak at frequencies much smaller than the ones probed by LIGO (35 to 250 Hz). The next generation of gravitational telescopes will be better equipped to detect such a signal thanks to a much larger arm length (see the figure borrowed from here). This concerns especially the eLISA space interferometer, which will probe millihertz frequencies. Even smaller frequencies can be probed by pulsar timing arrays, which search for signals of gravitational waves using stable pulsars as an antenna. The worry is that the interesting signal may be obscured by astrophysical backgrounds, such as (oh horror) gravitational wave emission from white dwarf binaries. Another interesting beacon for future experiments is to detect gravitational waves from inflation (almost discovered 2 years ago via another method by the BICEP collaboration). However, given the constraints from the CMB observations, the inflation signal may well be too weak even for future giant space interferometers like DECIGO or BBO.
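For a sense of the numbers, here is the standard redshift estimate for today's peak frequency of such a signal, written as a short R sketch (the 1.65×10⁻⁵ Hz prefactor is the textbook value; the parameter choices below are illustrative, not predictions of any specific model):

# Peak frequency today of a phase-transition signal, redshifted from production:
# f_0 ~ 1.65e-5 Hz * (f_star/H_star) * (T_star/100 GeV) * (g_star/100)^(1/6)
peak_today <- function(f_over_H, T_GeV, g_star = 100) {
  1.65e-5 * f_over_H * (T_GeV / 100) * (g_star / 100)^(1/6)
}

# An electroweak-scale transition (T ~ 100 GeV) with bubbles roughly ten times
# smaller than the horizon (f_star/H_star ~ 10):
peak_today(f_over_H = 10, T_GeV = 100)   # ~2e-4 Hz, in the eLISA millihertz band

This is exactly why a T=100 GeV transition points at space interferometers rather than at LIGO.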

To summarize, the importance of the LIGO discovery for the field  of particle physics is mostly the boost it gives to  further experimental efforts in this direction.  Hopefully, the eLISA project will now take off, and other ideas will emerge. Once gravitational wave experiments become sensitive to sub-Hertz frequencies, they will start probing the parameter space of interesting theories beyond the Standard Model.  

Thanks YouTube! It's the first time I've seen a webcast of a highly anticipated event run smoothly in spite of 100,000 viewers. This can be contrasted with Physical Review Letters, who struggled to make one damn pdf file accessible ;) 

by Jester (noreply@blogger.com) at February 12, 2016 09:04 PM

astrobites - astro-ph reader's digest

Creating a Cosmic Inventory of Rocky Planets

Title: Terrestrial planets across space and time
Authors: Erik Zackrisson, Per Calissendorff, Juan González, Andrew Benson, Anders Johansen, and Markus Janson
First Author’s Affiliation: Uppsala University, Sweden
Paper status: Submitted for review

When I was born in 1989, there were just nine planets known throughout the entire universe (poor Pluto).  Now there are between 1600 and 2100 confirmed exoplanets (depending on whom you ask).  All these planets are located in a small region of our home galaxy (the Milky Way). However, there are billions of other galaxies, so you might ask just how many planets we should expect throughout the entire universe.  This paper attempts to create a “cosmic inventory” of terrestrial (rocky) planets by combining the fields of cosmology, galaxy formation, and exoplanet science to estimate the number of planets around FGKM stars.  (To distinguish stars of different masses, astronomers give them different letters.  The mnemonic, from most massive stars to least massive, is “Oh Be A Fine Girl/Guy, Kiss Me”.  For reference, the Sun is a G-dwarf star.)

The Recipe

The most direct ingredient that goes into the recipe for the cosmic inventory of exoplanets is the material out of which planets form.  Elements heavier than hydrogen and helium are the most important because they can create the solid material to build up planets. These elements are called “metals” (yes, even elements like oxygen and neon). The amount of metals in an object is called its metallicity.  Planet-forming regions around newborn stars (called protoplanetary disks) are likely to form more large, gas giant planets if the disk has a high metallicity. This is because there’s more solid material around to clump together early when there is still a lot of gas around to capture.  These massive planets get jostled around by each other during the early stages of planet formation and might toss out some of the small, rocky planets that might also form in the system.  On the other hand, if you have too little metal in your protoplanetary disk, you might not have enough solid material to even make a terrestrial planet in the first place.  This leads to a Goldilocks-style “not too much, not too little” scenario for metallicity.  All of this metallicity-dependence is added to our recent knowledge of how many and what kind of planets form around what kind of stars, which is based on the ~2000 planets astronomers have already discovered.

So now that we have a model for how to make planets around stars, we have to create the stars and the galaxies they live in.  The authors use what are called semi-analytic models to do this, which are basically simple galaxy evolution models tuned to fit our observations of galaxies both now and in the past.  (As you look farther away, you are also looking farther back in time, so we know what galaxies looked like when they were much younger.)  These semi-analytic models are a quick way to model a representative sample volume of the universe without spending years running a simulation.  These models show galaxies growing and evolving, and they can track the births, deaths, and properties of stars, such as their masses, ages, and metallicities.  The authors then tack their planet formation model on top of it all to calculate the number and types of planets that orbit all these different types of stars in all of the galaxies.

The Cosmic Inventory

The average age of terrestrial planets in the local universe is calculated to be 8.1 Gyr.  The oldest planets are found in the largest galaxies: since the largest galaxies formed earlier, their planets were also able to form earlier.  Some of the more massive stars with terrestrial planets have already died out, taking their planets with them as they went.  However, since stars <0.95 times the mass of the Sun have lifetimes longer than the current age of the universe, this does not affect the majority of stars.  In all, about 15% of all terrestrial planets have been lost due to the deaths of their host stars.

The average age of a planet orbiting FGKM stars (black) and FGK stars (blue) as a function of the (logarithmic) mass of its host galaxy. More massive galaxies host older planets, and for smaller galaxies, the planets orbiting M-dwarfs are typically a bit younger than the planets orbiting FGK stars.

The authors’ model also makes a strong prediction that about 1/3 of all terrestrial planets should be orbiting stars that are either more metal-rich or (more likely) more metal-poor than any planet-hosting star we currently know of.  Another noteworthy estimate is that a Milky Way-like galaxy should have about 100 billion terrestrial planets around M-dwarfs, but only 2 billion around more Sun-like FGK stars.  This is in line with several other papers claiming that the number of planets orbiting M-dwarfs vastly outnumbers those orbiting Sun-like stars.

So what is the total cosmic inventory of terrestrial planets for the entire universe?  The authors estimate that the entire observable universe contains 8 × 10^20 terrestrial planets, 98% of which orbit M-dwarfs. This is why many research groups all around the globe have shifted their attention to M-dwarfs in the search for more rocky planets.
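As a quick consistency check in R (the per-galaxy counts are the ones quoted above; the implied number of Milky-Way-equivalent galaxies is a derived quantity, not a figure from the paper):

# Per-galaxy counts quoted above, for a Milky-Way-like galaxy
planets_Mdwarf <- 1e11   # terrestrial planets around M-dwarfs
planets_FGK    <- 2e9    # terrestrial planets around FGK stars
total_universe <- 8e20   # the paper's estimate for the observable universe

# Implied number of Milky-Way-equivalent galaxies
total_universe / (planets_Mdwarf + planets_FGK)   # ~8e9

# Fraction of planets orbiting M-dwarfs within one galaxy
planets_Mdwarf / (planets_Mdwarf + planets_FGK)   # ~0.98, matching the quoted 98%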

by Joseph Schmitt at February 12, 2016 06:30 PM

Peter Coles - In the Dark

LIGO at the Royal Astronomical Society


My monthly trip to London for the Royal Astronomical Society Meeting allowed me not only to get out of the office for the day but also to attend a nice talk by Alberto Vecchio about yesterday’s amazing results.

I hear that we will be having champagne at the club later on to celebrate. In the meantime here’s a little Haiku I wrote on the theme:

Two black holes collide
A billion years ago.
LIGO feels the strain.


by telescoper at February 12, 2016 05:24 PM

ATLAS Experiment

The hills are alive, with the sound of gravitational waves
Gravitational Wave Discovery

Presentation by Barry C. Barish on 11 Feb 2016 in the CERN Main Auditorium on LIGO and the discovery of gravitational waves caused by the merging of two black holes. IMAGE: M. Brice, © 2016 CERN.

It’s 16:00 CET at CERN and I’m sitting in the CERN Main Auditorium. The room is buzzing with excitement, not unlike the day in 2012 when the Higgs discovery was announced in this very room. But today the announcement is not from CERN, but from the LIGO experiment, which is spread across two continents. Many expect the announcement to be about a discovery of gravitational waves, as predicted by Einstein in 1916, but which have remained elusive until today…

LIGO uses interferometry to detect gravitational waves as they pass through the Earth. Where do gravitational waves strong enough to be detected on Earth come from? Few objects in the Universe are massive enough, but two black holes spiralling towards each other and eventually merging could give just such a strong and characteristic signal. At 16:29 CET, this is exactly what LIGO announced had been observed, followed by extended applause.

Black Holes Merging

Simulation of two massive black holes merging, based on data collected from the LIGO collaboration on 14 Sep. 2015. IMAGE: LIGO Collaboration © 2016 SXS

Scientists at CERN are excited about this discovery. Not only because it has been a much sought-after treasure – with searches starting over 50 years ago with Joseph Weber – but also because it could have a direct link to some of the searches we are performing with the ATLAS detector at the LHC. Gravitational waves are described by the general theory of relativity as proposed by Einstein, and involve massive objects (both stationary and fast-moving) at very large (cosmological) distance scales.

At CERN we are interested in a coherent and testable theory for gravity at the very small scale, so-called quantum gravity. The LHC is used to accelerate protons up to velocities very close to the speed of light, colliding them together at enormous energies within detectors placed around its 27 km circumference. Detectors such as ATLAS and CMS act as giant digital cameras and try to work out what happened during each interaction. It is in the data collected by these experiments that some theories suggest a theoretical particle called the graviton could be found. The gravitational waves mentioned in yesterday’s announcement should actually be related to a massless version of the graviton.

Z' Decay

Simulation of a Z’ boson decaying to two muons in the ATLAS Detector. IMAGE: ATLAS Collaboration © 2016 CERN

The experiments at the LHC are not sensitive to this kind of graviton or the gravitational waves detected by LIGO. However, in quantum theories of gravity massive states of the graviton could also exist, being created within the ATLAS detector and subsequently decaying into pairs of particles such as electrons, muons or photons. All of these signatures of a graviton and more have been searched for using the ATLAS detector ([1], [2], [3]), and the observation of such a particle with the statistical precision required to claim a discovery in our field (5 sigma) would be a direct observation of quantum gravity. It is interesting to note that it is at a statistical significance of 5.1 sigma that LIGO claimed its discovery yesterday.

But gravity is a peculiar force, unlike any other we know. For one, it is extremely weak – so weak that it loses a tug of war over a metal nail, with the gravitational pull of the entire Earth on one side and a small hand-held magnet (using the electromagnetic force) on the other. It is when you realise how weak gravity is that you begin to comprehend how titanic the spiraling and merging of those two black holes must have been to allow them to be detected on Earth, over a billion light years away.
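To put a number on “extremely weak”, here is a quick order-of-magnitude check in R, using textbook constants (an illustration of mine, not a figure from the post): the gravitational attraction between two protons versus their electrostatic repulsion.

# Ratio of gravitational to electrostatic force between two protons
G   <- 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
k   <- 8.988e9     # Coulomb constant, N m^2 C^-2
m_p <- 1.673e-27   # proton mass, kg
e   <- 1.602e-19   # elementary charge, C

# Both forces fall off as 1/r^2, so the separation cancels in the ratio
(G * m_p^2) / (k * e^2)   # ~8e-37: gravity loses by some 36 orders of magnitude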

It is also for this reason that most of the theories of quantum gravity involve extra spatial dimensions. It is suggested that within these extra dimensions, gravity has a similar strength to the other forces of nature, and it is just in our three known spatial dimensions that we feel its diluted strength. In the popular extra dimensional theories, the size of these other dimensions could either be small, with a warped geometry, or very large (micrometres!!!), with a flat geometry [60 second guide to extra dimensions]. It is precisely because we explore such high energy scales (and thus small distance scales!) with the ATLAS detector, that we could probe these extra dimensions (if they exist) and potentially observe a massive graviton. However, other theories suggest that gravity might not be like a normal force at all, that it is simply due to space-time geometry. This would be unlike the other forces of nature that we know of, which have particles that communicate the strength of the force during interactions (in the theory of quantum gravity, this would be the graviton).

So the announcement yesterday of gravitational waves being discovered is exciting, because it could help point us in the right direction when looking for a massive version of the graviton (if extra spatial dimensions exist) here at the ATLAS experiment. Do these waves exhibit a behaviour that could shed light on quantum gravity? Perhaps using wave-particle duality – a phenomenon that already describes the duplicitous nature of light as both particles (photons) and waves (electromagnetic spectrum)? Conversely, could the details of this discovery put a dent in all of our current theories of quantum gravity and require theorists to go back to the drawing board?

With the startup of the LHC again in March, collecting up to 10 times more data this year than we did last year, I might be sitting in that room again not too long from now, with a discovery announcement of our own.


Daniel Hayden is a postdoctoral researcher for Michigan State University, using the ATLAS detector to search for exotic new particles such as the Z’ or graviton, decaying to two electrons or muons. Born in the UK, he currently lives in Geneva, Switzerland, after obtaining his PhD in Particle Physics from Royal Holloway, University of London. In his spare time Dan loves going to the cinema, hanging out with friends, and talking… a lot.

by Daniel Hayden at February 12, 2016 02:11 PM

Emily Lakdawalla - The Planetary Society Blog

“Upside Down & Inside Out” - OK Go Makes Art at Zero-G
OK Go just dropped their most spectacular - and daring - music video yet, “Upside Down & Inside Out.” Filmed in microgravity over many parabolic flights in Russia, “Upside Down & Inside Out” sets a new precedent for what’s possible as artists consider our future in space.

February 12, 2016 02:00 PM

Christian P. Robert - xi'an's og

new version of abcrf
fig-tree near Brisbane, Australia, Aug. 18, 2012

Version 1.1 of our R package abcrf is now available on CRAN. Improvements over the earlier version are numerous and substantial. In particular, calculations of the random forests have been parallelised and, for machines with multiple cores, the computing gain can be enormous. (The package goes along with the random forest model choice paper published in Bioinformatics.)
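For the curious, the generic pattern behind this kind of parallelisation (grow sub-forests on separate cores, then merge them) can be sketched in a few lines of R with the randomForest and parallel packages. This is only an illustration of the idea on a toy dataset, not the actual abcrf implementation:

# Grow ntree/ncores trees on each core, then merge the sub-forests
# (generic pattern only; abcrf's internals differ)
library(randomForest)
library(parallel)

ncores      <- 4
ntree_total <- 500

forests <- mclapply(seq_len(ncores), function(i) {
  randomForest(Species ~ ., data = iris, ntree = ntree_total %/% ncores)
}, mc.cores = ncores)   # mc.cores > 1 requires a Unix-like system

rf <- do.call(combine, forests)   # a single ensemble of 500 trees
rf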

Filed under: R, Statistics, University life Tagged: ABC model choice, bioinformatics, CRAN, parallelisation, R, R package, random forests

by xi'an at February 12, 2016 01:12 PM

Tommaso Dorigo - Scientificblogging

My Thoughts On The LIGO-VIRGO Result
I believe that the recent discovery of gravitational waves has been described in enough detail by reporters and bloggers around that my own contribution here would be pointless. Of course I am informed of the facts and reasonably knowledgeable about the topic, and my field of research is not too distant from the one that produced the discovery, so I could in principle offer something different from what you can find by just googling around. But I have a better idea.
What I think you cannot read elsewhere are the free thoughts I had as I listened to the announcement by the VIRGO collaboration. So maybe this is a different kind of contribution, and of some interest to you.


by Tommaso Dorigo at February 12, 2016 12:49 PM

Emily Lakdawalla - The Planetary Society Blog

The Sea That Has Become Known
Artist Porter McDonald describes his latest painting, Mare Cognitum, which features NASA's Ranger 7 spacecraft.

February 12, 2016 12:00 PM

CERN Bulletin

Congratulations on the direct detection of gravitational waves

This week saw the announcement of an extraordinary physics result: the first direct detection of gravitational waves by the LIGO Scientific Collaboration, which includes the GEO team, and the Virgo Collaboration, using the twin Laser Interferometer Gravitational-wave Observatory (LIGO) detectors located in Livingston, Louisiana, and Hanford, Washington, USA.


Albert Einstein predicted gravitational waves in a paper published 100 years ago in 1916. They are a natural consequence of the theory of general relativity, which describes the workings of gravity and was published a few months earlier. Until now, they have remained elusive.

Gravitational waves are tiny ripples in space-time produced by violent gravitational phenomena. Because the fractional change in the space-time geometry can be at the level of 10⁻²¹ or smaller, extremely sophisticated, high-sensitivity instruments are needed to detect them. Recently, the Advanced LIGO detector increased its sensitivity by almost a factor of four, which was crucial for the reported observation.

The by-now familiar GW150914 signal, recorded on 14 September 2015, is attributed to the merger of two massive black holes - 30-40 solar masses each - occurring at a distance of about 400 Megaparsecs. Since gravitational waves travel at the speed of light, this catastrophic event happened more than 1 billion years ago. This observation represents another crucial milestone in the experimental verification of general relativity, and opens the door to a new phase of exploration of the universe: gravitational wave astronomy.

The importance of this result for physics is huge. Much of what we take for granted in modern society rests on two pillars, theoretical frameworks that emerged at around the same time. General relativity is one. Quantum mechanics is the other. GPS positioning systems would not work without general relativity, while much of the electronics industry is built on quantum mechanics. Yet the two theories seem to be incompatible.

Four years ago at CERN, we dotted the ‘i’s and crossed the ‘t’s of the Standard Model of particle physics with the discovery of the Higgs boson, the messenger of the Brout-Englert-Higgs mechanism. This was the last missing ingredient of the Standard Model: the quantum theory that describes fundamental particles and all of their interactions, with the exception of gravity. This week’s discovery paves the way to significant improvements in our understanding of gravity through future measurements of gravitational waves. Results such as these two come along only very rarely, and it is a privilege to be able to see them. They also spur us on to the greatest challenge of our time in physics: the reconciliation of general relativity and quantum mechanics.

Congratulations to the LIGO and Virgo Collaborations for this extraordinary contribution to fundamental physics!

Fabiola Gianotti


To learn more about this new discovery, read the article "The discovery uncovered" available in this issue.

February 12, 2016 11:02 AM

CERN Bulletin

Annex A1: cornerstone of the five-yearly review
As a reminder, the purpose of the five-yearly review is to review the financial and social conditions of all CERN personnel, whether employed (MPE) or associated (MPA)! In December 2015, the CERN Council approved the package proposed by the Management. Early this year, as the final act of the 2015 five-yearly review, the CERN Council may decide, if necessary and appropriate, to review the procedures defined in Annex A1 and applicable to future five-yearly reviews. At the meeting of TREF (TRipartite Employment Forum) in early March 2016, discussions will take place between all stakeholders (representatives of Member States, Management and the Staff Association), and Council will take a decision in June 2016, on the basis of the Management's recommendations.

What does Annex A1 say and where can I find it?

Annex A1 (Article S V 1.02) is a part of the Staff Rules and Regulations. This annex defines:

• the five-yearly review of the financial and social conditions of the staff, fellows and MPA;
• the annual review of the basic salaries and stipends, but also of the subsistence allowances and family benefits.

The aim of the five-yearly review is to ensure that the financial and social conditions offered by the Organization allow it to recruit and retain staff members of the highest competence and integrity coming from all Member States. Regarding fellows, the conditions must remain attractive compared to those offered by comparable research institutions; for the MPAs, the conditions must take into account the highest cost of living in the local region of the Organization.

Annex A1 also defines the procedure for carrying out the five-yearly reviews, with the following steps:

• identification of the main recruitment markets, a report on staff recruitment and retention, and a proposal identifying the financial and social conditions to be reviewed;
• the data collection, partly entrusted to the Organisation for Economic Co-operation and Development (OECD);
• the comparison of financial and social conditions;
• the Management's proposals, following an internal consultation process, in particular with the Staff Association, and the decision of the CERN Council.

In all cases, comparative studies requested by the CERN Council must include the salaries, which are compared with those amongst the most competitive (Flemming principle) for career paths AA to B, and with the most competitive (Noblemaire principle) for career paths C to G. However, the results of the comparisons constitute only a guide for the Director-General to use in making his proposals, and for the Council in taking its decision relating to any adjustment of the financial and social conditions of staff members. Thus, no obligation exists for the Management and the Member States to take into account, partly or fully, the results of these comparative studies, as long as the attractiveness of the Organization and its ability to retain staff do not suffer.

Finally, Annex A1 also defines the annual review of basic salaries and stipends. The calculation, according to an established formula, is based on data collected locally and in the Member States. Once more this year, the calculated cost variation index (CVI) turned out to be negative, so that, for the fifth consecutive year, no annual adjustment of the basic salaries took place.

Information concerning the tripartite discussions will be communicated as the discussions evolve… Stay tuned, and for more information on these issues do not hesitate to discuss them with your staff delegates!
“The purpose of the Flemming principle is to ensure that the pay of international civil servants matches the best conditions of service at the duty station […]. Its purpose is to ensure parity of pay between international civil servants in the General Service category and the best-paid local workers in comparable jobs.” (ILOAT Judgment 1641)         “The Noblemaire principle, which dates back to the days of the League of Nations and which the United Nations took over, embodies two rules. One is that, to keep the international civil service as one, its employees shall get equal pay for work of equal value, whatever their nationality or the salaries earned in their own country. The other rule is that in recruiting staff from their full membership international organisations shall offer pay that will draw and keep citizens of countries where salaries are highest.” (ILOAT Judgment 825)

by Staff Association at February 12, 2016 10:26 AM

Emily Lakdawalla - The Planetary Society Blog

Field Report From Mars: Sol 4284 - February 11, 2016
Opportunity is continuing to explore the outcrops of Marathon Valley, on the west rim of Endeavour crater.

February 12, 2016 01:19 AM

February 11, 2016

Christian P. Robert - xi'an's og

The answer is e, what was the question?!

Sceaux, June 05, 2011

A rather exotic question on X validated: since π can be approximated by random sampling over a unit square, is there an equivalent for approximating e? This is an interesting question, as, indeed, why not focus on e rather than π after all?! But very quickly the very artificiality of the problem comes back to hit one in one’s face… With no restriction, it is straightforward to think of a Monte Carlo average that converges to e as the number of simulations grows to infinity. However, methods based on Poisson or normal simulations require complex functions like sine, cosine, or exponential… But then someone came up with a connection to the great Russian probabilist Gnedenko, who gave as an exercise that the average number N of uniforms one needs to add to exceed 1 is exactly e: indeed, E[N] is the sum over n of P(U₁+⋯+Uₙ ≤ 1) = 1/n!, which writes as

\sum_{n=0}^\infty\frac{1}{n!}=e

(The result was later detailed in the American Statistician as an introductory simulation exercise akin to Buffon’s needle.) This is a brilliant solution as it does not involve anything but a standard uniform generator. I do not think it relates in any close way to the generation from a Poisson process with parameter λ=1, where the probability of exceeding one in one step is e⁻¹; hence deriving a Geometric variable from this process leads to an unbiased estimator of e as well. As an aside, W. Huber proposed the following elegantly concise line of R code to implement an approximation of e:

1/mean(n*diff(sort(runif(n+1))) > 1)

Hard to beat, isn’t it?! (Although it is more exactly a Monte Carlo approximation of

\left(1-\frac{1}{n}\right)^{-(n+1)}

which adds a further level of approximation to the solution….)
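For completeness, here is a minimal vectorised R implementation of Gnedenko's estimator itself (capping each replication at 20 uniforms is harmless, since the probability that 20 uniforms sum to less than one is 1/20!):

# Gnedenko's estimator: average number of U(0,1) draws needed for the sum to exceed 1
nsim <- 1e5
u <- matrix(runif(20 * nsim), nrow = nsim)      # 20 draws per replication are plenty
counts <- apply(u, 1, function(x) which(cumsum(x) > 1)[1])
mean(counts)                                    # converges to e = 2.71828…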


Filed under: Books, R, Statistics Tagged: Buffon's needle, cross validated, Gnedenko, Monte Carlo integration, Poisson process, simulation

by xi'an at February 11, 2016 11:16 PM

Alexey Petrov - Symmetry factor

“Ladies and gentlemen, we have detected gravitational waves.”

The title says it all. Today, the Laser Interferometer Gravitational-Wave Observatory (or simply LIGO) collaboration announced the detection of gravitational waves coming from the merger of two black holes located somewhere in the Southern sky, in the direction of the Magellanic Clouds. In the presentation, organized by the National Science Foundation, David Reitze (Caltech), Gabriela González (Louisiana State), Rainer Weiss (MIT), and Kip Thorne (Caltech) announced to the room full of reporters — and thousands of scientists worldwide via the video feeds — that they have seen a gravitational wave event. Their paper, along with a nice explanation of the result, can be seen here.


The data that they have is rather remarkable. The event, which occurred on 14 September 2015, was seen by both sites (Livingston and Hanford) of the experiment, as can be seen in the picture taken from their presentation. It likely happened over a billion years ago (1.3B light years away) and is consistent with the merger of two black holes, of 29 and 36 solar masses. The resulting larger black hole’s mass is about 62 solar masses, which means that about 3 solar masses of energy (29+36−62=3) were radiated in the form of gravitational waves. This is a huge amount of energy! The shape of the signal is exactly what one should expect from the merging of two black holes, with 5.1 sigma significance.

It is interesting to note that the information presented today totally confirms the rumors that have been floating around for a couple of months. Physicists like to spread rumors, as it seems.

Since gravitational waves are quadrupolar, the most straightforward way to see them is to measure the relative stretches of the detector’s two arms, which are perpendicular to each other (see another picture, from the MIT LIGO site, of a gravitational wave from black holes falling onto each other and then merging). The LIGO device is a marvel of engineering — one needs to detect a signal that is very small, approximately the size of a nucleus on the length scale of the experiment. This is done with the help of interferometry, where the laser beams bounce through the arms of the experiment and are then compared to each other. The small change of phase of the beams can be related to the change of the relative distance traveled by each beam. This difference is induced by the passing gravitational wave, which contracts one of the arms and extends the other. How the noise that can mimic a gravitational wave signal is eliminated should be the subject of another blog post.
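To make “the size of a nucleus on the length scale of the experiment” concrete, here is a back-of-the-envelope check in R with round textbook numbers (mine, not LIGO’s official figures):

# How small is the measured effect?
h  <- 1e-21     # typical strain amplitude of GW150914
L  <- 4e3       # LIGO arm length in metres
dL <- h * L     # absolute change in arm length: ~4e-18 m
dL / 1.7e-15    # a few thousandths of a proton diameter (~1.7e-15 m)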

This is really a remarkable result, even though it was widely expected ever since the (indirect) discovery by Hulse and Taylor of a binary pulsar in 1974! It seems that now we have another way to study the Universe.


by apetrov at February 11, 2016 07:13 PM

Marco Frasca - The Gauge Connection

They did it!


This is a great moment in the history of physics: gravitational waves from the merging of two black holes were directly detected by the LIGO Collaboration. This is a new world we have arrived at, and there will be a lot to be explored and understood. I do not know whether it is the direct proof of the existence of gravitational waves or of black holes that fixes this great moment forever in the memory of mankind. But as of today we have both!

You can find an excellent account here. This is the paper:

LIGO's PRL


Thank you for this great work!

Abbott, B., Abbott, R., Abbott, T., Abernathy, M., Acernese, F., Ackley, K., Adams, C., Adams, T., Addesso, P., Adhikari, R., Adya, V., Affeldt, C., Agathos, M., Agatsuma, K., Aggarwal, N., Aguiar, O., Aiello, L., Ain, A., Ajith, P., Allen, B., Allocca, A., Altin, P., Anderson, S., Anderson, W., Arai, K., Arain, M., Araya, M., Arceneaux, C., Areeda, J., Arnaud, N., Arun, K., Ascenzi, S., Ashton, G., Ast, M., Aston, S., Astone, P., Aufmuth, P., Aulbert, C., Babak, S., Bacon, P., Bader, M., Baker, P., Baldaccini, F., Ballardin, G., Ballmer, S., Barayoga, J., Barclay, S., Barish, B., Barker, D., Barone, F., Barr, B., Barsotti, L., Barsuglia, M., Barta, D., Bartlett, J., Barton, M., Bartos, I., Bassiri, R., Basti, A., Batch, J., Baune, C., Bavigadda, V., Bazzan, M., Behnke, B., Bejger, M., Belczynski, C., Bell, A., Bell, C., Berger, B., Bergman, J., Bergmann, G., Berry, C., Bersanetti, D., Bertolini, A., Betzwieser, J., Bhagwat, S., Bhandare, R., Bilenko, I., Billingsley, G., Birch, J., Birney, R., Birnholtz, O., Biscans, S., Bisht, A., Bitossi, M., Biwer, C., Bizouard, M., Blackburn, J., Blair, C., Blair, D., Blair, R., Bloemen, S., Bock, O., Bodiya, T., Boer, M., Bogaert, G., Bogan, C., Bohe, A., Bojtos, P., Bond, C., Bondu, F., Bonnand, R., Boom, B., Bork, R., Boschi, V., Bose, S., Bouffanais, Y., Bozzi, A., Bradaschia, C., Brady, P., Braginsky, V., Branchesi, M., Brau, J., Briant, T., Brillet, A., Brinkmann, M., Brisson, V., Brockill, P., Brooks, A., Brown, D., Brown, D., Brown, N., Buchanan, C., Buikema, A., Bulik, T., Bulten, H., Buonanno, A., Buskulic, D., Buy, C., Byer, R., Cabero, M., Cadonati, L., Cagnoli, G., Cahillane, C., Bustillo, J., Callister, T., Calloni, E., Camp, J., Cannon, K., Cao, J., Capano, C., Capocasa, E., Carbognani, F., Caride, S., Diaz, J., Casentini, C., Caudill, S., Cavaglià, M., Cavalier, F., Cavalieri, R., Cella, G., Cepeda, C., Baiardi, L., Cerretani, G., Cesarini, E., Chakraborty, R., Chalermsongsak, T., Chamberlin, S., Chan, M., Chao, S., Charlton, P., Chassande-Mottin, E., Chen, H., Chen, Y., Cheng, C., Chincarini, A., Chiummo, A., Cho, H., Cho, M., Chow, J., Christensen, N., Chu, Q., Chua, S., Chung, S., Ciani, G., Clara, F., Clark, J., Cleva, F., Coccia, E., Cohadon, P., Colla, A., Collette, C., Cominsky, L., Constancio, M., Conte, A., Conti, L., Cook, D., Corbitt, T., Cornish, N., Corsi, A., Cortese, S., Costa, C., Coughlin, M., Coughlin, S., Coulon, J., Countryman, S., Couvares, P., Cowan, E., Coward, D., Cowart, M., Coyne, D., Coyne, R., Craig, K., Creighton, J., Creighton, T., Cripe, J., Crowder, S., Cruise, A., Cumming, A., Cunningham, L., Cuoco, E., Canton, T., Danilishin, S., D’Antonio, S., Danzmann, K., Darman, N., Da Silva Costa, C., Dattilo, V., Dave, I., Daveloza, H., Davier, M., Davies, G., Daw, E., Day, R., De, S., DeBra, D., Debreczeni, G., Degallaix, J., De Laurentis, M., Deléglise, S., Del Pozzo, W., Denker, T., Dent, T., Dereli, H., Dergachev, V., DeRosa, R., De Rosa, R., DeSalvo, R., Dhurandhar, S., Díaz, M., Di Fiore, L., Di Giovanni, M., Di Lieto, A., Di Pace, S., Di Palma, I., Di Virgilio, A., Dojcinoski, G., Dolique, V., Donovan, F., Dooley, K., Doravari, S., Douglas, R., Downes, T., Drago, M., Drever, R., Driggers, J., Du, Z., Ducrot, M., Dwyer, S., Edo, T., Edwards, M., Effler, A., Eggenstein, H., Ehrens, P., Eichholz, J., Eikenberry, S., Engels, W., Essick, R., Etzel, T., Evans, M., Evans, T., Everett, R., Factourovich, M., Fafone, V., Fair, H., Fairhurst, S., Fan, X., Fang, Q., Farinon, S., Farr, B., Farr, W., Favata, M., Fays, 
M., Fehrmann, H., Fejer, M., Feldbaum, D., Ferrante, I., Ferreira, E., Ferrini, F., Fidecaro, F., Finn, L., Fiori, I., Fiorucci, D., Fisher, R., Flaminio, R., Fletcher, M., Fong, H., Fournier, J., Franco, S., Frasca, S., Frasconi, F., Frede, M., Frei, Z., Freise, A., Frey, R., Frey, V., Fricke, T., Fritschel, P., Frolov, V., Fulda, P., Fyffe, M., Gabbard, H., Gair, J., Gammaitoni, L., Gaonkar, S., Garufi, F., Gatto, A., Gaur, G., Gehrels, N., Gemme, G., Gendre, B., Genin, E., Gennai, A., George, J., Gergely, L., Germain, V., Ghosh, A., Ghosh, A., Ghosh, S., Giaime, J., Giardina, K., Giazotto, A., Gill, K., Glaefke, A., Gleason, J., Goetz, E., Goetz, R., Gondan, L., González, G., Castro, J., Gopakumar, A., Gordon, N., Gorodetsky, M., Gossan, S., Gosselin, M., Gouaty, R., Graef, C., Graff, P., Granata, M., Grant, A., Gras, S., Gray, C., Greco, G., Green, A., Greenhalgh, R., Groot, P., Grote, H., Grunewald, S., Guidi, G., Guo, X., Gupta, A., Gupta, M., Gushwa, K., Gustafson, E., Gustafson, R., Hacker, J., Hall, B., Hall, E., Hammond, G., Haney, M., Hanke, M., Hanks, J., Hanna, C., Hannam, M., Hanson, J., Hardwick, T., Harms, J., Harry, G., Harry, I., Hart, M., Hartman, M., Haster, C., Haughian, K., Healy, J., Heefner, J., Heidmann, A., Heintze, M., Heinzel, G., Heitmann, H., Hello, P., Hemming, G., Hendry, M., Heng, I., Hennig, J., Heptonstall, A., Heurs, M., Hild, S., Hoak, D., Hodge, K., Hofman, D., Hollitt, S., Holt, K., Holz, D., Hopkins, P., Hosken, D., Hough, J., Houston, E., Howell, E., Hu, Y., Huang, S., Huerta, E., Huet, D., Hughey, B., Husa, S., Huttner, S., Huynh-Dinh, T., Idrisy, A., Indik, N., Ingram, D., Inta, R., Isa, H., Isac, J., Isi, M., Islas, G., Isogai, T., Iyer, B., Izumi, K., Jacobson, M., Jacqmin, T., Jang, H., Jani, K., Jaranowski, P., Jawahar, S., Jiménez-Forteza, F., Johnson, W., Johnson-McDaniel, N., Jones, D., Jones, R., Jonker, R., Ju, L., Haris, K., Kalaghatgi, C., Kalogera, V., Kandhasamy, S., Kang, G., Kanner, J., Karki, S., Kasprzack, M., Katsavounidis, E., Katzman, W., Kaufer, S., Kaur, T., Kawabe, K., Kawazoe, F., Kéfélian, F., Kehl, M., Keitel, D., Kelley, D., Kells, W., Kennedy, R., Keppel, D., Key, J., Khalaidovski, A., Khalili, F., Khan, I., Khan, S., Khan, Z., Khazanov, E., Kijbunchoo, N., Kim, C., Kim, J., Kim, K., Kim, N., Kim, N., Kim, Y., King, E., King, P., Kinzel, D., Kissel, J., Kleybolte, L., Klimenko, S., Koehlenbeck, S., Kokeyama, K., Koley, S., Kondrashov, V., Kontos, A., Koranda, S., Korobko, M., Korth, W., Kowalska, I., Kozak, D., Kringel, V., Krishnan, B., Królak, A., Krueger, C., Kuehn, G., Kumar, P., Kumar, R., Kuo, L., Kutynia, A., Kwee, P., Lackey, B., Landry, M., Lange, J., Lantz, B., Lasky, P., Lazzarini, A., Lazzaro, C., Leaci, P., Leavey, S., Lebigot, E., Lee, C., Lee, H., Lee, H., Lee, K., Lenon, A., Leonardi, M., Leong, J., Leroy, N., Letendre, N., Levin, Y., Levine, B., Li, T., Libson, A., Littenberg, T., Lockerbie, N., Logue, J., Lombardi, A., London, L., Lord, J., Lorenzini, M., Loriette, V., Lormand, M., Losurdo, G., Lough, J., Lousto, C., Lovelace, G., Lück, H., Lundgren, A., Luo, J., Lynch, R., Ma, Y., MacDonald, T., Machenschalk, B., MacInnis, M., Macleod, D., Magaña-Sandoval, F., Magee, R., Mageswaran, M., Majorana, E., Maksimovic, I., Malvezzi, V., Man, N., Mandel, I., Mandic, V., Mangano, V., Mansell, G., Manske, M., Mantovani, M., Marchesoni, F., Marion, F., Márka, S., Márka, Z., Markosyan, A., Maros, E., Martelli, F., Martellini, L., Martin, I., Martin, R., Martynov, D., Marx, J., Mason, K., Masserot, A., Massinger, 
T., Masso-Reid, M., Matichard, F., Matone, L., Mavalvala, N., Mazumder, N., Mazzolo, G., McCarthy, R., McClelland, D., McCormick, S., McGuire, S., McIntyre, G., McIver, J., McManus, D., McWilliams, S., Meacher, D., Meadors, G., Meidam, J., Melatos, A., Mendell, G., Mendoza-Gandara, D., Mercer, R., Merilh, E., Merzougui, M., Meshkov, S., Messenger, C., Messick, C., Meyers, P., Mezzani, F., Miao, H., Michel, C., Middleton, H., Mikhailov, E., Milano, L., Miller, J., Millhouse, M., Minenkov, Y., Ming, J., Mirshekari, S., Mishra, C., Mitra, S., Mitrofanov, V., Mitselmakher, G., Mittleman, R., Moggi, A., Mohan, M., Mohapatra, S., Montani, M., Moore, B., Moore, C., Moraru, D., Moreno, G., Morriss, S., Mossavi, K., Mours, B., Mow-Lowry, C., Mueller, C., Mueller, G., Muir, A., Mukherjee, A., Mukherjee, D., Mukherjee, S., Mukund, N., Mullavey, A., Munch, J., Murphy, D., Murray, P., Mytidis, A., Nardecchia, I., Naticchioni, L., Nayak, R., Necula, V., Nedkova, K., Nelemans, G., Neri, M., Neunzert, A., Newton, G., Nguyen, T., Nielsen, A., Nissanke, S., Nitz, A., Nocera, F., Nolting, D., Normandin, M., Nuttall, L., Oberling, J., Ochsner, E., O’Dell, J., Oelker, E., Ogin, G., Oh, J., Oh, S., Ohme, F., Oliver, M., Oppermann, P., Oram, R., O’Reilly, B., O’Shaughnessy, R., Ott, C., Ottaway, D., Ottens, R., Overmier, H., Owen, B., Pai, A., Pai, S., Palamos, J., Palashov, O., Palomba, C., Pal-Singh, A., Pan, H., Pan, Y., Pankow, C., Pannarale, F., Pant, B., Paoletti, F., Paoli, A., Papa, M., Paris, H., Parker, W., Pascucci, D., Pasqualetti, A., Passaquieti, R., Passuello, D., Patricelli, B., Patrick, Z., Pearlstone, B., Pedraza, M., Pedurand, R., Pekowsky, L., Pele, A., Penn, S., Perreca, A., Pfeiffer, H., Phelps, M., Piccinni, O., Pichot, M., Pickenpack, M., Piergiovanni, F., Pierro, V., Pillant, G., Pinard, L., Pinto, I., Pitkin, M., Poeld, J., Poggiani, R., Popolizio, P., Post, A., Powell, J., Prasad, J., Predoi, V., Premachandra, S., Prestegard, T., Price, L., Prijatelj, M., Principe, M., Privitera, S., Prix, R., Prodi, G., Prokhorov, L., Puncken, O., Punturo, M., Puppo, P., Pürrer, M., Qi, H., Qin, J., Quetschke, V., Quintero, E., Quitzow-James, R., Raab, F., Rabeling, D., Radkins, H., Raffai, P., Raja, S., Rakhmanov, M., Ramet, C., Rapagnani, P., Raymond, V., Razzano, M., Re, V., Read, J., Reed, C., Regimbau, T., Rei, L., Reid, S., Reitze, D., Rew, H., Reyes, S., Ricci, F., Riles, K., Robertson, N., Robie, R., Robinet, F., Rocchi, A., Rolland, L., Rollins, J., Roma, V., Romano, J., Romano, R., Romanov, G., Romie, J., Rosińska, D., Rowan, S., Rüdiger, A., Ruggi, P., Ryan, K., Sachdev, S., Sadecki, T., Sadeghian, L., Salconi, L., Saleem, M., Salemi, F., Samajdar, A., Sammut, L., Sampson, L., Sanchez, E., Sandberg, V., Sandeen, B., Sanders, G., Sanders, J., Sassolas, B., Sathyaprakash, B., Saulson, P., Sauter, O., Savage, R., Sawadsky, A., Schale, P., Schilling, R., Schmidt, J., Schmidt, P., Schnabel, R., Schofield, R., Schönbeck, A., Schreiber, E., Schuette, D., Schutz, B., Scott, J., Scott, S., Sellers, D., Sengupta, A., Sentenac, D., Sequino, V., Sergeev, A., Serna, G., Setyawati, Y., Sevigny, A., Shaddock, D., Shaffer, T., Shah, S., Shahriar, M., Shaltev, M., Shao, Z., Shapiro, B., Shawhan, P., Sheperd, A., Shoemaker, D., Shoemaker, D., Siellez, K., Siemens, X., Sigg, D., Silva, A., Simakov, D., Singer, A., Singer, L., Singh, A., Singh, R., Singhal, A., Sintes, A., Slagmolen, B., Smith, J., Smith, M., Smith, N., Smith, R., Son, E., Sorazu, B., Sorrentino, F., Souradeep, T., Srivastava, A., Staley, A., 
Steinke, M., Steinlechner, J., Steinlechner, S., Steinmeyer, D., Stephens, B., Stevenson, S., Stone, R., Strain, K., Straniero, N., Stratta, G., Strauss, N., Strigin, S., Sturani, R., Stuver, A., Summerscales, T., Sun, L., Sutton, P., Swinkels, B., Szczepańczyk, M., Tacca, M., Talukder, D., Tanner, D., Tápai, M., Tarabrin, S., Taracchini, A., Taylor, R., Theeg, T., Thirugnanasambandam, M., Thomas, E., Thomas, M., Thomas, P., Thorne, K., Thorne, K., Thrane, E., Tiwari, S., Tiwari, V., Tokmakov, K., Tomlinson, C., Tonelli, M., Torres, C., Torrie, C., Töyrä, D., Travasso, F., Traylor, G., Trifirò, D., Tringali, M., Trozzo, L., Tse, M., Turconi, M., Tuyenbayev, D., Ugolini, D., Unnikrishnan, C., Urban, A., Usman, S., Vahlbruch, H., Vajente, G., Valdes, G., Vallisneri, M., van Bakel, N., van Beuzekom, M., van den Brand, J., Van Den Broeck, C., Vander-Hyde, D., van der Schaaf, L., van Heijningen, J., van Veggel, A., Vardaro, M., Vass, S., Vasúth, M., Vaulin, R., Vecchio, A., Vedovato, G., Veitch, J., Veitch, P., Venkateswara, K., Verkindt, D., Vetrano, F., Viceré, A., Vinciguerra, S., Vine, D., Vinet, J., Vitale, S., Vo, T., Vocca, H., Vorvick, C., Voss, D., Vousden, W., Vyatchanin, S., Wade, A., Wade, L., Wade, M., Waldman, S., Walker, M., Wallace, L., Walsh, S., Wang, G., Wang, H., Wang, M., Wang, X., Wang, Y., Ward, H., Ward, R., Warner, J., Was, M., Weaver, B., Wei, L., Weinert, M., Weinstein, A., Weiss, R., Welborn, T., Wen, L., Weßels, P., Westphal, T., Wette, K., Whelan, J., Whitcomb, S., White, D., Whiting, B., Wiesner, K., Wilkinson, C., Willems, P., Williams, L., Williams, R., Williamson, A., Willis, J., Willke, B., Wimmer, M., Winkelmann, L., Winkler, W., Wipf, C., Wiseman, A., Wittel, H., Woan, G., Worden, J., Wright, J., Wu, G., Yablon, J., Yakushin, I., Yam, W., Yamamoto, H., Yancey, C., Yap, M., Yu, H., Yvert, M., Zadrożny, A., Zangrando, L., Zanolin, M., Zendri, J., Zevin, M., Zhang, F., Zhang, L., Zhang, M., Zhang, Y., Zhao, C., Zhou, M., Zhou, Z., Zhu, X., Zucker, M., Zuraw, S., & Zweizig, J. (2016). Observation of Gravitational Waves from a Binary Black Hole Merger. Physical Review Letters, 116(6), 061102. DOI: 10.1103/PhysRevLett.116.061102


Filed under: Astronomy, Astrophysics, General Relativity, Physics Tagged: Black holes, Gravitational waves, LIGO

by mfrasca at February 11, 2016 07:04 PM

Symmetrybreaking - Fermilab/SLAC

LIGO sees gravitational waves

The experiment confirms the last piece of Einstein’s general theory of relativity.

There’s officially a new way to look at the universe, and it’s not with a telescope.

After weeks of speculation, the LIGO Scientific Collaboration and Virgo Collaboration confirmed that they have seen waves in the very fabric of space-time, generated when two orbiting black holes spiraled into one another. So begins an era of gravitational wave astronomy.

These ripples in space were predicted as part of Albert Einstein’s general theory of relativity 100 years ago. While they had been measured indirectly through observation of orbiting pulsars, they had never been directly observed – until now.

“We have detected gravitational waves,” David Reitze, LIGO’s executive director, told a packed room at the National Press Club in Washington, DC, today. “We did it.”

The Advanced Laser Interferometer Gravitational-wave Observatory, or LIGO, picked up signatures of space stretching and warping as the black holes released energy in the form of gravitational waves 1.3 billion years ago. The black holes, 29 and 36 times the mass of the sun and about 150 kilometers in diameter, merged to form a larger black hole of 62 solar masses, releasing the remaining energy, equivalent to about three solar masses, in gravitational waves that started speeding towards Earth when multicellular life there was just developing.

The waves from the black hole merger were brief, lasting mere milliseconds. But the output from that collision was “50 times greater than all the power put out by all the stars in the universe put together,” says Kip Thorne, professor of physics at Caltech.
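As a rough cross-check of those numbers, here is a back-of-the-envelope sketch; the 20 ms burst duration and the figure for the combined light output of all stars are assumed round values, not the collaboration's fitted numbers:

    # Rough check of the radiated energy and peak power quoted above.
    # Assumed: ~3 solar masses radiated over ~20 ms of peak emission,
    # and ~1e48 W for the combined light output of all stars (an
    # order-of-magnitude figure only).
    M_SUN = 1.989e30   # kg
    C = 2.998e8        # m/s

    energy = 3.0 * M_SUN * C**2        # E = mc^2  ->  ~5.4e47 J
    peak_power = energy / 0.02         # ~2.7e49 W over ~20 ms
    print(f"radiated energy ~ {energy:.1e} J")
    print(f"peak power ~ {peak_power / 1e48:.0f}x the stars' combined output")

With these inputs the ratio comes out at a few tens, the same ballpark as the quoted factor of 50; the exact number depends on the assumed duration and luminosity estimate.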

The signal arrived during an engineering test on Sept. 14 last year, a few days before the formal start of Advanced LIGO’s first observing run, which lasted from Sept. 18 until mid-January.

“The signal took a billion years to come to Earth and produce this tiny distortion in our detectors that we are very proud to measure,” says Gabriela González, Louisiana State University professor and spokesperson for the LIGO Scientific Collaboration.

LIGO uses two identical interferometers in Louisiana and Washington to search for gravitational waves. At each one, a laser beam is split so it travels down a pair of perpendicular arms. At the end of each 4-kilometer-long tube, it bounces off a mirror and heads back toward the origin, where it recombines with the rest of the light.

LIGO uses identical interferometers in Livingston, Louisiana (above), and Hanford, Washington.

LIGO Laboratory

Without gravitational waves, the distance remains identical, and the light waves cancel each other. But if a gravitational wave passes through, it stretches space-time in one direction and compresses it in another. This makes one arm of the interferometer longer than the other, and the waves of light don’t match up as they should, revealing the telltale signal. LIGO is so sensitive, it can detect if the distance between its mirrors changes by 1/10,000 the width of a proton – a positively minuscule measurement.
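To attach numbers to that (a back-of-the-envelope sketch, not a figure from the LIGO paper): the change in arm length produced by a strain h is

    \Delta L = h\,L \approx 10^{-21} \times 4\times10^{3}\,\mathrm{m} \approx 4\times10^{-18}\,\mathrm{m}

for GW150914's peak strain of roughly 10^{-21}. That is a few thousandths of a proton's ~1.7 x 10^{-15} m diameter; the 1/10,000-of-a-proton figure describes the still smaller displacements the instrument can resolve.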

“This was a truly scientific moonshot,” Reitze says. “And we did it. We landed on the moon.”

Reitze says the collaboration took months of rechecking to make sure that the signal was not a test or a false signal. Scientists confirmed that the signal was a gravitational wave that beautifully matched the prediction made by supercomputers and Einstein’s theory. In addition to being the first direct observation of gravitational waves, this also provided the first proof that binary black holes exist in the universe.

The initial installation of LIGO ran for several years without seeing gravitational waves before beginning a five-year upgrade to create Advanced LIGO. The system is currently running four times better than when it turned off in 2010, but there are still improvements to go. At full capacity, LIGO will be 10 times more sensitive. Upgrades include an additional mirror, a more powerful laser source and improved sensors and seismic isolation.

“We are going to have a huge richness of gravitational wave signals in LIGO” over the coming years, Thorne says.

Large optic inspection at the LIGO Livingston Laboratory.

LIGO Laboratory

LIGO, which is jointly operated by MIT and Caltech and has collaborators from more than 80 institutions worldwide, is scheduled for a six-month observing run later this year. It will also be joined later this year by Europe's Advanced Virgo, a third interferometer that could help triangulate the location of wave-generating objects in the sky. This will help telescope-based astronomers aim their lenses at the right spot to look for optical counterparts when experiments like LIGO see a signal. Japan and India are also slated to have gravitational wave experiments.

And there’s plenty to study. While this signal emerged from the dance of two black holes, gravitational waves could also come from orbiting neutron stars or a black hole devouring a neutron star. There could even be relic gravitational waves left over from the big bang.

Scientists are interested in learning more about the properties of gravitational waves and using them to figure out just how many neutron stars and black holes are around or how binary systems form and change. But the questions don’t stop there. Data coming out of LIGO could help address how matter behaves in extreme conditions, whether general relativity is the right theory of gravity or if the black holes that actually exist line up with the black holes predicted by Einstein’s theory.

“Now that we have detectors able to detect these systems, now that we know binary black holes are out there, we’ll be listening to the universe,” González says.

Gravitational wave astronomy will help scientists peer into our universe in a new way. Over the course of history, astronomy has expanded from visible light to radio waves, microwaves, gamma rays and even neutrinos. This first direct observation of gravitational radiation is just the next wave of information from the universe.

“This is a very, very special moment,” says France Córdova, director of the National Science Foundation, which funds the LIGO observatories. “It’s seeing our universe with new eyes in an entirely new way.”

Signals of the binary black hole merger appeared at both LIGO detectors.

LIGO Laboratory/NSF

by Lauren Biron at February 11, 2016 06:25 PM

Quantum Diaries

Barely a breeze, yet it shakes the whole world

Today, scientists from the Laser Interferometer Gravitational-Wave Observatory, or LIGO, proudly announced that they have detected the very first gravitational waves. Described exactly one hundred years ago by Albert Einstein in his theory of General Relativity, these waves, long thought to be far too weak to be picked up, have at last been detected.

In 1916, Einstein described gravitation as a deformation of space and time, as if space were nothing but a fabric that stretches in the presence of massive objects. Empty space would be like a taut sheet. An object moving through that space, such as a ping-pong ball, would simply follow the surface of the sheet. Drop a heavy object onto the sheet and the fabric is deformed. The ping-pong ball no longer rolls in a straight line, but naturally follows the curve of the deformed space.

Falling onto the sheet, the heavy object creates small ripples that propagate around it, like wavelets on the surface of water. Likewise, the Big Bang or a collision between two black holes can also create ripples that would eventually reach the Earth.

This is the type of ripple that LIGO has finally detected, as this excellent video (in English) explains. The LIGO scientists used an interferometer, an apparatus with two identical arms as shown in the image below. A laser (bottom left) emits a beam of light that strikes a piece of glass (centre). Half of the beam is reflected; the other half carries on. The two beams travel exactly the same distance (4 km) before being reflected by a mirror.

LIGO-1

A beam of light, like a wave on the surface of water, has crests and troughs. On the way back, the two beams overlap again, but the length of the arms is such that the crests of one beam are shifted with respect to those of the other, so that the two beams cancel each other out. Consequently, a detector placed on the right would see no light at all.

LIGO-2

Now imagine that a gravitational wave, produced for example by a collision between two black holes, sweeps through the interferometer. The "fabric" of space would be stretched and then compressed as the wave passes. The length of the interferometer's arms would change, shifting the crests and troughs. The two beams would no longer cancel each other out. A detector would register an oscillating light for as long as a gravitational wave passes through the apparatus.

The challenge of this experiment is to eliminate all sources of vibration, whether from ocean waves, an earthquake or even traffic, since these would produce similar effects. The laser beams therefore travel in vacuum pipes, and the mirrors are mounted on springs and suspended from fine wires. External vibrations are thus damped by a factor of 10 billion.

To make sure that a signal really comes from a gravitational wave and not from some other disturbance, LIGO uses two identical interferometers more than 3000 km apart, one in Louisiana and the other in Washington State.

And here is that signal, produced during the merger of two black holes about 50 km across but thirty times more massive than the Sun. The collision generated a gravitational wave that travelled for a billion years before reaching the Earth on 14 September last year. The wave changed the length of the interferometer's 4 km arms by barely one thousandth of the size of a proton. A tiny oscillation lasting only 20 milliseconds, speeding up rapidly and then disappearing, exactly as predicted by the equations of general relativity.

Ligo-3

So when the two instruments detected this signal simultaneously, the coincidence left no room for doubt. It could only be gravitational waves. LIGO has detected only the classical part of these waves. We still do not know whether gravitational waves are quantized or not, and whether they come with a particle called the graviton.

For centuries, astronomers have used electromagnetic waves such as light to explore the Universe. Gravitational waves will provide a new tool to push the exploration of the Universe even further. What these waves will teach us will be well worth the hundred long years it took to discover them.

Pauline Gagnon

To learn more about particle physics and what is at stake at the LHC, see my book: « Qu’est-ce que le boson de Higgs mange en hiver et autres détails essentiels ».

To be notified when new blog posts appear, follow me on Twitter: @GagnonPauline, or add your name to this mailing list to receive an e-mail alert.

LIGO-4

The LIGO interferometer at the Hanford site in Washington State, with its 4 km long arms. ©NASA

by Pauline Gagnon at February 11, 2016 06:14 PM

Quantum Diaries

A faint ripple shakes the World

Today, scientists from the Laser Interferometer Gravitational-Wave Observatory or LIGO have proudly announced having detected the first faint ripples caused by gravitational waves. First predicted exactly one hundred years ago by Albert Einstein in the Theory of General Relativity, these gravitational waves, long believed to be too small to be seen, have at long last been detected.

In 1916, Einstein explained that gravitation is a distortion of space and time, as if it was a fabric that could be distorted by the presence of massive objects. An empty space would be like a taut sheet. Any object, like a ping-pong ball travelling in that space, would simply follow the surface of the sheet. Drop a heavy object on the sheet, and the fabric will be distorted. The ping-pong ball would no longer roll along a straight line but would naturally follow the curve of the distorted space.

A heavy object falling on that sheet would generate small ripples around it. Likewise, the Big Bang or collisions between black holes would also create ripples that would eventually reach the Earth.

These were the small disturbances LIGO was set to find. As explained in this excellent video, the scientists used an interferometer, that is, an apparatus with two identical arms as shown below. A laser (bottom left corner) emits a beam of light that hits a piece of glass (center). Half of the beam is reflected; the other half carries on. The two beams travel exactly the same distance (4 km), hit a mirror and bounce back.

LIGO-1

A light beam is a wave, and just like waves at the surface of water, it has crests and troughs. The arm lengths are such that when the beams return and overlap again, the two sets of waves are shifted with respect to each other, so that they cancel each other out. Hence, a detector placed at the bottom right corner would see no light at all.

LIGO-2

Now imagine that a gravitational wave, produced by the collision of two black holes for example, sweeps across the interferometer. The fabric of space would be stretched then compressed as the wave passes through. And so the length of the arms would change, shifting the pattern of crests and troughs. The two beams would no longer cancel each other. A light-sensitive detector would now detect some light that would pulsate as the gravitational wave sweeps across the apparatus.
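To put the cancellation in formulas (an idealized two-beam Michelson sketch; the real detectors add Fabry-Perot cavities and power recycling, so treat this as illustrative only), the light reaching the detector varies as

    I_{\mathrm{out}} = I_0 \cos^2\!\left(\frac{\Delta\phi}{2}\right),
    \qquad \Delta\phi = \frac{4\pi\,(L_1 - L_2)}{\lambda}.

With the operating point set on a dark fringe, a gravitational-wave-induced arm-length difference \delta L transmits light proportional to (2\pi\,\delta L/\lambda)^2 to leading order.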

The challenge is that any vibration caused by waves crashing on the shore, earthquakes, or even heavy traffic would disturb such an experiment by producing similar effects. So the laser beams travel in vacuum and the mirrors are mounted on shock-absorbing springs and suspended on fine wires to dampen any vibration by a factor of 10 billion.

To ensure a signal really comes from a gravitational wave and not from some other disturbance, LIGO used two identical laboratories located more than 3000 km apart in the USA, one in Louisiana, one in Washington State.

And here is the signal generated when two black holes, 50 km in diameter but 30 times more massive than the Sun, merged. This collision sent out a gravitational wave that traveled for about a billion years before reaching the Earth on 14 September 2015. This wave changed the length of the 4-km arms by one thousandth of the size of a proton. A tiny ripple that lasted a mere 20 milliseconds, accelerating quickly before disappearing, exactly as General Relativity predicted.

Ligo-3

So when both instruments detected the same signal, the coincidence between the two left no doubt. It really was from gravitational waves. So far, the LIGO experiment only detected the classical part of these waves. We still do not know if gravitational waves are quantized or not, that is, if they come with a particle called the graviton.

For centuries, astronomers have used electromagnetic waves such as light to explore the Universe. Gravitational waves will provide a new tool to study it even further. Other experiments such as BICEP2 are already looking for the ripples left over from the Big Bang. What we will learn from these waves will be well worth the hundred-year long wait from their prediction to their discovery.

Pauline Gagnon

To learn more on particle physics, don’t miss my book, out this July.

To be alerted of new postings, follow me on Twitter: @GagnonPauline or sign up on this mailing list to receive an e-mail notification.

 LIGO-4

The LIGO interferometer in Hanford, Washington State, USA, with its 4km-long arms. ©NASA

by Pauline Gagnon at February 11, 2016 06:09 PM

Peter Coles - In the Dark

Gravitational waves detected. Einstein was right … again

Some more reaction to the LIGO result…

CQG+

Clifford Will is the Editor-in-Chief of Classical and Quantum Gravity

As if celebrating the 100th birthday of general relativity weren’t enough, the LIGO-Virgo collaboration has provided “the icing on the cake” with today’s announcement of the first direct detection of gravitational waves. At press conferences in the USA and Europe, and in a paper in Physical Review Letters published afterward, the team announced the detection of a signal from a system of two merging black holes.

The signal arrived on 14 September, 2015 (its official designation is GW150914), and was detected by both the Hanford and Livingston advanced detectors of the LIGO observatory (the advanced Virgo instrument in Italy is not yet online). It was detected first by

View original post 477 more words


by telescoper at February 11, 2016 06:04 PM

David Berenstein, Moshe Rozali - Shores of the Dirac Sea

Whoop!

That is the sound of the gravitational waves hitting the LIGO detector. A chirp.

That is also the sound of the celebratory hurrahs from the gravity community. We finally have experimental (observational) confirmation of a prediction made by Einstein’s theory of general relativity 100 years ago.

The quest to hear gravitational waves was started about 50 years ago by Weber, and it is only now that enough sensitivity is available in the detectors to hear the ripples of spacetime as they pass through the Earth.

The particular event in question turned the equivalent of 3 solar masses into gravitational waves in a fraction of a second. This is much brighter in power than the brightest supernova. Remember that when supernovae collapse, the light emitted from them gets trapped in the shells of ejected matter, and the rise of the signal and afterglow is stretched out over months. This event was brighter in power than the output of all the stars in the visible universe combined! The event of Sep. 14 2015 recorded the merger of two black holes of intermediate masses (about 30 solar masses each) about 1.3 billion lightyears away.

The official press release is here, the PRL paper is here.

The New York times has a nice movie and article to mark this momentous scientific breakthrough.

Congratulations to the LIGO team.

Usual caveat: Now we wait for confirmation, yadda yadda.

Filed under: gravity, Physics Tagged: gravitational waves, gravity, LIGO

by dberenstein at February 11, 2016 05:53 PM

Quantum Diaries

“Ladies and gentlemen, we have detected gravitational waves.”

The title says it all. Today, the Laser Interferometer Gravitational-Wave Observatory (or simply LIGO) collaboration announced the detection of gravitational waves coming from the merger of two black holes located somewhere in the Southern sky, in the direction of the Magellanic Clouds. In the presentation, organized by the National Science Foundation, David Reitze (Caltech), Gabriela González (Louisiana State), Rainer Weiss (MIT), and Kip Thorne (Caltech) announced to the room full of reporters — and thousands of scientists worldwide via the video feeds — that they have seen a gravitational wave event. Their paper, along with a nice explanation of the result, can be seen here.

LIGO

The data that they have is rather remarkable. The event, which occurred on 14 September 2015, was seen by both sites (Livingston and Hanford) of the experiment, as can be seen in the picture taken from their presentation. It happened over a billion years ago (1.3 billion light years away) and is consistent with the merger of two black holes, of 29 and 36 solar masses. The resulting larger black hole’s mass is about 62 solar masses, which means that about 3 solar masses of energy (29+36-62=3) was radiated in the form of gravitational waves. This is a huge amount of energy! The shape of the signal is exactly what one should expect from the merging of two black holes, with a significance greater than 5.1 sigma.

It is interesting to note that the information presented today totally confirms the rumors that have been floating around for a couple of months. Physicists like to spread rumors, as it seems.

Since gravitational waves are quadrupolar, the most straightforward way to see them is to measure the relative stretches of the detector's two arms (see another picture from the MIT LIGO site), which are perpendicular to each other. The LIGO device is a marvel of engineering — one needs to detect a signal that is very small, approximately the size of a nucleus on the length scale of the experiment. This is done with the help of interferometry, where the laser beams bounce through the arms of the experiment and are then compared to each other. The small change of phase of the beams can be related to the change of the relative distance traveled by each beam. This difference is induced by the passing gravitational wave, which contracts one of the arms and extends the other. How noise that can mimic a gravitational wave signal is eliminated should be the subject of another blog post.

This is really a remarkable result, even though it has been widely expected ever since Hulse and Taylor's (indirect) discovery of the binary pulsar in 1974! It seems that now we have another way to study the Universe.

by Alexey at February 11, 2016 05:33 PM

Clifford V. Johnson - Asymptotia

What Fantastic News!

einstein_and_binary_atlantic_graphic

This is an amazing day for humanity! Notice I said humanity, not science, not physics - humanity. The LIGO experiment has announced the discovery of a direct detection of gravitational waves (actual ripples in spacetime itself!!), opening a whole new window with which to see and understand the universe. This is equivalent to Galileo first pointing a telescope at the sky and beginning to see things like the moons of Jupiter and the phases of Venus for the first time. Look how much we learned following from that... so we've a lot to look forward to. It is 100 years since gravitational waves were predicted, and we've now seen them directly for the first time!

Actually, more has been discovered in this announcement: the signal came from the merger of two large (stellar) black holes, and so this is also the first direct confirmation of such black holes' existence! (We've known about them [...] Click to continue reading this post

The post What Fantastic News! appeared first on Asymptotia.

by Clifford at February 11, 2016 04:57 PM

ZapperZ - Physics and Physicists

LIGO Reports Detection of Gravitational Wave
LIGO has officially announced the detection of gravitational waves.

Now, in a paper published in Physical Review Letters on February 11, the Laser Interferometer Gravitational-Wave Observatory (LIGO) and Virgo collaborations announce the detection of just such a black hole merger — knocking out two scientific firsts at once: the first direct detection of gravitational waves and the first observation of the merger of so-called binary black holes. The detection heralds a new era of astronomy — using gravitational waves to “listen in” on the universe.

In the early morning hours of September 14, 2015 — just a few days after the newly upgraded LIGO began taking data — a strong signal, consistent with merging black holes, appeared simultaneously in LIGO's two observatories, located in Hanford, Washington and Livingston, Louisiana.

Notice that this is the FIRST time I'm even mentioning this here, considering that for the past 2 weeks, at least, the rumors about this have been flying around all over the place.

Looks like, if this is confirmed, we know in which area the next Nobel Prize will be awarded.

There is also a sigh of relief, because we have been searching for this darn thing for years, if not decades. It is another aspect of General Relativity that has finally been detected.

Zz.

by ZapperZ (noreply@blogger.com) at February 11, 2016 04:35 PM

CERN Bulletin

Hommage
Christian Roy passed away on 31 January. He was 79 years old. His death has deeply touched the very many people who worked with him over his long career at CERN. His close colleagues and his supervisors knew well his professional qualities and his extreme dedication to the Organization. But today it is his human qualities that we remember, those he put at the service of the Staff Association. Christian Roy was one of the small number of staff delegates who, over many years, sacrificed a large part of their free time and family life to work for better conditions for their colleagues, active or retired, present and future. Among these delegates, some played a special role in the history of the Organization's social relations because they worked actively, over a long period, to improve them. For Christian Roy this commitment was particularly long and intense. As recently as 2014, had he not taken on responsibility within GAC-EPA for the problems pensioners face with the French tax authorities?

Christian was first of all, for many years, a member of the Executive Committee and President of the Staff Association. A few years earlier, Guy Maurin, one of his predecessors as President of the Association, had set relations with the Management on a new course. He had also introduced new methods of working in groups, notably to put forward the Staff's point of view in the management of the pension system. Christian Roy built on these changes to broaden the role of the Association. Driven in particular by his taste for legal questions, he led it to take a more active part in drafting the texts that govern labour law within the Organization, and to carry real weight in decisions concerning staff employment conditions in general. In this way he left his mark on social relations in the Organization. His breadth of vision and his undeniable talent as a speaker often gave him a power of persuasion that was hard to resist. We remember the determination with which, despite the reluctance of many delegates, he managed to organize the first staff demonstration at CERN: the Staff mobilized massively in an impressive procession to show the Finance Committee its firm refusal of the measures the Committee was about to take. It was also under his presidency that the Association took the initiative of informal meetings with Member State delegates, the aim being to prepare the formal discussions on staff policy better, through a clearer mutual understanding of each side's point of view. These meetings continue to this day because of their proven usefulness to all parties. A few years after his presidency, Christian again agreed to represent the Association, together with the then President, Franco Francia, and Vice-President Daniele Amati, on the RESCO Commission for the 1980 Five-Yearly Review. The Association's participation in this work was completely new at the time, and our three representatives sat only at the "invitation" of the Management.

It took a few more years for this Commission to become officially the Tripartite Commission we know today, in which the Association's delegation sits as a full member. Through his personal qualities, Christian Roy played an important role in these major stages in the relations between CERN's governing bodies and its Staff. It is to those qualities that the whole Staff owes so much.

by Hommage at February 11, 2016 04:29 PM

CERN Bulletin

Nursery School
Enrolments 2016-2017

Enrolments in the Nursery, the Nursery school and the school for the school year 2016-2017 will take place on 7, 8 and 9 March 2016, from 8 to 10 a.m., at the Nursery School. Registration forms will be available from Thursday 3 March. More information on the website: http://nurseryschool.web.cern.ch/.

by Nursery school at February 11, 2016 04:24 PM

CERN Bulletin

Cine club
Wednesday 17 February 2016 at 20:00
CERN Council Chamber

Knockin' on Heaven's Door
Directed by Thomas Jahn
Germany, 1997, 87 minutes

Two young men, Martin and Rudi, both suffering from terminal cancer, get to know each other in a hospital room. They drown their desperation in tequila and decide to take one last trip to the sea. Drunk and still in pyjamas, they steal the first fancy car they find, a 60's Mercedes convertible. The car happens to belong to a bunch of gangsters, who immediately start to chase it, since it contains more than the pistol Martin finds in the glove box.

Original version German / English; English subtitles

Wednesday 24 February 2016 at 20:00
CERN Council Chamber

Bandits
Directed by Katja von Garnier
Germany / France, 1997, 110 minutes

Four female cons who have formed a band in prison get a chance to play at a police ball outside the walls. They take the chance to escape. On the run from the law, they even manage to sell their music and become famous outlaws.

Original version German / English; English subtitles

by Cine club at February 11, 2016 04:16 PM

Sean Carroll - Preposterous Universe

Gravitational Waves at Last

ONCE upon a time, there lived a man who was fascinated by the phenomenon of gravity. In his mind he imagined experiments in rocket ships and elevators, eventually concluding that gravity isn’t a conventional “force” at all — it’s a manifestation of the curvature of spacetime. He threw himself into the study of differential geometry, the abstruse mathematics of arbitrarily curved manifolds. At the end of his investigations he had a new way of thinking about space and time, culminating in a marvelous equation that quantified how gravity responds to matter and energy in the universe.

Not being one to rest on his laurels, this man worked out a number of consequences of his new theory. One was that changes in gravity didn’t spread instantly throughout the universe; they traveled at the speed of light, in the form of gravitational waves. In later years he would change his mind about this prediction, only to later change it back. Eventually more and more scientists became convinced that this prediction was valid, and worth testing. They launched a spectacularly ambitious program to build a technological marvel of an observatory that would be sensitive to the faint traces left by a passing gravitational wave. Eventually, a century after the prediction was made — a press conference was called.

Chances are that everyone reading this blog post has heard that LIGO, the Laser Interferometric Gravitational-Wave Observatory, officially announced the first direct detection of gravitational waves. Two black holes, caught in a close orbit, gradually lost energy and spiraled toward each other as they emitted gravitational waves, which zipped through space at the speed of light before eventually being detected by our observatories here on Earth. Plenty of other places will give you details on this specific discovery, or tutorials on the nature of gravitational waves, including in user-friendly comic/video form.

What I want to do here is to make sure, in case there was any danger, that nobody loses sight of the extraordinary magnitude of what has been accomplished here. We’ve become a bit blasé about such things: physics makes a prediction, it comes true, yay. But we shouldn’t take it for granted; successes like this reveal something profound about the core nature of reality.

Some guy scribbles down some symbols in an esoteric mixture of Latin, Greek, and mathematical notation. Scribbles originating in his tiny, squishy human brain. (Here is what some of those scribbles look like, in my own incredibly sloppy handwriting.) Other people (notably Rainer Weiss, Ronald Drever, and Kip Thorne), on the basis of taking those scribbles extremely seriously, launch a plan to spend hundreds of millions of dollars over the course of decades. They concoct an audacious scheme to shoot laser beams at mirrors to look for modulated displacements of less than a millionth of a billionth of a centimeter — smaller than the diameter of an atomic nucleus. Meanwhile other people looked at the sky and tried to figure out what kind of signals they might be able to see, for example from the death spiral of black holes a billion light-years away. You know, black holes: universal regions of death where, again according to elaborate theoretical calculations, the curvature of spacetime has become so pronounced that anything entering can never possibly escape. And still other people built the lasers and the mirrors and the kilometers-long evacuated tubes and the interferometers and the electronics and the hydraulic actuators and so much more, all because they believed in those equations. And then they ran LIGO (and other related observatories) for several years, then took it apart and upgraded to Advanced LIGO, finally reaching a sensitivity where you would expect to see real gravitational waves if all that fancy theorizing was on the right track.

And there they were. On the frikkin’ money.

ligo-signal

Our universe is mind-bogglingly vast, complex, and subtle. It is also fantastically, indisputably knowable.

yeah_science_breaking_bad

I got a hard time a few years ago for predicting that we would detect gravitational waves within five years. And indeed, the track record of such predictions has been somewhat spotty. Outside Kip Thorne’s office you can find this record of a lost bet — after he predicted that we would see them before 1988. (!)

kip-bet-1

But this time around I was pretty confident. The existence of overly-optimistic predictions in the past doesn’t invalidate the much-better predictions we can make with vastly updated knowledge. Advanced LIGO represents the first time when we would have been more surprised not to see gravitational waves than to have seen them. And I believed in those equations.

I don’t want to be complacent about it, however. The fact that Einstein’s prediction has turned out to be right is an enormously strong testimony to the power of science in general, and physics in particular, to describe our natural world. Einstein didn’t know about black holes; he didn’t even know about lasers, although it was his work that laid the theoretical foundations for both ideas. He was working at a level of abstraction that reached as far as he could (at the time) to the fundamental basis of things, how our universe works at the deepest of levels. And his theoretical insights were sufficiently powerful and predictive that we could be confident in testing them a century later. This seemingly effortless insight that physics gives us into the behavior of the universe far away and under utterly unfamiliar conditions should never cease to be a source of wonder.

We’re nowhere near done yet, of course. We have never observed the universe in gravitational waves before, so we can’t tell for sure what we will see, but plausible estimates predict between one-half and several hundred events per year. Hopefully, the success of LIGO will invigorate interest in other ways of looking for gravitational waves, including at very different wavelengths. Here’s a plot focusing on three regimes: LIGO and its cousins on the right, the proposed space-based observatory LISA in the middle, and pulsar-timing arrays (using neutron stars throughout the galaxy as a giant gravitational-wave detector) on the left. Colorful boxes are predicted sources; solid lines are the sensitivities of different experiments. Gravitational-wave astrophysics has just begun; asking us what we will find is like walking up to Galileo and asking him what else you could discover with telescopes other than moons around Jupiter.

grav-wave-detectors-sources

For me, the decade of the 2010’s opened with five big targets in particle physics/gravitation/cosmology:

  1. Discover the Higgs boson.
  2. Directly detect gravitational waves.
  3. Directly observe dark matter.
  4. Find evidence of inflation (e.g. tensor modes) in the CMB.
  5. Discover a particle not in the Standard Model.

The decade is about half over, and we’ve done two of them! Keep up the good work, observers and experimentalists, and the 2010’s will go down as a truly historic decade in physics.

by Sean Carroll at February 11, 2016 03:42 PM

Peter Coles - In the Dark

LIGO: Live Reaction Blog

So the eagerly awaited press conference happened this afternoon. It started in unequivocal fashion.

“We have detected gravitational waves. We did it!”

As rumoured, the signal corresponds to the coalescence of two black holes, of masses 29 and 36 times the mass of the Sun.

The signal arrived in September 2015, very shortly after Advanced LIGO was switched on. There’s synchronicity for you! The LIGO collaboration have done wondrous things getting their sensitivity down to such a level that they can measure such a tiny effect, but there still has to be an event producing a signal to measure. Collisions of two such massive black holes are probably extremely rare so it’s a bit of good fortune that one happened just at the right time. Actually it was during an engineering test!

Here are the key results:

 

LIGO

 

Excellent signal to noise! I’m convinced! Many congratulations to everyone involved in LIGO! This has been a heroic effort that has taken many years of hard slog. They deserve the highest praise, as do the funding agencies who have been prepared to cover the costs of this experiment over such a long time. Physics of this kind is a slow burner, but it delivers spectacularly in the end!

You can find the paper here, although the server seems to be struggling to cope! One part of the rumour was wrong, however: the result is not in Nature, but in Physical Review Letters. There will no doubt be many more!

And right on cue here is the first batch of science papers!

No prizes for guessing where the 2016 Nobel Prize for Physics is heading, but in a collaboration of over 1000 people across the world, which few will receive the award?

So, as usual, I had a day filled with lectures, workshops and other meetings so I was thinking I would miss the press conference entirely, but in the end I couldn’t resist interrupting a meeting with the Head of the Department of Mathematics to watch the live stream…

P.S. A quick shout out to the UK teams involved in this work, including many old friends in the Gravitational Physics Group at Cardiff University (see BBC News item here) and Jim Hough and Sheila Rowan from Glasgow. If any of them are reading this, enjoy your trip to Stockholm!


by telescoper at February 11, 2016 03:41 PM

ZapperZ - Physics and Physicists

This Educational Video on Accelerators Doesn't Get It
OK, before you send me hate mail and comments, I KNOW that I'm hard on this guy. He was probably trying to make a sincere and honest effort to explain something based on what he knew. And besides, this video is from 2009 and maybe he has understood a lot more since then.

But still, this video is online, and someone pointed this out to me. I get a lot of these kinds of "references" from folks online, especially with Wikipedia entries. And try as I might to ignore most of these things, they ARE out there, and some of these sources do have not only misleading information, but also outright wrong information.

This video, made presumably by a high-school science teacher, tries to explain what a particle accelerator is. Unfortunately, he described what a particle accelerator CAN do (i.e. its use in high energy physics colliders), but completely neglected to describe what a particle accelerator actually is. This is a common error, because most people associate particle accelerators with high energy physics and think that they are one and the same.

They are not!



As I've stated in an earlier post, more than 95% of particle accelerators on earth have NOTHING to do with high energy physics. One of these things might even be in your doctor's office, to generate x-rays to look at your insides. So using a high energy physics experiment to explain what a particle accelerator is is like using creme brulee to describe what a dessert is. Sure, it can be a dessert, but it is such a small, SMALL part of a dessert.

A particle accelerator is, to put it bluntly, a device to accelerate particles! Period. Once they are accelerated, the charged particles can then be used for whatever they are needed for.

Now, that may sound trivial to you, but I can assure you that it isn't. Not only does one need to accelerate the charged particles to a set energy, but in some cases the "quality" of the accelerated particles must be of a certain standard. Case in point is a quantity called "emittance". If these are electrons, and they are to be used to generate light in a free-electron laser, then the required emittance, especially the transverse emittance, must be extremely low (in fact, the lower the better). This is where the study of beam physics is crucial (which is a part of accelerator physics).
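To make "emittance" concrete, here is a toy calculation of the statistical (RMS) transverse emittance for a made-up Gaussian bunch; the beam size and divergence below are invented round values, not data from any real machine:

    import numpy as np

    # Toy RMS-emittance calculation, to make "beam quality" concrete.
    # x is a particle's transverse position, xp its angle x' = dx/ds.
    rng = np.random.default_rng(0)
    x = rng.normal(0.0, 1e-4, 10000)    # 100 micron RMS beam size (assumed)
    xp = rng.normal(0.0, 1e-6, 10000)   # 1 microradian RMS divergence (assumed)

    # Statistical emittance: sqrt(<x^2><x'^2> - <x x'>^2)
    eps = np.sqrt(np.mean(x**2) * np.mean(xp**2) - np.mean(x * xp)**2)
    print(f"RMS emittance ~ {eps:.2e} m rad")   # ~1e-10 m rad for this bunch

A smaller emittance means a beam that stays small and parallel, which is exactly the "quality" a free-electron laser needs.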

The point I'm trying to make here is that the term "particle accelerator" is pretty generic and quite independent of "high energy physics" or "particle collider". Many accelerators don't even collide particles as part of their operation (in fact, many do NOT want these particles to collide, such as in synchrotron radiation facilities).

What this teacher neglected to describe is HOW a particle accelerator works: the idea that there are accelerating structures with a wide range of geometries, with either static or oscillating electric fields inside these structures, that are responsible for accelerating the charged particles, be they electrons, protons, positrons, antiprotons, heavy nuclei, etc. And even in high energy physics experiments, the particles don't usually collide with a "fixed" target, as implied in the video. LEP, the Tevatron, the LHC, etc. all collide beams moving in opposite directions. The proposed International Linear Collider is a linear accelerator that will collide positrons and electrons moving toward each other.
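As a concrete flavor of how those accelerating structures add up, here is some rough energy-gain arithmetic; the gradient, length, and cavity count are assumed ILC-like round numbers for illustration, not the specification of any real machine:

    # Rough energy-gain arithmetic for RF accelerating cavities.
    # All numbers below are illustrative assumptions.
    gradient = 31.5          # accelerating gradient, MV/m (assumed)
    active_length = 1.0      # active length per cavity, m (assumed)
    n_cavities = 8           # cavities per module (assumed)

    gain = gradient * active_length * n_cavities   # MeV per unit charge
    print(f"~{gain:.0f} MeV gained per module by a singly charged particle")

Stack up enough such modules and you reach GeV-scale beams; swap the oscillating field for a static high voltage and you have the kind of small accelerator found in an x-ray tube.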

So while the intention of this video is noble, unfortunately, the information content is suspect, and it missed its target completely. It does not really explain what a particle accelerator really is, merely what it can be used for. It also perpetuates the fallacy that particle accelerators are only for these exotic experiments, when they are definitely not.

Zz.

by ZapperZ (noreply@blogger.com) at February 11, 2016 03:06 PM

astrobites - astro-ph reader's digest

Opening Our Ears to the Universe: LIGO observes Gravitational Waves!

Title: Observation of Gravitational Waves from a Binary Black Hole Merger
Authors: The LIGO Scientific Collaboration and The Virgo Collaboration
Accepted by Physical Review Letters

*THE PRESS CONFERENCE ANNOUNCING THIS DISCOVERY IS ON NOW! TUNE IN HERE!!!*

Disclaimer: I am one of over 1000 scientists in the LIGO-Virgo Collaboration. Many members have been working for decades to accomplish this feat of scientific discovery. Twelve companion papers, filled with more amazing results, were published alongside the gravitational wave detection paper. This bite just scratches the surface of the implications of this discovery. See here for much more!

In 1916, the year after correctly formulating the theory of general relativity, Albert Einstein predicted that accelerating masses create ripples that propagate through the fabric of spacetime. However, Einstein himself believed that any attempt to detect these “gravitational waves” would prove futile, as the effect they have on their environment is minuscule. Almost poetically, a century after Einstein’s prediction this elusive phenomenon has been validated. On September 14th, 2015, a new window to the Universe was opened.

Detecting a ripple in spacetime

Though gravitational waves are invisible, they do have a measurable effect on the space they travel through by causing distances to shrink and stretch. That is where the Laser Interferometer Gravitational Wave Observatory (LIGO) comes in. The LIGO detectors in Livingston, Louisiana and Hanford, Washington utilize laser light as a very precise stopwatch to measure this effect. The detectors are identical Michelson interferometers, shooting powerful lasers down two equal-length arms, each four kilometers long. Since the speed of light is constant, if the race down the interferometer arms and back is a “tie” it means that the light traveled the exact same distance and the arms are exactly the same length. LIGO is set up to have the beams destructively interfere in this case, resulting in no signal in the detectors. However, if one of these arms is stretched or shrunk, say by a gravitational wave, the race will not be a tie; the beam traveling down the shorter arm will win the race and interfere with the beam traveling down the longer arm, creating a signal.

Figure 1. A simplified diagram of an Advanced LIGO detector. The upper-left inset shows the location and orientation of the two LIGO detectors, indicating the light travel time between the detectors. The upper-right inset maps the instrument noise for the Livingston (L1) and Hanford (H1) detectors. The noise is dominated by seismic activity at low frequencies and shot noise at high frequencies. Narrow spikes are caused by various sources, such as electric power grid harmonics and vibrational modes of the suspension system. Figure 3 of the detection paper.

The signal that LIGO detected on September 14th, referred to as GW150914, came from the merger of two black holes that were about 36 and 29 times the mass of the Sun. During the second before these giants merged, the energy released in gravitational waves from the system was 10 times greater than the energy released by all the stars in the observable universe! However, since spacetime is very “stiff” and the black holes merged over a billion lightyears away, even an event as powerful as this created a minuscule effect on the space that Earth occupies.

Whisper from the Universe

The strength of signals detected by LIGO is given by a dimensionless quantity known as strain, which is essentially the change in the length of the interferometer’s arms divided by the arm length. LIGO can detect strains corresponding to a change in the length of its 4 kilometer-long arms that is 10,000 times smaller than the width of a proton. For another perspective on how tiny this strain is, if the interferometer arms were instead an Earth-Sun distance of 150 million kilometers long, this strain would correspond to a change in their length only as large as a hydrogen atom! Over its 1/5th of a second in the LIGO frequency band, GW150914 reached a peak strain of 10^-21.
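Spelled out, the arithmetic behind that analogy (using round values) is simply

    \Delta L = h\,L \approx 10^{-21} \times 1.5\times10^{11}\,\mathrm{m} \approx 1.5\times10^{-10}\,\mathrm{m},

which is indeed about one hydrogen-atom diameter.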


Figure 2. GW150914 observed by the Hanford detector (left column) and Livingston detector (right column). The top row shows the strain measured in each of the detectors. The second row shows the strain in the 35-350 Hz band (cutting out low and high frequencies), a numerical relativity waveform for a system with parameters consistent with GW150914 (solid line), and the 90% credible regions for two independent waveform reconstructions (gray). The third row shows noise residuals after subtracting the filtered numerical relativity waveform from the filtered detector time series data. The bottom row shows a time-frequency representation of the strain data, with the distinctive “chirp” of the signal frequency increasing over time. Figure 1 of the detection paper.

Since LIGO needs to be sensitive to the tiny signals of gravitational waves, it is also susceptible to various environmental and instrumental sources of noise; LIGO data is inherently noise-dominated. Though coherence between the two detectors is used as an initial screening (astrophysical signals are expected to be seen in both detectors with a time difference less than or equal to the light travel time between the detectors), sophisticated data analysis techniques are still necessary to search through the data for true astrophysical signals. One of the methods used involves searching for genuinely “loud” signals. However, trying to find a gravitational wave signal is like trying to hear a single conversation at a very loud party – it greatly helps to know how the particular conversation is meant to sound. Equipped with the equations of general relativity, simulated waveforms are created for thousands of compact binary merger systems. These “template” waveforms are compared to the LIGO data as another means to search for true astrophysical signals. Both of these search techniques recovered GW150914 and provided the 5-sigma confidence that is the standard for scientific discovery. This means that a noise fluctuation mimicking a signal as loud as GW150914 would occur less than once every 203,000 years.
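Here is a minimal sketch of the template idea: a toy matched filter that slides a made-up chirp across white noise. The real pipelines whiten the data with the measured noise spectrum and use banks of hundreds of thousands of templates, so treat this purely as an illustration:

    import numpy as np

    # Toy matched filter: slide a known template across noisy data and
    # look for a peak in the normalized correlation.
    rng = np.random.default_rng(1)
    fs = 4096                                 # samples per second
    t = np.arange(0, 1.0, 1.0 / fs)
    # Frequency sweeping upward, loosely mimicking an inspiral "chirp".
    template = np.sin(2 * np.pi * (50 * t + 100 * t**2))

    sigma = 5.0
    data = rng.normal(0.0, sigma, 4 * fs)     # 4 s of toy detector noise
    offset = 6000                             # where we hide the signal
    data[offset:offset + template.size] += 2.0 * template

    corr = np.correlate(data, template, mode="valid")
    snr = corr / (sigma * np.sqrt(np.sum(template**2)))
    print("loudest lag:", np.argmax(np.abs(snr)), "(injected at", offset, ")")
    print(f"peak SNR ~ {np.abs(snr).max():.1f}")

Even though the injected signal is invisible to the eye in the raw noise, the correlation peaks sharply at the injection point, which is the essence of why knowing the waveform's shape helps so much.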

Deciphering the signal

Utilizing the waveform detected by LIGO and the best-fit numerical relativity waveforms, properties of the binary black hole system were estimated. The waveforms of binary black holes depend on 15 intrinsic (e.g. spin, mass) and extrinsic (e.g. sky location, inclination) parameters. Bayesian statistical techniques such as Markov Chain Monte Carlo were used to estimate the masses of the black holes to be ~36 and ~29 solar masses, the redshift at which the merger occurred to be ~0.09, and the final mass of the merged black hole to be ~62 solar masses (the missing 3 solar masses of energy is what was released in gravitational waves during the merger), as well as the sky location of the event.
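For a flavor of the Markov Chain Monte Carlo machinery, here is a toy one-parameter Metropolis-Hastings sampler with an invented Gaussian likelihood centered on 30 solar masses; the real analysis explores all 15 parameters against full waveform models:

    import numpy as np

    # Toy Metropolis-Hastings sampler for one "mass" parameter.
    # The Gaussian log-likelihood is invented for illustration only.
    rng = np.random.default_rng(2)

    def log_like(m):
        return -0.5 * ((m - 30.0) / 2.0) ** 2

    m, chain = 50.0, []                       # deliberately bad start
    for _ in range(20000):
        prop = m + rng.normal(0.0, 1.0)       # random-walk proposal
        if np.log(rng.uniform()) < log_like(prop) - log_like(m):
            m = prop                          # accept; otherwise keep m
        chain.append(m)

    samples = np.array(chain[5000:])          # discard burn-in
    print(f"estimate: {samples.mean():.1f} +/- {samples.std():.1f} solar masses")

The chain wanders toward the high-likelihood region and then samples it, so the retained samples trace out the posterior distribution of the parameter, complete with its uncertainty.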


Figure 3. The estimated sky location derived from the most accurate parameter estimation code. The different colors mark 10% increases in the probability that GW150914 came from these regions. Note that we do not expect to see an electromagnetic counterpart for the merging of two black holes. Figure derived from figure 4 of the companion paper Properties of the binary black hole merger GW150914.

In addition to being the first direct detection of gravitational waves, GW150914 is loaded with astrophysical implications. This event provides the first observational evidence of binary black hole systems, and tells us that these systems can merge within the age of the Universe. GW150914 also provides the first evidence that “heavy” black holes actually exist in nature; before this discovery, indirect evidence of stellar-mass black holes had only uncovered masses up to ~20 solar masses. As an added bonus, GW150914 provided a unique test of Einstein’s 100-year-old theory (which passed with flying colors), as well as being the first true test of the strong-field regime of general relativity.


Figure 4. Images from a numerical relativity model (top), the gravitational wave strain (middle), and the relative velocity and separation of the black holes (bottom) for GW150914 over time. The very close separation of the two objects before merger (which is derived from the waveform) indicates that they are indeed black holes. Figure 2 of the detection paper.

The discovery of GW150914 represents results from the first month of Advanced LIGO’s first observing run, which lasted a total of about 4 months. During this first month, another possible signal was found: LVT151012. Though much weaker than GW150914 and not significant enough to be declared a “detection,” it is likely astrophysical and from the coalescence of two black holes. From the detection of GW150914 and the possible detection of LVT151012 during the first month, the rate of binary black hole mergers for systems analogous to these can be estimated. If both GW150914 and LVT151012 are included, the rate of mergers for these classes of black holes is ~6-400 per cubic gigaparsec per year.

For almost all of humanity’s existence, the sole way in which we have studied our Universe is through light. This discovery has given us a new “sense” to explore the Universe, in a way opening our ears and allowing us to listen to the cosmos for the first time. Gravitational waves will allow us to probe objects, events, and epochs that are inaccessible to light, such as the merging of two black holes or the first second after the Big Bang. With more detections, we can further constrain the rate at which compact binary mergers occur, learn about the environments in which they occur, and test our models predicting their formation. Furthermore, when more advanced gravitational wave detectors join the network (such as advanced Virgo in late 2016), triangulation of gravitational wave sources in the sky will drastically improve, increasing the chances of detecting an electromagnetic counterpart to gravitational wave signals and probing these systems through multiple messengers. Needless to say, this detection will prove to be one of the most significant discoveries in modern physics, as it has opened an entire new realm of the Universe to explore.

by Michael Zevin at February 11, 2016 03:00 PM

Emily Lakdawalla - The Planetary Society Blog

Winter Issue of The Planetary Report is Here!
The winter issue of The Planetary Report is at the printer and will be in your mailbox soon if you're a member of The Planetary Society. (And if you're not, join now!)

February 11, 2016 02:23 PM

Peter Coles - In the Dark

Advance Thoughts on LIGO

By way of a warm-up to this afternoon’s announcement, here are some thoughts by another physicist…



by telescoper at February 11, 2016 01:47 PM

Christian P. Robert - xi'an's og

conference deadlines [register now!!]

Bike trail from Kenilworth to the University of Warwick

Registration is now open for our [fabulous!] CRiSM workshop on estimating [normalising] constants, in Warwick, on April 20-22 this year. While it is almost free (almost as in £40.00!), we strongly suggest you register asap, if only to secure a bedroom on the campus at a moderate rate of £55.00 per night (breakfast included!). Plus we would like to organise the poster session(s) and the associated “elevator” talks for the poster presenters.

While the deadline for early registration at AISTATS has now truly passed, we also encourage all researchers interested in this [great] conference to register as early as possible, if only [again] to secure a room at the conference location, the Parador Hotel in Cádiz. (Otherwise, there are plenty of rentals in the neighbourhood.)

Last but not least, the early registration for ISBA 2016 in Santa Margherita di Pula, Sardinia, is still open till February 29; after that, the rate moves immediately to late registration fees. The same deadline applies to bedroom reservations in the resort, with apparently only a few rooms left for some of the nights. Rentals and hotels around are also getting filled rather quickly.


Filed under: Kids, pictures, Statistics, Travel, University life Tagged: AISTATS 2016, Cadiz, Coventry, CRiSM, ISBA 2016, Italy, registration fees, Santa Margherita di Pula, Sardinia, Spain, University of Warwick

by xi'an at February 11, 2016 01:18 PM

Matt Strassler - Of Particular Significance

Advance Thoughts on LIGO

Scarcely a hundred years after Einstein revealed the equations for his theory of gravity (“General Relativity”) on November 25th, 1915, the world today awaits an announcement from the LIGO experiment, where the G in LIGO stands for Gravity. (The full acronym stands for “Laser Interferometer Gravitational Wave Observatory.”) As you’ve surely heard, the widely reported rumors are that at some point in the last few months, LIGO, recently upgraded to its “Advanced” version, finally observed gravitational waves — ripples in the fabric of space (more accurately, of space-time). These waves, which can make the length of LIGO shorter and longer by an incredibly tiny amount, seem to have come from the violent merger of two black holes, each with a mass [rest-mass!] dozens of times larger than the Sun. Their coalescence occurred long long ago (billions of years) in a galaxy far far away (a good fraction of the distance across the visible part of the universe), but the ripples from the event arrived at Earth just weeks ago. For a brief moment, it is rumored, they shook LIGO hard enough to be convincingly observed.

For today’s purposes, let me assume the rumors are true, and let me assume also that the result to be announced is actually correct. We’ll learn today whether the first assumption is right, but the second assumption may not be certain for some months (remember OPERA’s [NOT] faster-than-light neutrinos  and BICEP2’s [PROBABLY NOT] gravitational waves from inflation). We must always keep in mind that any extraordinary scientific result has to be scrutinized and confirmed by experts before scientists will believe it! Discovery is difficult, and a large fraction of such claims — large — fail the test of time.

What the Big News Isn’t

There will be so much press and so many blog articles about this subject that I’m just going to point out a few things that I suspect most articles will miss, especially those in the press.

Most importantly, if LIGO has indeed directly discovered gravitational waves, that’s exciting of course. But it’s by no means the most important story here.

That’s because gravitational waves were already observed indirectly, quite some time ago, in a system of two neutron stars orbiting each other. This pair of neutron stars, discovered by Joe Taylor and his graduate student Russell Hulse, is interesting because one of the neutron stars is a pulsar, an object whose rotation and strong magnetic field combine to make it a natural lighthouse, or more accurately a radiohouse, sending out pulses of radio waves that can be detected at great distances. The time between pulses shifts very slightly as the pulsar moves toward and away from Earth, so the pulsar’s motion around its companion can be carefully monitored. Its orbital period has slowly changed over the decades, and the changes are perfectly consistent with what one would expect if the system were losing energy, emitting it in the form of unseen gravitational waves at just the rate predicted by Einstein’s theory (as shown in this graph). For their discovery, Hulse and Taylor received the 1993 Nobel Prize. By now, there are other examples of similar pairs of neutron stars, also showing the same type of energy loss in detailed accord with Einstein’s equations.
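
For the quantitatively inclined, that predicted rate of orbital decay can be estimated from the standard quadrupole (Peters) formula. Here is a minimal Python sketch with textbook parameter values for the Hulse-Taylor system; the numbers are illustrative, not a substitute for the published timing analysis:

    import math

    # Peters (1964) quadrupole formula for the orbital-period decay of a compact
    # binary, evaluated with textbook parameters for PSR B1913+16.
    G, c, Msun = 6.674e-11, 2.998e8, 1.989e30   # SI units

    m1, m2 = 1.441 * Msun, 1.387 * Msun   # pulsar and companion masses
    P = 27906.98                          # orbital period in seconds (~7.75 h)
    e = 0.6171                            # orbital eccentricity

    ecc = (1 + (73/24) * e**2 + (37/96) * e**4) / (1 - e**2)**3.5
    dPdt = (-192 * math.pi / 5) * (2 * math.pi * G / P)**(5/3) \
           * m1 * m2 / (m1 + m2)**(1/3) / c**5 * ecc

    print(f"dP/dt = {dPdt:.2e}")   # about -2.4e-12 seconds per second

The result, a shrinkage of roughly 2.4 microseconds of orbital period per year, is tiny, but decades of pulse timing made it measurable.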

A bit more subtle (so you can skip this paragraph if you want), but also more general, is that some kind of gravitational waves are inevitable… inevitable, after you accept Einstein’s earlier (1905) equations of special relativity, in which he suggested that the speed of light is a sort of universal speed limit on everything, imposed by the structure of space-time.  Sound waves, for instance, exist because the speed of sound is finite; if it were infinite, a vibrating guitar string would make the whole atmosphere wiggle back and forth in sync with the guitar string.  Similarly, since effects of gravity must travel at a finite speed, the gravitational effects of orbiting objects must create waves. The only question is the specific properties those waves might have.

No one, therefore, should be surprised that gravitational waves exist, or that they travel at the universal speed limit, just like electromagnetic waves (including visible light, radio waves, etc.) No one should even be surprised that the waves LIGO is (perhaps) detecting have properties predicted by Einstein’s specific equations for gravity; if they were different in a dramatic way, the Hulse-Taylor neutron stars would have behaved differently than expected.

Furthermore, no one should be surprised if waves from a black hole merger have been observed by the Advanced LIGO experiment. This experiment was designed from the beginning, decades ago, so that it could hardly fail to discover gravitational waves from the coalescence of two black holes, two neutron stars, or one of each. We know these mergers happen, and the experts were very confident that Advanced LIGO could find them. The really serious questions were: (a) would Advanced LIGO work as advertised? (b) if it worked, how soon would it make its first discovery? and (c) would the discovery agree in detail with expectations from Einstein’s equations?

Big News In Scientific Technology

So the first big story is that Advanced LIGO WORKS! This experiment represents one of the greatest technological achievements in human history. Congratulations are due to the designers, builders, and operators of this experiment — and to the National Science Foundation of the United States, which is LIGO’s largest funding source. U.S. taxpayers, who on average each contributed a few cents per year over the past two-plus decades, can be proud. And because of the new engineering and technology that were required to make Advanced LIGO functional, I suspect that, over the long run, taxpayers will get a positive financial return on their investment. That’s in addition of course to a vast scientific return.

Advanced LIGO is not even in its final form; further improvements are in the works. Currently, Advanced LIGO consists of two detectors located 2000 miles (3000 kilometers) apart. Each detector consists of two “arms” a few miles (kilometers) long, oriented at right angles, and the lengths of the arms are continuously compared.  This is done using exceptionally stable lasers reflecting off exceptionally perfect mirrors, and requiring use of sophisticated tricks for mitigating all sorts of normal vibrations and even effects of quantum “jitter” from the Heisenberg uncertainty principle. With these tools, Advanced LIGO can detect when passing gravitational waves change the lengths of LIGO’s arms by … incredibly … less than one part in a billion trillion (1,000,000,000,000,000,000,000). That’s an astoundingly tiny distance: a thousand times smaller than the radius of a proton. (A proton itself is a hundred thousand times smaller, in radius, than an atom. Indeed, LIGO is measuring a distance as small as can be probed by the Large Hadron Collider — albeit with a very very tiny energy, in contrast to the collider.) By any measure, the gravitational experimenters have done something absolutely extraordinary.
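
To put numbers on this, here is a quick back-of-the-envelope check in Python, taking LIGO’s 4-kilometer arms and a representative proton charge radius:

    # Back-of-the-envelope: how far does a 4 km LIGO arm move at these strains?
    arm_length = 4e3                 # meters
    proton_radius = 0.84e-15         # meters (proton charge radius)

    for strain in (1e-21, 1e-22):
        delta_L = strain * arm_length
        print(f"strain {strain:.0e}: arm changes by {delta_L:.0e} m,"
              f" {proton_radius / delta_L:.0f}x below a proton radius")

At a strain of one part in 10^21 the arms move by about 4×10^-18 meters, a few hundred times smaller than a proton; at the still smaller strains LIGO can sense, the factor passes a thousand.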

Big News In Gravity

The second big story: from the gravitational waves that LIGO has perhaps seen, we would learn that the merger of two black holes occurs, to a large extent, as Einstein’s theory predicts. The success of this prediction for what the pattern of gravitational waves should be is a far more powerful test of Einstein’s equations than the mere existence of the gravitational waves!

Imagine, if you can… Two city-sized black holes, each with a mass [rest-mass!] tens of times greater than the Sun, and separated by a few tens of miles (tens of kilometers), orbit each other. They circle faster and faster, as often, in their last few seconds, as 100 times per second. They move at a speed that approaches the universal speed limit. This extreme motion creates an ever larger and increasingly rapid vibration in space-time, generating large space-time waves that rush outward into space. Finally the two black holes spiral toward each other, meet, and join together to make a single black hole, larger than the first two and spinning at an incredible rate.  It takes a short moment to settle down to its final form, emitting still more gravitational waves.

During this whole process, the total amount of energy emitted in the vibrations of space-time is a few times larger than you’d get if you could take the entire Sun and (magically) extract all of the energy stored in its rest-mass (E=mc²). This is an immense amount of energy, significantly more than emitted in a typical supernova. Indeed, LIGO’s black hole merger may perhaps be the most titanic event ever detected by humans!
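
For scale, here is that comparison in numbers, taking roughly three solar masses radiated (the value reported for GW150914):

    # Rest-mass energy of the Sun versus the ~3 solar masses radiated by GW150914.
    c, Msun = 2.998e8, 1.989e30      # SI units

    E_sun = Msun * c**2              # ~1.8e47 joules
    E_gw = 3 * E_sun                 # ~5.4e47 joules radiated as gravitational waves
    print(f"{E_sun:.1e} J vs {E_gw:.1e} J")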

This violent dance of darkness involves very strong and complicated warping of space and time. In fact, it wasn’t until 2005 or so that the full calculation of the process, including the actual moment of coalescence, was possible, using highly advanced mathematical techniques and powerful supercomputers!

By contrast, the resulting ripples we get to observe, billions of years later, are much more tame. Traveling far across the cosmos, they have spread out and weakened. Today they create extremely small and rather simple wiggles in space and time. You can learn how to calculate their properties in an advanced university textbook on Einstein’s gravity equations. Not for the faint of heart, but certainly no supercomputers required.

So gravitational waves are the (relatively) easy part. It’s the prediction of the merger’s properties that was the really big challenge, and its success would represent a remarkable achievement by gravitational theorists. And it would provide powerful new tests of whether Einstein’s equations are in any way incomplete in their description of gravity, black holes, space and time.

Big News in Astronomy

The third big story: If today’s rumor is indeed of a real discovery, we are witnessing the birth of an entirely new field of science: gravitational-wave astronomy. This type of astronomy is complementary to the many other methods we have of “looking” at the universe. What’s great about gravitational wave astronomy is that although dramatic events can occur in the universe without leaving a signal visible to the eye, and even without creating any electromagnetic waves at all, nothing violent can happen in the universe without making waves in space-time. Every object creates gravity, through the curvature of space-time, and every object feels gravity too. You can try to hide in the shadows, but there’s no hiding from gravity.

Advanced LIGO may have been rather lucky to observe a two-black-hole merger so early in its life. But we can be optimistic that the early discovery means that black hole mergers will be observed as often as several times a year even with the current version of Advanced LIGO, which will be further improved over the next few years. This in turn would imply that gravitational wave astronomy will soon be a very rich subject, with lots and lots of interesting data to come, even within 2016. We will look back on today as just the beginning.

Although the rumored discovery is of something expected — experts were pretty certain that mergers of black holes of this size happen on a fairly regular basis — gravitational wave astronomy might soon show us something completely unanticipated. Perhaps it will teach us surprising facts about the numbers or properties of black holes, neutron stars, or other massive objects. Perhaps it will help us solve some existing mysteries, such as those of gamma-ray bursts. Or perhaps it will reveal currently unsuspected cataclysmic events that may have occurred somewhere in our universe’s past.

Prizes On Order?

So it’s really not the gravitational waves themselves that we should celebrate, although I suspect that’s what the press will focus on. Scientists already knew that these waves exist, just as they were aware of the existence of atoms, neutrinos, and top quarks long before these objects were directly observed. The historic aspects of today’s announcement would be in the successful operation of Advanced LIGO, in its new way of “seeing” the universe that allows us to observe two black holes becoming one, and in the ability of Einstein’s gravitational equations to predict the complexities of such an astronomical convulsion.

Of course all of this is under the assumptions that the rumors are true, and also that LIGO’s results are confirmed by further observations. Let’s hope that any claims of discovery survive the careful and proper scrutiny to which they will now be subjected. If so, then prizes of the highest level are clearly in store, and will be doled out to quite a few people, experimenters for designing and building LIGO and theorists for predicting what black-hole mergers would look like. As always, though, the only prize that really matters is given by Nature… and the many scientists and engineers who have contributed to Advanced LIGO may have already won.

Enjoy the press conference this morning. I, ironically, will be in the most inaccessible of places: over the Atlantic Ocean.  I was invited to speak at a workshop on Large Hadron Collider physics this week, and I’ll just be flying home. I suppose I can wait 12 hours to find out the news… it’s been 44 years since LIGO was proposed…


Filed under: Astronomy, Gravitational Waves Tagged: astronomy, black holes, Gravitational Waves, LIGO

by Matt Strassler at February 11, 2016 01:07 PM

astrobites - astro-ph reader's digest

Uncovering planets and stellar activity using only radial velocities

Meet stellar activity: The Problem

The Doppler method, or the radial velocity (RV) method, was the first method used to detect planets around solar-type stars: just as a star gravitationally tugs an orbiting planet, the planet tugs the host star, causing the star to move in its own tiny orbit. This ‘wobble’ motion of the star can then be detected as tiny Doppler shifts in the star’s spectra. Over the last two decades we have found around 500 planets with this method, and to this day it remains the most direct way to measure exoplanet masses.

However, the RV method has its limitations. One of them is stellar activity.

Stars are not perfectly stable, smooth objects. Instead, stars are active: they move, oscillate, flare, and rotate. Any features present on the stellar surface, like granules, starspots, or faculae, dance around the surface of the star, fading in and out of view (see video of evolving sunspots below). This variability—or ‘stellar activity’—can induce unwanted signals in the stellar spectra we observe. Annoyingly, it just so happens that these signals can have similar frequencies and amplitudes to planet signals. This is a problem. We might mistake periodic stellar oscillations for planets. Meet stellar activity: The Problem.

How do we differentiate between stellar activity and planets? Good question. Excellent question. This is the question the authors of today’s paper seek to answer.

More specifically, the authors ask: How do we tell the difference between planet signals and stellar activity using only RVs? The authors note that other groups have used both RVs and photometric data to successfully disentangle planet signals from stellar activity. However, such an approach requires a joint model for photometric and RV variations. And what do we do when we only have RV data?

The authors therefore set out to create a general framework to model stellar activity using only RVs, using few assumptions about their dataset, while confidently recovering planets if any are present.

Modeling stellar activity using Gaussian Processes

In the paper, the authors describe their modeling procedure in detail. In short, the authors use a Bayesian Markov Chain Monte Carlo (MCMC) model. This essentially means that they have a model that takes in data and any prior knowledge the authors have about it. The model then tells them how well the different model parameters describe the data, after calculating a multitude of candidate solutions. In the authors’ application, they do not know the final number of planets (they are trying to find it!), so their model varies the number of planets, while fitting for two main signals in the RV data: planet signals, and stellar activity signals. Let’s take a closer look at how the authors model these signals.

First, the planet signals. The planet signals are relatively easy: they cause the whole spectrum of the star to shift, producing a clean Doppler shift that is straightforward to measure. The planet orbits can be described with Kepler’s laws.
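
As a concrete illustration, here is a minimal Python sketch of a single-planet Keplerian RV model; it is not the authors’ actual code (that is linked at the end of this post), and the parameter values are made up:

    import numpy as np

    # Minimal single-planet radial-velocity model: solve Kepler's equation and
    # evaluate the standard RV curve. All parameter values below are made up.
    def solve_kepler(M, e, tol=1e-10):
        """Solve Kepler's equation M = E - e*sin(E) for E by Newton iteration."""
        E = M.copy()
        for _ in range(100):
            dE = (E - e * np.sin(E) - M) / (1 - e * np.cos(E))
            E -= dE
            if np.max(np.abs(dE)) < tol:
                break
        return E

    def rv_curve(t, K, P, e, omega, t_peri):
        """Stellar RV: v(t) = K * (cos(nu + omega) + e*cos(omega))."""
        M = np.mod(2 * np.pi * (t - t_peri) / P, 2 * np.pi)   # mean anomaly
        E = solve_kepler(M, e)                                # eccentric anomaly
        nu = 2 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),   # true anomaly
                            np.sqrt(1 - e) * np.cos(E / 2))
        return K * (np.cos(nu + omega) + e * np.cos(omega))

    t = np.linspace(0, 5, 200)                                     # days
    rv = rv_curve(t, K=3.5, P=0.85, e=0.1, omega=0.3, t_peri=0.0)  # m/s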

Stellar activity signals are harder to describe. Stellar activity can be periodic, or quasi-periodic, and generally acts on a part of the spectrum at a time. The overall behavior of stellar activity can be modeled as correlated noise. To model stellar activity, the authors use Gaussian Processes. Gaussian processes are a flexible tool in the astronomer’s statistical toolbox—important as they inherit properties from the normal distribution—and an efficient way to model correlated noise.
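
To make this concrete, here is a generic sketch of one common quasi-periodic kernel, tied to a stellar rotation period of about 23 days (roughly that of CoRoT-7); the exact kernel form and hyperparameters used by the authors may differ:

    import numpy as np

    # One generic quasi-periodic Gaussian-process kernel (conventions for the
    # exact form vary; these hyperparameter values are illustrative).
    def quasi_periodic_kernel(t1, t2, amp, P_rot, l_evol, l_per):
        """Covariance of activity RVs: periodic at the rotation period P_rot,
        decorrelating over l_evol as active regions evolve."""
        dt = t1[:, None] - t2[None, :]
        return amp**2 * np.exp(-np.sin(np.pi * dt / P_rot)**2 / (2 * l_per**2)
                               - dt**2 / (2 * l_evol**2))

    t = np.linspace(0, 60, 100)                    # observation times (days)
    K = quasi_periodic_kernel(t, t, amp=5.0, P_rot=23.0, l_evol=30.0, l_per=0.5)

    # Draw one realization of a correlated "activity" signal from this kernel:
    rng = np.random.default_rng(0)
    activity = rng.multivariate_normal(np.zeros(t.size), K + 1e-8 * np.eye(t.size))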

Detecting two planets confidently

The authors apply their method to HARPS observations of the active star CoRoT-7, confidently detecting two planets. Figure 1 shows one of the main outputs from their simulation: the posterior distribution of the number of planets, which can be used to infer how many planets are supported by the data and to gain information about their orbital parameters, in particular their masses.

What do the authors mean by ‘confidently’ detecting two planets? The authors use a strict detection criterion: to claim a detection of N planets, the posterior probability of N planets must be at least 150 times greater than the probability of N-1 planets. This approach regards false positives (saying there is a planet when there really isn’t one) as worse than false negatives (saying there isn’t a planet when there really is one). This is highlighted in Figure 1, which shows that, according to this criterion, there is strong evidence for two planets, while the evidence for more planets is relatively weak.
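
In code, the criterion is a simple posterior-odds check; the posterior values below are invented for illustration:

    # The criterion in code: claim N planets only if p(N) >= 150 * p(N-1).
    def planets_detected(posterior, threshold=150.0):
        """Largest N whose posterior odds over N-1 planets pass the threshold."""
        n = 0
        for k in range(1, len(posterior)):
            prev, cur = posterior[k - 1], posterior[k]
            if cur > 0 and (prev == 0 or cur / prev >= threshold):
                n = k
        return n

    posterior = [0.0, 0.0, 0.85, 0.12, 0.03]   # p(0), p(1), ..., p(4), made up
    print(planets_detected(posterior))          # -> 2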


Figure 1. The posterior distribution for the number of planets around CoRoT-7. The ratios of the probabilities between models with 1, 2, and 3 planets are highlighted (note that p(0) = p(1) = 0). According to the authors’ detection criterion, the evidence for two planets is much stronger than the evidence for three or more.

RVs carry treasures of information

The planets around CoRoT-7 (CoRoT-7b and CoRoT-7c) had been announced previously in earlier papers, but the high activity level of CoRoT-7 created a lot of discussion, with different planet mass estimates being reported. Another group combined RV data from HARPS with simultaneous photometric measurements to successfully disentangle the stellar activity signal. The authors of today’s paper achieve similar results, now using only RVs, showing that RVs are rich in information content. The importance of this work is not the specific application to CoRoT-7, but rather that the authors have provided a fast framework to facilitate the study of planets around active stars using only RVs. In the future, the authors plan to test their framework further, to see how well it performs on other RV datasets.

The authors have made all the code and data used in this paper available online on GitHub: https://github.com/j-faria/exoBD-CoRoT7. Take a peek, help them improve it, and by all means use it to find more planets!

by Gudmundur Stefansson at February 11, 2016 04:18 AM

February 10, 2016

David Berenstein, Moshe Rozali - Shores of the Dirac Sea

Gravitational waves announcement from LIGO expected

As the rumor noise level has increased over the last few weeks, and LIGO has a press conference scheduled for tomorrow morning, everyone in the gravity community is expecting that LIGO will announce the first detection of gravitational waves.


A roundup of rumors can be found here and here and here and here.

Preprints with postdictions that sound like predictions can be found here, for example. I’ve been told that the cat has been out of the bag for a while, and people with inside information have been posting papers to the arXiv in advance of the LIGO announcement.

Obviously this is very exciting, and hopefully the announcement tomorrow will usher in a new era of gravitational astronomy.



Filed under: gravity, Physics

by dberenstein at February 10, 2016 11:28 PM

Clifford V. Johnson - Asymptotia

News from the Front, XII: Simplicity

Ok, I promised to explain the staircase I put up on Monday. I noticed something rather nice recently, and reported it (actually, two things) in a recent paper, here. It concerns those things I called "Holographic Heat Engines" which I introduced in a paper two years ago, and which I described in some detail in a previous post. You can go to that post in order to learn the details - there's no point repeating it all again - but in short the context is an extension of gravitational thermodynamics where the cosmological constant is dynamical, therefore supplying a meaning to the pressure and the volume variables (p,V) that are normally missing in black hole thermodynamics... Once you have those, it seems obvious that you can start considering processes that do mechanical work (from the pdV term in the first law) and within a short while the idea of heat engines in which the black hole is the working substance comes along. Positive pressure corresponds to negative cosmological constant and so the term "holographic heat engines" is explained. (At least to those who know about holographic dualities.)

So you have a (p,V) plane, some heat flows, and an equation of state determined by the species of (asymptotically AdS) black hole you are working with. It's like discovering a whole new family of fluids for which I know the equation of state (often exactly) and now I get to work out the properties of the heat engines I can define with them. That's what this is.
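
In miniature, the mechanical-work bookkeeping looks like this: the work per cycle is just the area the cycle encloses in the (p,V) plane. A toy Python sketch for a rectangular cycle (the black-hole physics enters through the equation of state and the heat flows, neither of which is modeled here):

    # Work per cycle = area enclosed by the cycle in the (p,V) plane.
    # For a rectangular cycle this is just a product; bigger engines can be
    # assembled by adding such rectangles together.
    def cycle_work(p_high, p_low, V_small, V_large):
        """W = integral of p dV around a rectangular cycle."""
        return (p_high - p_low) * (V_large - V_small)

    print(cycle_work(p_high=2.0, p_low=1.0, V_small=1.0, V_large=3.0))  # -> 2.0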

Now, I suspect that this whole business is an answer waiting for a question. I can't tell you what the question is. One place to look might be in the space of field theories that have such black holes as their holographic dual, but I'm the first to admit that [...] Click to continue reading this post

The post News from the Front, XII: Simplicity appeared first on Asymptotia.

by Clifford at February 10, 2016 10:33 PM

Emily Lakdawalla - The Planetary Society Blog

Curiosity update, sols 1218-1249: Digging in the sand at Bagnold Dunes
Curiosity has spent the last month sampling and processing dark sand scooped from the side of Namib Dune. The rover has now departed Namib and is preparing to cross the Bagnold dune field, while working to diagnose an anomaly with the CHIMRA sample handling mechanism.

February 10, 2016 06:21 PM

Tommaso Dorigo - Scientificblogging

Giddings: The 750 GeV Diphoton Resonance Is A Graviton
After the ATLAS and CMS collaborations disclosed their first Run 2 results on diphoton searches, less than two months ago, the realization that it would be impossible to keep up to date with all the theoretical ideas being put forth was immediate. The flood of papers discussing the 750 GeV bump was - and still is - too much to handle if reading papers is not your primary occupation. This is unfortunate, as many of my colleagues believe that the new tentative signal is real.

read more

by Tommaso Dorigo at February 10, 2016 09:51 AM

Quantum Diaries

The Problem with B.o.B. – Science and the flat earth
B.o.B.

Rap musician, B.o.B. (Image: Frazer Harrison/Getty Images for BMI)

Is the world flat?

That question was posed by popular rap musician B.o.B. on his Twitter account this past week, prompting angry, but comical video and rap responses by popular science communicator Neil deGrasse Tyson and his musician nephew.

What do we really know?

A few thousand years ago, Greek philosophers and Phoenician explorers began to cast doubt on the flat-earth model. They noted differences in star visibility and the sun’s trajectory that depended on the observer’s location, leading them to propose the earth was a sphere. Convinced by this data, as well as the roundness of earth’s shadow cast on the moon during a lunar eclipse, the Greek astronomer Eratosthenes went a step further to estimate the earth’s circumference in 240 BCE. Using trigonometry and shadows cast during the solstice, he came to within a few percent of the actual value. Not bad.
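
His arithmetic is easy to reproduce with the classic textbook numbers, a 7.2-degree shadow angle at Alexandria and roughly 800 km between Alexandria and Syene:

    # Eratosthenes' estimate with the classic textbook numbers.
    shadow_angle = 7.2        # degrees from vertical at Alexandria, solstice noon
    distance = 800.0          # km from Alexandria to Syene (approximate)

    circumference = 360.0 / shadow_angle * distance
    print(f"~{circumference:.0f} km")   # ~40000 km; the true value is ~40075 km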

Eratosthenes’ method for measuring the size of the earth

Image: National Geodetic Survey NOAA, Public Domain.

Evidence backing the round-earth model grew through time and was sufficient five centuries ago to convince sailors they would not fall off earth’s edges. Magellan’s expedition was the first we know of to circumnavigate the globe and live to tell about it. Even more convincing were the famous earthrise photos sent down from lunar orbit a few hundred years later. The evidence is overwhelming. So, what’s up with B.o.B.?

Yesterday evening, I had the privilege to discuss the science of the Large Hadron Collider at CERN with a group of 13- and 14-year-olds from Seward, Alaska, USA. They connected via the ATLAS Virtual Visit system to see the experiment and to ask questions about our research. As usual, there were a lot of excellent questions, and fellow CMS physicist Dave Barney and I did our best to answer them all. Then we got to:

“How do you understand things you can’t see?”

Only youth can ask a question so profound.

This started me thinking about our friend B.o.B., and it occurred to me that his skepticism is not so different from that of the student nor even of the scientists at CERN who hunted for the Higgs boson.

More than fifty years ago, an idea was formed by a group of theorists, including François Englert, Robert Brout, and Peter Higgs, essentially describing how fundamental particles attain mass. The proposed mechanism requires the existence of a pervasive, non-directional (we call it scalar) force field and its associated particle, now known as the Higgs boson. It became central to a new theory, called the Standard Model, used by physicists to describe the fundamental particles that make up matter and the forces that act upon them.

Apollo8-Earthrise

Earthrise from moon, shot by astronauts orbiting in Apollo 8 capsule. Image: NASA

The Standard Model, much like the round-earth model, proved itself over time. Just as sailors bet their lives that the earth was a sphere before seeing photos from space, physicists included the Higgs field in their theory and were able to make accurate predictions of the existence (and even the mass) of new particles before seeing images of the Higgs boson. But, we still asked:

Does the Higgs boson exist?

Yes, the empirical evidence was convincing, but just like Magellan, the astronauts, and B.o.B., we scientists wanted our photos. These finally came in 2012, in the form of high-energy proton collisions in the ATLAS and CMS detectors at CERN. Yes, there is something reassuring in seeing it with our own eyes (or detectors).

So, what’s the problem with B.o.B.? If scientists, explorers, and students have the right to be skeptical, why not a musician?

I don’t think Neil deGrasse Tyson is complaining that B.o.B. posed a question. Skepticism is key to the scientific process and questions should be asked. It is far better to ask questions than it is to blindly believe the authoritative figures who present “facts”. If you have doubts, by all means, ask!

Higgs Boson, ATLAS, Physics Events

Candidate Higgs boson decay to 2 photons. Image: ATLAS Experiment © 2011 CERN, CC-BY-SA-4.0

But, B.o.B. went further. He presented a theory (in this case, a very old one) as fact. And he did this without any serious evidence to back it up. This is irresponsible for anyone, but especially for someone who is seen as an authoritative figure by his fans, and moreover for someone who has the means and ability to know better.

We can take comfort in the fact that science is based on uncovering the truth and that truth ultimately reveals itself. But human progress depends on our ability to build upon well-established bricks of knowledge. Sure, we should check the solidity of those bricks from time to time, but let’s not waste effort trying to break them for no good reason.

As a physicist, I am often challenged by friends and family to explain the relevance of our work. So, when the opportunity came last fall to speak at TEDxTUM in Munich, I happily responded to that very question with a simple answer: We have no choice. Human survival depends on basic research. Without our drive to explore and to understand the world, our species would not still be here. We would have starved, been eaten, or died of disease, a long time ago. Hence the threat of B.o.B.

And B.o.B. is not alone.

Powerful people who would like to be world leaders are acting similarly or worse, attacking evidence-based science for the sake of political gain. And while a flat-earth conspiracy might be innocuous or even silly, those who deny important measurements, such as those of climate change, threaten our survival much more directly.

So, when scientists react to B.o.B. with words, images, or even song, they are not just defending their turf, they are expressing primal instincts. They are defending our species. And when individuals like B.o.B. threaten human survival, I suggest they watch their back. They might just get pushed off the edge of the earth.

A question of survival: Why we hunted the Higgs. (Video: TEDxTUM)

by Steven Goldfarb at February 10, 2016 08:35 AM

astrobites - astro-ph reader's digest

Moon Zoo: Counting lunar craters with “citizen science”

Title: The Moon Zoo citizen science project: Preliminary results for the Apollo 17 landing site
Authors: Roberto Bugiolacchi et al.
First Author’s Affiliation: Centre for Planetary Sciences at UCL/Birkbeck.

Scientific projects utilizing large datasets are increasingly relying on crowd sourcing to analyse their data. The Moon Zoo project (part of the same suite of projects as Galaxy Zoo, which is devoted to classifying galaxy morphologies) relies on this kind of “citizen science” to examine images of the lunar surface taken from the Lunar Reconnaissance Orbiter Camera. A census of craters and their sizes allows for the determination of the cratering rate on the Moon’s surface, the level of crater erosion and degradation (measured by looking at the variability of the circle sizes and locations), and estimates of the regolith depth. However, the effects of erosion and illumination make accurately identifying craters difficult for computers, and human eyes are generally superior for these sorts of tasks. The limiting factor, then, is the sheer number of craters (thousands upon thousands) relative to the number of researchers available to examine the data.

The Moon Zoo interface is fairly easy and intuitive to use. After a brief training tutorial, users are presented with images of the lunar surface, on which they can place markers to indicate the size and position of craters (Fig. 1). To validate the accuracy and reliability of the crowd sourced measurements, the authors of this paper conduct their own crater count on a set of images, and compare this to the data generated by the users.


Fig. 1: The Moon Zoo interface allows users to mark the location and size of craters by drawing circles. Additional options are available for the user to mark any other interesting features in the image. On average, each image is examined six times by different users.

While crowd sourcing is a cheap and efficient way to analyze large data sets, it is not without its shortcomings. Each of the ~9000+ Moon Zoo users provides on average 14 crater annotations. However, almost 75% of all users identified fewer than 10 craters, which shows a relatively low commitment rate. This brings into question whether the data generated by users are actually reliable. The authors set a minimum threshold of 20 crater annotations per user for the data to be used in any scientific analysis, which eliminates a significant fraction of users. Due to the lack of experience of most users, there is a lot of variation in the estimates of crater boundaries and locations. For example, one systematic error results from a significant number of users using the smallest default crater size marker to indicate the size of the smallest craters, a result which is clearly visible in the irregularities in the crater size distribution (Fig. 2). The authors also conclude that users should be sufficiently trained (i.e. through a tutorial) to ensure that their responses are reliable.


Fig. 2: Histogram showing the distribution of crater sizes (lower panel showing the percent deviation from the power law fit). The red spikes are a result of users selecting the smallest available size crater marker for a given level of zoom on an image, instead of measuring the true crater size.
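
The power-law comparison behind Fig. 2 is conceptually simple; here is a sketch of the idea on synthetic data (illustrative only, not the paper’s analysis):

    import numpy as np

    # Fit a power-law slope to a (synthetic) crater-diameter distribution in
    # log-log space; the data here are made up purely to illustrate the idea.
    rng = np.random.default_rng(1)
    diameters = 10.0 * (1.0 + rng.pareto(2.0, size=5000))   # meters, synthetic

    counts, edges = np.histogram(diameters, bins=np.logspace(1, 3, 25))
    centers = np.sqrt(edges[:-1] * edges[1:])               # geometric bin centers
    ok = counts > 0
    slope, _ = np.polyfit(np.log10(centers[ok]), np.log10(counts[ok]), 1)
    print(f"fitted slope: {slope:.2f}")
    # Artifacts like the marker-size spikes in Fig. 2 appear as bins that
    # deviate sharply from this straight-line fit.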

To improve the quality of responses and increase the retention rate of users, researchers involved in similar “citizen science” projects have proposed various incentives for the public to volunteer their time to these efforts. Some projects offer acknowledgements in scientific publications to users involved in significant discoveries, as well as gamification incentives to make the data analysis experience more fun and rewarding. Even if it’s not a perfect system, crowd sourced science still offers a huge leap in productivity given the limited number of specialists and experts available to work on these projects.

by Anson Lam at February 10, 2016 12:54 AM

February 09, 2016

Lubos Motl - string vacua and pheno

The utter insanity of Woit's Rutgers colloquium
I did my PhD at Rutgers University, the State University of New Jersey. Those were 4 interesting years – ending by the PhD defense on 9/11/2001, 9:30 am, some 50 miles from the Twin Towers.

Shortly before I came to Rutgers in Fall 1997 (not counting a visit in Spring 1997), it was a powerful thinking machine, arguably a top 5 place in string theory in the world. (This comment does not say that Rutgers is not good today, it's very good; and it does not imply that a new graduate student like me was the cause why Rutgers ceased to be at the absolute Olymp of theoretical physics, I was too small a master for such big changes. In the mid-to-late 1990s, it was simply natural for the richer universities like Harvard to attract folks from that "hot field" that did much of their recent important work at "slightly less obvious" top places such as Rutgers and Santa Barbara.)



Before the brains were absorbed by some of the "more expected" famous universities in the U.S., string theory faculty at Rutgers as a group were known – relatively to other physics professors at Rutgers – for their unusual contributions to science and also funding and they enjoyed some teaching advantages relatively to non-string faculty, and so on, a setup designed to further improve their efficient research. I was always imagining how hard such a setup would have been in Czechia, due to jealousy, a feature of the Czech national character.




Fast forward to 2016. Last week, the notorious critic of string theory Peter Voits (yes, this is the right spelling) gave a physics colloquium at Rutgers. Colloquia are held in the round underground building pictured above every Wednesday. The speakers are almost universally active physicists. Another exception occurred a week before Voits' colloquium when David Maiullo talked about his Broadway show.




The Rutgers website suggests that the host – the man who probably had the idea to invite Voits – was Herbert Neuberger, a lattice gauge theory guy. This hypothesis makes some sense; Voits' only papers about physics, those written in the mid 1980s, were about gauge theory, too.

Along with a string theorist whom I know very well and who is located in Asia, we agreed that the string theory Rutgers faculty were no warriors. And indeed, the reports say that no local string theorist has attended the anti-string colloquium and if he did, he remained completely invisible. If we insist on polite words, Mr Neuberger is quite a jerk. Can you imagine that a string theorist would organize a colloquium by a non-physicist who attacks e.g. lattice gauge theory?
The slides from Voits' colloquium are available as a PDF file. Let me go through them.
Needless to say, the first crazy thing about the talk was the title:
Not Even Wrong, ten years later: a view from mathematics on prospects for fundamental physics without experiment
Ten years after the publication of an anti-physics tirade (one of hundreds of similar tirades by the laymen you may find in the libraries or on the Internet) that no high-energy physicist has ever taken seriously, Voits and his host must think that it was such a big deal that it deserves a colloquium. Now, the following page (2/32) is the outline:
  • Advertisements: old book, blog, coming book
  • What happened to string unification
  • 2x about how mathematics helps to guide physics
  • Representation theory is useful for the Standard Model
Now, this is just plain sick. First, why should a fifth of a colloquium be dedicated to "advertisements", let alone advertisements that don't help the scientific research in any way? Is Prof Neuberger also planning to turn the physics.rutgers.edu website to a porn website?

The second point is said to be about the string unification – except that the speaker hasn't written a single paper (or any text that makes any sense or could earn a citation from a scientist) and there are many other ways to see that he is 100% unqualified to talk about these difficult matters, especially when it comes to advances that emerged in the recent decades (let alone recent years).

The remaining three bullets out of five want to convey the idea that both mathematics in general and representation theory are useful in physics and the Standard Model. What? Is this meant to be the topic of a colloquium? I understood the importance of mathematics in physics when I was 4 and the importance of representation theory in physics when I was 10. Every janitor who was allowed to clean my grad-student office had to know these basics, too. You must be joking, Sirs.

Page 3/32 makes the story of the anti-physics book even crazier. We learn that the book wasn't actually written 10 years ago; it was mostly written 15 years ago. Huge developments have taken place in string theory and theoretical physics in the recent 15 years. Even if the book were relevant for scientists back in 2001, and it obviously wasn't, it would have been outdated by today. So how can one possibly organize a colloquium in 2016 for which this book is meant to be one of the main pillars?

Page 4/32 shows a screenshot of the "Not Even Wrong" blog. Voits boasts that it has 1,500 blog posts (TRF has 6,600) and 40,000 comments (we have way over 100,000) and most of the 20,000 page views a day are by "robots" (maybe Voits' own robots). Now, why would anyone care? All this Internet traffic is completely negligible relatively to the most influential servers on the Internet. Why would someone talk about it at all? Why should the time of Rutgers students, postdocs, and professors be wasted by a mediocre website? Because it claims to have something to do with physics? It has nothing to do with the professional, serious physics.

Slide 5/32 promotes Joseph Conlon's book – quite embarrassing for Joseph. Page 6/32 says that Voits is writing a book about quantum mechanics. Given the fact that Voits misunderstands pretty much everything that is more complicated than a certain modest threshold, one can't expect much from that book.

On slides 7-8/32, we learn that Voits liked the years 1975-1979 and one of his achievements was to be an unpaid visitor at Harvard in 1987-1988. Wow. Who could possibly give a damn? I've attended dozens of colloquia by the Nobel prize winners but if the speaker or the host began to talk about some detailed affiliations, it would turn me off totally. Now, why should the Rutgers physics community suffer through a talk that lists unpaid visits by a crackpot that took place some 30 years ago?

Pages 9-12/32 include some popular-book-style introduction to string theory as understood in the 1980s, with 2 vague sentences about the 1990s and a purely non-technical comment about the recent years. Is this level of depth enough for a Rutgers physics colloquium these days?

Page 13/32 says that there is "hype about string theory" and uses a 17-year-old New York Times photograph of Lisa Randall as evidence. Now, Lisa's and Raman's finding was important in phenomenology; it wasn't quite string theory, just string-theory-related ideas; the article was rather sensible; it appeared 17 years ago; and physicists shouldn't get their knowledge about their field from the New York Times, anyway. So what the hell is the role that this slide could play in a physics colloquium in 2016?

Page 14/32 says that the multiverse may exist according to string theory and Voits states that "it is not science" and "it is dangerous" without a glimpse of a justification. Page 15/32 claims that there is the "end of science" and mentions Susskind's term "Popperazzi" for the religious cult claiming that some stupidly misinterpreted oversimplified ideas by a random philosopher should be worshiped as the most important thing by all physicists. If only Voits had at least invented something as catchy as "Popperazzi". He hasn't. He's done no physics for 30 years but even when it comes to talking points, he is purely stealing from others – whether it's Wolfgang Pauli, Leonard Susskind, or someone else. Is that enough for a physics colloquium?

Pages 16-17/32 inform us about the shocking thing that mathematics is a non-empirical science. Great to learn something new and deep. He also lists some random buzzwords from mathematics like "Riemannian geometry" but it remains absolutely unclear why he did so. Let me tell you why: all these buzzwords are meant to mask the fact that he is nothing else than an ignorant layman and crackpot.

On pages 19-20/32, we are invited to buy a "different vision" and "radical Platonism". Everyone knows what "Platonism" is, but what it means for it to be "radical" remains unclear – but it must be related to Lee Sm*lin's "mysticism", we learn. What? A slide says that the Standard Model works rather well. A janitor would be enough for that, too.

Page 21/32 lists things like lattice gauge theory and some nonperturbative electroweak theory but says nothing about those random phrases. On page 22/32, it's said that "quantum gravity could be much like the Standard Model", but it's not explained how this could be true. He suddenly jumps to the stringy multiverse again and says that it's "circular". Whether string theory implies a multiverse or not, there is obviously nothing circular about it.

Page 23/32 starts to mix the random buzzwords from representation theory such as the Dirac cohomology and categorification. On page 24/32, we're told that the momentum is related to translations, a thing that many high school students know, too. Voits has "nothing to say about the mysterious part, how does classical behavior emerge". Nothing is not too much to say about this foundational issue for someone who claims to be writing a book on quantum mechanics.

Page 25/32 escalates the crackpottery. He works at the level of basic definitions of a linear space or a commutator – the stuff approximately from the first undergraduate lecture on linear algebra – but he pretends that he has found something that could perhaps compete with string theory and maybe supersede it. What? This is just a collection of randomly mixed up elementary buzzwords and super-elementary mathematical expressions from the undergraduate linear algebra courses. A few more slides say some ill-defined things that try to pretend that Voits knows what the Dirac operator or category theory mean – except that it's self-evident that he doesn't actually understand these concepts.

The last page, 32/32, summarizes the talk. Ten years after the "string wars", string theory is failing even more than ever before, the audience was told by the stuttering critic of science. A problem is that this is clearly a totally untrue statement and the talk didn't contain anything at all that could substantiate this statement, especially not something that would be related to the recent 15 years in theoretical physics – developments that Mr Voits doesn't have the slightest idea about, not even at the popular-book level.

We "learn" that the Standard Model could be close to a theory of everything – yes, it is surely "somewhat close" (not "too close") but no more details are offered by Voits – and representation theory could be useful.

The second, key bullet of the summary says that if the number of available new experiments is limited, physicists must "look to mathematics for some guidance". Holy cow, but that's exactly what string theorists are doing and that's exactly what Voits and Sm*lin – and the brainwashed sheep who take these crackpots seriously – criticize string theory for at almost all times. And now he wants to recommend this "power of mathematics" as "his" recipe to proceed? Holy cow.

(Emil Martinec made a much better comment on this breathtaking cognitive dissonance of Mr Voits.)

The fact that a colloquium like that has been allowed at Rutgers looks like a serious breakdown of the system. Mr Neuberger should be given hard time but because I know most of the string theorists who are currently at Rutgers faculty, I don't believe that anything like that will actually take place. The tolerance for talks with the right "ideological flavor", despite their unbelievably lousy quality, has become a part of the political correctness that has conquered much of the Academia.

by Luboš Motl (noreply@blogger.com) at February 09, 2016 05:59 PM

Symmetrybreaking - Fermilab/SLAC

Neutrinos on a seesaw

A possible explanation for the lightness of neutrinos could help answer some big questions about the universe.

Mass is a fundamental property of matter, but there’s still a lot about it we don’t understand—especially when it comes to the strangely tiny masses of neutrinos. 

An idea called the seesaw mechanism proposes a way to explain the masses of these curious particles. If shown to be correct, it could help us understand a great deal about the nature of fundamental forces and—maybe—why there’s more matter than antimatter in the universe today.

Wibbly-wobbly massy-wassy stuff

The masses of the smallest bits of matter cover a wide range. Electrons are roughly 1800 times less massive than protons and neutrons, which are one hundred times less massive than the Higgs boson. Other rare beasts like the top quark are heavier still.

Then we have the neutrinos, which don’t fit in at all. 

According to the Standard Model of particles and forces that emerged in the 1970s, neutrinos were massless. Experiments seemed to concur. However, over the next two decades, physicists showed that neutrinos change their flavor, or type.

Neutrinos come in three varieties: electron, muon and tau. Think of them as Neapolitan ice cream: The strawberry is the electron neutrino; the vanilla is the muon neutrino; and the chocolate is the tau neutrino. 

By the late 1980s, physicists were reasonably good at scooping out the strawberry; most experiments were designed to detect electron neutrinos only. But they were seeing far fewer than theory predicted they should. 

By 1998, researchers discovered the missing neutrinos could be explained by oscillation—the particles were changing from one flavor to another. By figuring out how to detect the other flavors, they showed they could account for the remainder of the missing neutrinos. 

This discovery forced them to reconsider the mass of the neutrino, since neutrinos can oscillate only if they have a tiny—but nonzero—mass.
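
The standard two-flavor oscillation formula makes the connection explicit: the oscillating term is driven by the mass-squared difference, so it vanishes if the masses are zero. A small sketch, with an atmospheric-scale splitting chosen purely for illustration:

    import math

    # Two-flavor oscillation probability: P = sin^2(2*theta) * sin^2(1.27*dm2*L/E)
    # with dm2 in eV^2, L in km, E in GeV. If dm2 = 0, the oscillation vanishes.
    def osc_prob(theta, dm2, L_km, E_GeV):
        return math.sin(2 * theta)**2 * math.sin(1.27 * dm2 * L_km / E_GeV)**2

    print(osc_prob(theta=0.6, dm2=2.5e-3, L_km=500, E_GeV=1.0))   # nonzero
    print(osc_prob(theta=0.6, dm2=0.0,    L_km=500, E_GeV=1.0))   # exactly 0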

 Today, “just from experimental facts, we know that neutrino masses are way smaller compared to all the other elementary [matter particle] masses,” says Mu-Chun Chen, a theoretical physicist at the University of California, Irvine. 

We don’t yet know exactly how much mass they have, but astronomical observations show they’re likely around a millionth of the mass of an electron—or even less. And this small mass could be a product of the seesaw mechanism. 

Seesaw Mechanism Animation
Artwork by Sandbox Studio, Chicago with Ana Kova

I am not left-handed!

To visualize another important property of neutrinos, make a “thumbs-up” gesture with your left hand. Your fingers will curl the way the neutrino rotates, and your thumb will point in the direction it travels. This combination makes for a “left-handed” particle. Antineutrinos, the antimatter version of neutrinos, are right-handed: Take your right hand and make a thumbs-up to show the relation between their spin and motion.

Some particles such as electrons or quarks don’t spin in any particular direction relative to the way they move; they are neither purely right- nor left-handed. So far, scientists have only ever observed left-handed neutrinos. 

But the seesaw mechanism predicts that there are two kinds of neutrinos: the light, left-handed ones we know and—on the other end of the metaphorical seesaw—heavy, right-handed neutrinos that we’ve never seen. The seesaw itself is a ratio: the higher the mass of the right-handed neutrino, the lower the mass of the left-handed neutrinos. Based on experiments, these right-handed neutrinos would be extraordinarily massive, perhaps 10^15 (one quadrillion) times heavier than a proton.
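
The arithmetic behind the metaphor fits in one line: the light neutrino mass goes roughly like m_D²/M_R, where m_D is an ordinary (electroweak-scale) Dirac mass and M_R is the heavy right-handed mass. With illustrative numbers:

    # Type-I seesaw estimate: m_light ~ m_D**2 / M_R (illustrative numbers).
    m_D = 1e11      # Dirac mass scale in eV (~100 GeV, the electroweak scale)
    M_R = 1e24      # right-handed neutrino mass in eV (~10^15 GeV)

    print(f"m_light ~ {m_D**2 / M_R:.0e} eV")   # ~1e-2 eV: tiny, as observed

Pushing the right-handed mass up pushes the light mass down, which is exactly the seesaw.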

And there’s more: The seesaw mechanism predicts that if right-handed neutrinos exist, then they would be their own antiparticles. This could give us a clue to how our universe came to be full of matter. 

One idea is that in the first fraction of a second after the big bang, the universe produced just a tiny bit more matter than antimatter. After most particles annihilated with their antimatter counterparts, that imbalance left us with the matter we have today. Most of the laws of physics don’t distinguish between matter and antimatter, so something beyond the Standard Model must explain the asymmetry. 

Particles that are their own antiparticles can produce situations that violate some of the normal rules of physics. If right-handed neutrinos—which are their own antineutrinos—exist, then neutrinos could present the same kind of symmetry violation that might have happened for other types of matter. Exactly how that carries over to matter other than neutrinos, though, is still an area of active research for Chen and other physicists.

Searching for the seesaw

Scientists think they have yet to see these heavy right-handers for two reasons. First, the only force they know to act on neutrinos is the weak force, and the weak force acts only on left-handed particles. Right-handed neutrinos might not interact with any of the known forces.

Second, right-handed neutrinos would be too massive to be stable in our universe, and they would require too much energy to be created in even the most powerful particle accelerator. However, these particles could leave footprints in other experiments.

Today, scientists are studying the light, left-handed neutrinos that we can see to look for signs that could give us a verdict on the seesaw mechanism.

For one, they’re looking to see if neutrinos are their own antiparticles. That wouldn’t necessarily mean that the seesaw mechanism is true, but finding it would be a big point in the seesaw mechanism’s favor.

The seesaw mechanism goes hand-in-hand with grand unified theories—theories that unite the strong, weak and electromagnetic theory into a single force at high energies. If scientists find evidence of the seesaw mechanism, they could learn important things about how the forces are related.

The seesaw mechanism is the most likely way to explain how neutrinos got their mass. However, frustratingly, the nature of the explanation pushes many of its testable consequences out of experimental reach. 

The best hope lies in persistent experimentation, and—as with the discovery of neutrino oscillation in the first place—hunting for anything that doesn’t quite fit expectations.

by Matthew R. Francis at February 09, 2016 03:06 PM

astrobites - astro-ph reader's digest

So Much Hot (Jupiter) Diversity, So Little Time

Title: A continuum from clear to cloudy hot-Jupiter exoplanets without primordial water depletion 

Authors: Sing, D., Fortney, J., Nikolov, N., Wakeford, H., et al.

First Author’s Affiliation: University of Exeter

Paper status: Published in Nature

Almost every astrophysical process we know of was discovered by observing a large census of seemingly identical objects. The famous Hertzsprung-Russell diagram, which shows the relationship between temperature and luminosity in stars and gives insights into stellar evolution, was only uncovered when Hertzsprung and Russell were able to utilize large-scale photographic spectroscopy surveys to look at and compare several hundred stars. They didn’t fully characterize each individual star. Instead, they looked at two simple and measurable features: apparent magnitude and the strengths of a couple of absorption features as a proxy for temperature. These measurements, for a stellar sample of 1 or 2, would not have yielded scientifically interesting results, but when compared to 100 others, patterns started to emerge.

In exoplanet science, we are at what I’ll call the “pre-HR diagram” stage. We have only been able to detect a few molecular absorption features in the atmospheres of just a handful of planets. Water absorption, for example, has been detected in the atmospheres of hot Jupiter exoplanets. The strength of these absorption features, though, has varied from planet to planet and has led various authors to make predictions about why this is. Maybe the planets with low water content were formed in a part of the disk where water had been depleted? Maybe the water is actually there, but clouds are muting the absorption features? With so few observations, it’s been hard to answer these questions. The authors of today’s paper observed the atmospheres of ten hot Jupiter planets, all orbiting different host stars. Of course, ten planets won’t make the modern-day exoplanet HR diagram, but it does give us ten data points on what has previously been a blank canvas.

Below, Figure 1 shows the transmission spectra of ten different hot Jupiter atmospheres observed with the Hubble Space Telescope. For more info on how we get these spectra, I’d suggest reading this previous bite. These planets range in temperature from 960 – 2510 K, in mass from 0.21 – 1.50 times the mass of Jupiter, and in period from 0.79 – 4.46 days. For perspective, Mercury orbits the Sun in an 88-day period and has an average temperature of 440 K. There is nothing in our Solar System remotely comparable to these ten planets. To showcase similarities and differences between each exoplanet’s atmospheric spectra, the authors have plotted everything on the same figure. The solid colored lines in the figures are the best-fit atmospheric models, while the colored dots showcase the actual data. To the untrained eye, these might seem a bit intimidating. But, if you know what you are looking for, you don’t even need complex models to gain insights into these planetary atmospheres:

Transmission spectra of hot Jupiter planets observed with the Hubble Space Telescope and Spitzer. Solid colored lines show the atmospheric models while the colored dots show the observed data.

Absorption Features: 

Absorption features are probably the most striking feature of a planet’s spectrum because they jump off what we call the “continuum” of the spectrum. Let’s start with WASP-17b in Figure 1. Try for a moment to ignore the solid line and focus on the orange dots. You should notice from the data points alone that there is probably sodium and water absorption in the atmosphere of WASP-17b. The reason I suggested ignoring the solid colored model is that although the model indicates the presence of potassium, no potassium was actually detected. This can get tricky.

Now, glance at the other spectra and try to figure out which other planets also contain sodium, which contain potassium, and which contain water. Bear in mind that, like fingerprints, molecules have unique wavelengths at which they absorb. This means that any feature you see at 0.6 microns will be Na, any feature you see at 0.78 microns will be K, and any feature you see at 1.5 microns will be water.
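
As a toy illustration of this fingerprinting logic (my own sketch, not the authors' analysis; the matching tolerance is made up for the example), one could tag candidate features by wavelength:

    # Toy spectral fingerprinting: match a feature's wavelength (microns)
    # to a candidate absorber, using the three lines quoted above.
    # Illustrative only -- real identifications fit full line lists.
    FEATURES = {0.6: "Na", 0.78: "K", 1.5: "H2O"}

    def identify(wavelength_um, tol=0.05):
        """Return species with an absorption feature within tol microns."""
        return [name for w, name in FEATURES.items()
                if abs(w - wavelength_um) <= tol]

    print(identify(0.61))  # ['Na']
    print(identify(1.5))   # ['H2O']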

Sodium was detected in five planetary atmospheres, potassium was detected in four planetary atmospheres, and water was detected in five. How well did you stack up?

Detecting the features is only half the battle. You might’ve noticed that while water is present in the atmospheres of both WASP-17b and HD 209458b, their features look remarkably different. HD 209458b’s looks a bit muted. Why is that? And what about those planets that exhibit no features at all, like WASP-12b? What are their atmospheres made of?

Clouds and Hazes 

On Earth, few (if any) days go by with blue skies over the entire planet. The bottom line is, every planet or moon in our Solar System with an atmosphere has some degree of clouds or hazes. It is, therefore, no surprise that we see indicators of clouds and hazes in the atmospheres of exoplanets. I should pause here and note that there are very different interpretations and definitions of what clouds and hazes are. An Earth scientist might have a different definition than an exoplanet scientist, so I should clarify that I am using the definition given by the authors. In the simplest sense, a cloud is a “grey opacity source”. Imagine holding a prism up to a light and making a rainbow on your wall. If you were to hold a plain, grey, no-color filter between the prism and the wall, the only difference you would observe would be a subtle dimming across your entire rainbow: a homogeneous dimming of light (a grey opacity source).

Hazes operate quite differently because they consist of tiny sub-micron-sized particles, all capable of scattering light in various directions (called Rayleigh scattering). Ever wonder why the sky is blue? Rayleigh scattering is more efficient at short wavelengths (the blue end of the spectrum), so the sunlight that gets scattered down to the Earth is predominantly blue. Going back to our prism analogy, if you now replaced the grey filter with a dense mat of tiny sub-micron-sized particles, you’d see the blue end of the rainbow increase in intensity.
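
Here is a minimal numerical sketch of that contrast (my own illustration with arbitrary normalization, not anything from the paper): a grey cloud dims all wavelengths equally, while Rayleigh scattering strength grows as 1/λ⁴ toward the blue:

    # Grey cloud vs. Rayleigh haze: wavelength dependence only; no physics
    # beyond the 1/lambda^4 scaling, and the normalization is arbitrary.
    for lam_um in (0.3, 0.5, 1.0, 2.0, 5.0):
        grey = 1.0                        # flat: dims every color equally
        rayleigh = (0.5 / lam_um) ** 4    # normalized to 1 at 0.5 microns
        print(f"{lam_um:4.1f} um   grey = {grey:.2f}   rayleigh = {rayleigh:.4f}")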

Let’s return to Figure 1. Hazes should present themselves as a systematic increase in intensity toward the blue end of the spectrum, and clouds should present themselves as a dimming throughout the entire spectrum. WASP-12b clearly exhibits a thick deck of clouds, while the planets from WASP-31b all the way down to WASP-6b exhibit some degree of haze.

What did we learn? 

From just looking comparatively at the spectra of these ten planets, some fundamental questions about planetary systems can be addressed:

  1. A muted water feature does not necessarily mean the atmosphere is depleted of water vapor. Instead, it is more likely an indicator of clouds.
  2. Not ALL hot Jupiters have a massive cloud deck
  3. Not ALL hot Jupiters have thick hazes

We have placed ten dots on our exoplanet HR diagram and laid the groundwork for how future missions, such as the James Webb Space Telescope, can add to the field. In the near future we will be able to double, quadruple, or even centuple this sample size and gain a deeper understanding of planetary atmospheres, atmospheric chemistry, and planet formation.

by Natasha Batalha at February 09, 2016 03:35 AM

Clifford V. Johnson - Asymptotia

Staring at Stairs…

These stairs probably do not conform to any building code, but I like them anyway, and so they will appear in a paper I'll submit to the arxiv soon.

They're part of a nifty algorithm I thought of on Friday that I like rather a lot.

More later.

-cvj Click to continue reading this post

The post Staring at Stairs… appeared first on Asymptotia.

by Clifford at February 09, 2016 12:03 AM

February 08, 2016

Sean Carroll - Preposterous Universe

Guest Post: Grant Remmen on Entropic Gravity

“Understanding quantum gravity” is on every physicist’s short list of Big Issues we would all like to know more about. If there’s been any lesson from the last half-century of serious work on this problem, it’s that the answer is likely to be something more subtle than just “take classical general relativity and quantize it.” Quantum gravity doesn’t seem to be an ordinary quantum field theory.

In that context, it makes sense to take many different approaches and see what shakes out. Alongside old stand-bys such as string theory and loop quantum gravity, there are less head-on approaches that try to understand how quantum gravity can really be so weird, without proposing a specific and complete model of what it might be.

Grant Remmen, a graduate student here at Caltech, has been working with me recently on one such approach, dubbed entropic gravity. We just submitted a paper entitled “What Is the Entropy in Entropic Gravity?” Grant was kind enough to write up this guest blog post to explain what we’re talking about.

Meanwhile, if you’re near Pasadena, Grant and his brother Cole have written a musical, Boldly Go!, which will be performed at Caltech in a few weeks. You won’t want to miss it!


One of the most exciting developments in theoretical physics in the past few years is the growing understanding of the connections between gravity, thermodynamics, and quantum entanglement. Famously, a complete quantum mechanical theory of gravitation is difficult to construct. However, one of the aspects that we are now coming to understand about quantum gravity is that in the final theory, gravitation and even spacetime itself will be closely related to, and maybe even emergent from, the mysterious quantum mechanical property known as entanglement.

This all started several decades ago, when Hawking and others realized that black holes behave in many ways like garden-variety thermodynamic systems, with a temperature, an entropy, etc. Most importantly, the black hole’s entropy is equal to its area divided by 4 times Newton’s constant. Attempts to understand the origin of black hole entropy, along with key developments in string theory, led to the formulation of the holographic principle – see, for example, the celebrated AdS/CFT correspondence – in which quantum gravitational physics in some spacetime is found to be completely described by some special non-gravitational physics on the boundary of the spacetime. In a nutshell, one gets a gravitational universe as a “hologram” of a non-gravitational universe.
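
For reference, the entropy–area relation mentioned above is the Bekenstein–Hawking formula, which in full units reads
\[
S_{BH} = \frac{k_B c^3 A}{4 G \hbar},
\]
reducing to \(S = A/4G\) in natural units where \(\hbar = c = k_B = 1\).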

If gravity can emerge from, or be equivalent to, a set of physical laws without gravity, then something special about that non-gravitational physics has to make it happen. Physicists have now found that that special something is quantum entanglement: the special correlations among quantum mechanical particles that defy classical description. As a result, physicists are very interested in how to derive the dynamics describing how spacetime is shaped and moves – Einstein’s equation of general relativity – from various properties of entanglement. In particular, it’s been suggested that the equations of gravity can be shown to come from some notion of entropy. As our universe is quantum mechanical, we should think about the entanglement entropy, a measure of the degree of correlation of quantum subsystems, which for thermal states matches the familiar thermodynamic notion of entropy.

The general idea is as follows: Inspired by black hole thermodynamics, suppose that there’s some more general notion, in which you choose some region of spacetime, compute its area, and find that when its area changes this is associated with a change in entropy. (I’ve been vague here as to what is meant by a “change” in the area and what system we’re computing the area of – this will be clarified soon!) Next, you somehow relate the entropy to an energy (e.g., using thermodynamic relations). Finally, you write the change in area in terms of a change in the spacetime curvature, using differential geometry. Putting all the pieces together, you get a relation between an energy and the curvature of spacetime, which if everything goes well, gives you nothing more or less than Einstein’s equation! This program can be broadly described as entropic gravity and the idea has appeared in numerous forms. With the plethora of entropic gravity theories out there, we realized that there was a need to investigate what categories they fall into and whether their assumptions are justified – this is what we’ve done in our recent work.
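
Schematically, the chain of reasoning looks like this (a cartoon of the logic, in natural units, not the detailed derivation of any particular paper): assign an entropy to an area, relate that entropy to an energy thermodynamically, and out comes Einstein’s equation:
\[
\delta S = \frac{\delta A}{4G}, \qquad \delta Q = T\,\delta S \quad\Longrightarrow\quad R_{\mu\nu} - \tfrac{1}{2}R\,g_{\mu\nu} + \Lambda g_{\mu\nu} = 8\pi G\,T_{\mu\nu}.
\]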

In particular, there are two types of theories in which gravity is related to (entanglement) entropy, which we’ve called holographic gravity and thermodynamic gravity in our paper. The difference between the two is in what system you’re considering, how you define the area, and what you mean by a change in that area.

In holographic gravity, you consider a region and define the area as that of its boundary, then consider various alternate configurations and histories of the matter in that region to see how the area would be different. Recent work in AdS/CFT, in which Einstein’s equation at linear order is equivalent to something called the “entanglement first law”, falls into the holographic gravity category. This idea has been extended to apply outside of AdS/CFT by Jacobson (2015). Crucially, Jacobson’s idea is to apply holographic mathematical technology to arbitrary quantum field theories in the bulk of spacetime (rather than specializing to conformal field theories – special physical models – on the boundary as in AdS/CFT) and thereby derive Einstein’s equation. However, in this work, Jacobson needed to make various assumptions about the entanglement structure of quantum field theories. In our paper, we showed how to justify many of those assumptions, applying recent results derived in quantum field theory (for experts, the form of the modular Hamiltonian and vacuum-subtracted entanglement entropy on null surfaces for general quantum field theories). Thus, we are able to show that the holographic gravity approach actually seems to work!

On the other hand, thermodynamic gravity is of a different character. Though it appears in various forms in the literature, we focus on the famous work of Jacobson (1995). In thermodynamic gravity, you don’t consider changing the entire spacetime configuration. Instead, you imagine a bundle of light rays – a lightsheet – in a particular dynamical spacetime background. As the light rays travel along – as you move down the lightsheet – the rays can be focused by curvature of the spacetime. Now, if the bundle of light rays started with a particular cross-sectional area, you’ll find a different area later on. In thermodynamic gravity, this is the change in area that goes into the derivation of Einstein’s equation. Next, one assumes that this change in area is equivalent to an entropy – in the usual black hole way with a factor of 1/(4 times Newton’s constant) – and that this entropy can be interpreted thermodynamically in terms of an energy flow through the lightsheet. The entropy vanishes from the derivation and the Einstein equation almost immediately appears as a thermodynamic equation of state. What we realized, however, is that what the entropy is actually the entropy of was ambiguous in thermodynamic gravity. Surprisingly, we found that there doesn’t seem to be a consistent definition of the entropy in thermodynamic gravity – applying quantum field theory results for the energy and entanglement entropy, we found that thermodynamic gravity could not simultaneously reproduce the correct constant in the Einstein equation and in the entropy/area relation for black holes.

So when all is said and done, we’ve found that holographic gravity, but not thermodynamic gravity, is on the right track. To answer our own question in the title of the paper, we found – in admittedly somewhat technical language – that the vacuum-subtracted von Neumann entropy evaluated on the null boundary of small causal diamonds gives a consistent formulation of holographic gravity. The future looks exciting for finding the connections between gravity and entanglement!

by Sean Carroll at February 08, 2016 09:42 PM

Lubos Motl - string vacua and pheno

Compactified M-theory and LHC predictions
Guest blog by Gordon Kane

I want to thank Luboš for suggesting that I explain the compactified M-theory predictions of the superpartner masses, particularly for the gluino that should be seen at LHC in Run II. I’ll include the earlier Higgs boson mass and decay branching ratio predictions as well. I’ll only give references to a few papers that allow the reader to see more details of derivations and of calculated numbers, plus a few of the original papers that established the basic compactification, usually just with arXiv numbers so the interested reader can look at them and trace the literature, because this is a short explanation only focused on the LHC predictions. I apologize to others who could be referenced. Before a few years ago it was not possible to use compactified string/M-theories to predict superpartner masses. All “predictions” were based on naturalness arguments, and turned out to be wrong.

String/M-theories must be formulated in 10 or 11 dimensions to give consistent quantum theories of gravity. In order to examine their predictions for our 4D world, they obviously must be projected onto 4D, a process called “compactification”. Compactified string/M-theories exhibit gravity, plus many properties that characterize the Standard Model of particle physics. These include Yang-Mills gauge theories of forces (such as \(SU(3)_{\rm color} \times SU(2)_{\rm electroweak}\times U(1)\)); chiral quarks and leptons (so parity violation); supersymmetry derived, not assumed; softly broken supersymmetry; hierarchical quark masses; families; moduli; and more. Thus they are attractive candidates for exploring theories extending the Standard Model.

At the present time which string/M-theory is compactified (Heterotic or Type II or M-theory etc), and to what matter-gauge groups, is not yet determined by derivations or principles. Following a body of work done in the 1995-2004 era [1,2,3,4,5,6,7], my collaborators and I have pursued compactifying M-theory. The 11D M-theory is compactified on a 7D manifold of \(G_2\) holonomy, so 7 curled up small dimensions and 3 large space ones. We assume appropriate \(G_2\) manifolds exist – there has been a lot of progress via mathematical study of such manifolds in recent years, including workshops. For M-theory it is known that gauge matter arises from singular 3-cycles in the 7D manifold [3], and chiral fermions from conical singularities on the 7D manifold [4]. Following Witten [5], we assume compactification to an \(SU(5)\)-MSSM. Other alternatives can be studied later. Having in mind the goal of finding \({\rm TeV}\) physics arising from a Planck-scale compactification, and knowing that fluxes (the generalization of electromagnetic fields to extra dimensional worlds) have dimensions and therefore naturally lead to physics near the Planck scale but not near a \({\rm TeV}\), we compactify in a fluxless sector. With the LHC data coming we focused on moduli stabilization, supersymmetry breaking and electroweak symmetry breaking.

In order to calculate in the compactified theory, we need the superpotential, the Kähler potential and the gauge kinetic function. To learn the features characteristic of the theory, we take the generic Kähler potential and gauge kinetic function. The moduli superpotential is a sum of non-perturbative terms, because each complex modulus has an axionic imaginary part with a shift symmetry [8,9,10]. We do most of the calculations with two superpotential terms, since that is sufficient to guarantee that supergravity approximations work well, and we can find semi-analytic results. When it matters we check with numerical work for more terms in the superpotential. The signs of the superpotential terms are determined by axion stabilization [8,9,10]. We use the known generic Kähler potential [6] and gauge kinetic function [7]. By using the generic theory we find the natural predictions of such a theory, with no free parameters. This is very important – if one introduces extra terms by hand, say in the Kähler potential, predictivity is lost.

In addition to the above assumptions we assume the lack of a solution to the cosmological constant problem does not stop us from making reasonable predictions. Solving the CC problems would not help us learn the gluino or Higgs boson mass, and not solving the CC problems does not prevent us from calculating the gluino or Higgs boson mass. Eventually this will have to be checked.

We showed that the M-theory compactification stabilized all moduli and gave a unique de Sitter vacuum for a given manifold, simultaneously breaking supersymmetry. Moduli vevs and masses are calculable. We calculate the supersymmetry soft-breaking Lagrangian at the compactification scale. Then we have the 4D softly broken supergravity quantum field theory, and can calculate all the predictions of the fully known parameter-free soft-breaking Lagrangian. The theory has many solutions with electroweak symmetry breaking.

We also need to have the \(\mu\) parameter in the theory. That is done following the method of Witten [5], who pointed out a generic discrete symmetry in the compactified M-theory that implied \(\mu=0\). We recognized that stabilizing the moduli broke that symmetry, so \(\mu\approx 0\). Since \(\mu\) would vanish if either supersymmetry were unbroken or the moduli not stabilized, its value should be proportional to typical moduli vevs (which we calculated to be about \(1/10\) or \(1/20\) of the Planck scale) times the gravitino mass, so \(\mu\approx 3\TeV\). Combining this with the electroweak symmetry breaking conditions gives \(\tan\beta\approx 5\).
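
As a back-of-the-envelope check of that estimate (my arithmetic, using only the numbers quoted in this post):

    # mu ~ (moduli vev as a fraction of the Planck scale) x gravitino mass.
    # Quoted inputs: vevs ~ 1/10 to 1/20 of M_pl, m_3/2 ~ 35-50 TeV.
    for vev_fraction in (1 / 10, 1 / 20):
        for m32_TeV in (35, 50):
            mu = vev_fraction * m32_TeV
            print(f"vev fraction {vev_fraction:.2f}, m_3/2 {m32_TeV} TeV -> mu ~ {mu:.1f} TeV")

All four combinations land in the 1.8–5 TeV range, consistent with \(\mu\approx 3\TeV\).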

The resulting model (let’s call it a model even though it is a real theory and has no adjustable parameters, since we made the assumptions about compactifying to the \(SU(5)\)-MSSM, using the generic Kähler potential and gauge kinetic function, and estimating \(\mu\)) has a number of additional achievements. The lightest modulus can generate both the matter asymmetry and the dark matter when it decays, and thus their ratio. The moduli dominate the energy density of the universe soon after the end of inflation, so there is a non-thermal cosmological history. Axions are stabilized and there is a solution to the strong CP problem. There are no flavor or CPV problems, and EDMs are predicted to be small, below current limits, since the soft-breaking Lagrangian at the high scale is real at tree level, and the RGE running is known [14]. I mention these aspects to illustrate that the model is broadly relevant, not only to LHC predictions.

The soft-breaking Lagrangian contains the terms for the Higgs potential, \(M_{H,u}\) and \(M_{H,d}\) at the high scale. At the high scale all the scalars are about equal to the gravitino mass, about \(40\TeV\) (see below). All the terms needed for the RGE running are also calculated, so they can be run down to the \({\rm TeV}\) scale. \(M_{H,u}\) runs rapidly, down to about a \({\rm TeV}\) at the \({\rm TeV}\) scale. One can examine all the solutions with electroweak symmetry breaking, and finds they all have the form of the well-known two Higgs doublet “decoupling sector”, with one light Higgs and other physical Higgs bosons whose mass is about equal to the gravitino mass. For the decoupling sector the Higgs decay branching ratios are equal to the Standard Model ones except for small loop corrections, mainly the chargino loop. The light Higgs mass is calculated by the “match and run” technique, using the latest two and three loop contributions for heavy scalars, etc., and the light Higgs mass for all solutions is \(126.4\GeV\). This was done before the LHC data (arXiv:1112.1059 and reports at earlier meetings), though that doesn’t matter since the calculation does not depend on anything that changes with time. The RGE calculation has been confirmed by others.

The value of the gravitino mass follows from gaugino condensation and the associated dimensional transmutation. The M-theory hidden sectors generically have gauge groups (and associated matter) of various ranks. Those with the largest gauge groups will run down fastest, and their gauge coupling will get large, leading to condensates, analogous to how QCD forms the hadron spectrum but at a much higher energy scale. This scale, call it \(\Lambda\), is typically calculated to be about \(10^{14}\GeV\). The superpotential \(W\) has dimensions of mass cubed, so \(W\sim\Lambda^3\). The gravitino mass is
\[
M_{3/2}=\frac{e^{K/2}W}{M_{pl}^2}\approx\left(\frac{\Lambda}{M_{pl}}\right)^3\cdot \frac{M_{pl}}{V_3}
\]
since \(e^{K/2}\sim 1/V_3\). The factor \((\Lambda/M_{pl})^3\) takes us from the Planck scale down a factor \(10^{-12}\), and including the calculable volume factor gives \(M_{3/2}\approx 50\TeV\). This result is generic and robust for the compactified M-theory. It predicts that scalars (squarks, sleptons, \(M_{H,u}\), \(M_{H,d}\)) are of order \(50\TeV\) at the high scale, before RGE running.

The suppression of the gaugino masses from the gravitino scale to the \({\rm TeV}\) scale is completely general (Acharya et al, hep-th/0606262; Phys.Rev.Lett 97(2006)191601). The supergravity expression for the gaugino masses, \(M_{1/2}\), is a sum of terms each given by an F-term times the derivative of the visible sector gauge kinetic function with respect to each F-term. The visible sector gauge kinetic function does not depend on the chiral fermion F-terms, so the associated derivative vanishes, and \(M_{1/2}\) is proportional to the moduli F term generated by gaugino condensation in the hidden sector 3-cycles. The ratio of the gaugino condensate F-term to the chiral fermion F-term is approximately the ratio of volumes, \(V_3/V_7\), of order 1/40, for appropriate dimensionless units. \(V_7\) determines the gravitino mass but not \(M_{1/2}\). Let’s finally turn to the gaugino masses. The reader should understand now that the prediction is not just a “little above the limits”, but follows from a generic, robust calculation. Semi-quantitatively, the gluino mass is \([(\Lambda/M_{pl})^3/V_7]M_{pl}\).
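
A rough numerical sketch of these suppressions (my own round numbers, chosen only to be consistent with the scales quoted in this post, not taken from the papers):

    # Order-of-magnitude check of the gravitino and gluino mass estimates.
    # Round, illustrative inputs consistent with the text above.
    M_pl = 1e18          # Planck scale in GeV (round number)
    Lam = 1e14           # hidden-sector condensation scale in GeV
    V3 = 20.0            # 3-cycle volume factor (illustrative)
    V7 = 40 * V3         # per the quoted ratio V3/V7 ~ 1/40

    suppression = (Lam / M_pl) ** 3               # = 1e-12, as stated
    m32_TeV = suppression * M_pl / V3 / 1e3       # gravitino / scalar scale
    gluino_TeV = suppression * M_pl / V7 / 1e3    # extra gaugino suppression

    print(f"(Lam/M_pl)^3 = {suppression:.0e}")    # 1e-12
    print(f"m_3/2  ~ {m32_TeV:.0f} TeV")          # ~50 TeV
    print(f"gluino ~ {gluino_TeV:.2f} TeV")       # ~1 TeV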

Then the gaugino masses with the suppression described above are generically about \(1\TeV\). Detailed calculation, using the Higgs boson mass to pin down the gravitino mass more precisely (giving \(M_{3/2}=35\TeV\)) then predicts the gluino mass to be about \(1.5\TeV\), the wino mass \(614\GeV\), and the LSP bino about \(450\GeV\) [12]. These three states can be observed at LHC Run II but none of the other superpartners should be seen in Run II (also an important prediction). The higgsinos and squarks can be seen at an \(\sim 100\TeV\) collider via squark-gluino associated production [12,13].

The LHC gluino production cross section is \(10\)-\(15\,{\rm fb}\) [12]. Note that for squarks and gluinos of equal mass the squark-exchange contribution to gluino production is significant, so the usual cross section claimed for gluino production is larger than our prediction, in which the squarks are heavy. Simplified searches using larger cross sections will overestimate limits. Surprisingly, experimental groups and many phenomenologists have reported highly model-dependent limits much larger than the correct ones for the compactified M-theory, as if those limits were general. The wino pair production cross section is also of order \(15\,{\rm fb}\). The wino has nearly 100% branching ratio to bino + higgs, which is helpful for detection. Gluinos decay via the usual virtual squarks about 45% into first- and second-family quarks and 55% into third-family quarks, so simplified searches will overestimate limits. Branching ratios and signals are explained in [12]. The LHC t-tbar cross section is about \(4500\,{\rm fb}\), so it gives the main background (diboson production gives the next-worst background). Background studies should of course be done by the experimenters, using realistic branching ratios so as not to be misleading. We estimate that seeing a \(3\sigma\) signal for a \(1.5\TeV\) gluino will take over \(40\,{\rm fb}^{-1}\) of integrated luminosity at the LHC, so perhaps it can be seen by or during fall 2016 if the luminosity accumulates sufficiently rapidly.
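
For a sense of scale (my arithmetic, using only the cross sections and luminosity quoted above, before any selection cuts):

    # Raw expected event counts: N = cross-section x integrated luminosity.
    lumi_fb_inv = 40.0  # fb^-1, the quoted luminosity for a 3-sigma gluino signal
    for process, sigma_fb in (("gluino pairs", 12.5),   # quoted 10-15 fb
                              ("wino pairs", 15.0),
                              ("t-tbar background", 4500.0)):
        print(f"{process:18s}: ~{sigma_fb * lumi_fb_inv:,.0f} events")

which makes clear why digging a ~500-event signal out of a ~180,000-event background requires careful use of branching ratios and kinematics.
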
  1. E. Witten, hep-th/9503124; Nucl. Phys. B443.
  2. G. Papadopoulos and P. Townsend, hep-th/9506150.
  3. B. Acharya, hep-th/9812205.
  4. B. Acharya and E. Witten, hep-th/0109152.
  5. E. Witten, hep-ph/0201018.
  6. C. Beasley and E. Witten, hep-th/0203061.
  7. A. Lukas and D. Morris, hep-th/0305078.
  8. B. Acharya, K. Bobkov, G. Kane, P. Kumar and D. Vaman, hep-th/0606262; Phys. Rev. Lett. 97 (2006) 191601.
  9. B. Acharya, K. Bobkov, G. Kane, P. Kumar and J. Shao, hep-th/0701034.
  10. B. Acharya, K. Bobkov, G. Kane, P. Kumar and J. Shao, arXiv:0801.0478.
  11. B. Acharya, K. Bobkov and P. Kumar, arXiv:1004.5138.
  12. S. Ellis, G. Kane and B. Zheng, arXiv:1408.1961; JHEP 1507 (2015) 081.
  13. S. Ellis and B. Zheng, arXiv:1506.02644.
  14. S. Ellis and G. Kane, arXiv:1405.7719.

by Luboš Motl (noreply@blogger.com) at February 08, 2016 05:20 PM

Tommaso Dorigo - Scientificblogging

From The Great Wall To The Great Collider
With a long delay, last week I was finally able to have a look at the book "From the Great Wall to the Great Collider - China and the Quest to Uncover the Inner Workings of the Universe", by Steve Nadis and Shing-Tung Yau, and I would like to report my impressions here.

read more

by Tommaso Dorigo at February 08, 2016 10:55 AM

February 07, 2016

John Baez - Azimuth

Rumors of Gravitational Waves

The Laser Interferometer Gravitational-Wave Observatory, or LIGO, is designed to detect gravitational waves—ripples of curvature in spacetime moving at the speed of light. It’s recently been upgraded, and it will either find gravitational waves soon or something really strange is going on.

Rumors are swirling that LIGO has seen gravitational waves produced by two black holes, of 29 and 36 solar masses, spiralling towards each other—and then colliding to form a single 62-solar-mass black hole!

You’ll notice that 29 + 36 is more than 62. So, it’s possible that three solar masses were turned into energy, mostly in the form of gravitational waves!

According to these rumors, the statistical significance of the signal is supposedly very high: better than 5 sigma! That means there’s at most a 0.000057% probability this event is a random fluke – assuming nobody made a mistake.
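
For the curious, that percentage is just the two-sided tail probability of a 5-sigma Gaussian fluctuation; a quick check with SciPy:

    # Two-sided Gaussian tail probability for a 5-sigma fluctuation.
    from scipy.stats import norm

    p = 2 * norm.sf(5)  # sf = survival function, P(Z > 5), doubled for two sides
    print(f"p = {p:.2e} = {100 * p:.6f}%")  # ~5.73e-07, i.e. ~0.000057%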

If these rumors are correct, we should soon see an official announcement. If the discovery holds up, someone will win a Nobel prize.

The discovery of gravitational waves is completely unsurprising, since they’re predicted by general relativity, a theory that’s passed many tests already. But it would open up a new window to the universe – and we’re likely to see interesting new things, once gravitational wave astronomy becomes a thing.

Here’s the tweet that launched the latest round of rumors:

[Tweet by Cliff Burgess, shown as an image in the original post.]

For background on this story, try this:

Tale of a doomed galaxy, Azimuth, 8 November 2015.

The first four sections of that long post discuss gravitational waves created by black hole collisions—but the last section is about LIGO and an earlier round of rumors, so I’ll quote it here!

LIGO stands for Laser Interferometer Gravitational Wave Observatory. The idea is simple. You shine a laser beam down two very long tubes and let it bounce back and forth between mirrors at the ends. You use this to compare the lengths of these tubes. When a gravitational wave comes by, it stretches space in one direction and squashes it in another direction. So, we can detect it.

Sounds easy, eh? Not when you run the numbers! We’re trying to see gravitational waves that stretch space just a tiny bit: about one part in 10²³. At LIGO, the tubes are 4 kilometers long. So, we need to see their length change by an absurdly small amount: one-thousandth the diameter of a proton!

It’s amazing to me that people can even contemplate doing this, much less succeed. They use lots of tricks:

• They bounce the light back and forth many times, effectively increasing the length of the tubes to 1800 kilometers.

• There’s no air in the tubes—just a very good vacuum.

• They hang the mirrors on quartz fibers, making each mirror part of a pendulum with very little friction. This means it vibrates very well at one particular frequency, and very badly at frequencies far from that. This damps out the shaking of the ground, which is a real problem. (There’s a rough sketch of this effect just after this list.)

• This pendulum is hung on another pendulum.

• That pendulum is hung on a third pendulum.

• That pendulum is hung on a fourth pendulum.

• The whole chain of pendulums is sitting on a device that detects vibrations and moves in a way to counteract them, sort of like noise-cancelling headphones.

• There are 2 of these facilities, one in Livingston, Louisiana and another in Hanford, Washington. Only if both detect a gravitational wave do we get excited.
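
A rough sketch of why stacking pendulums helps (an idealized textbook transmissibility estimate, not LIGO’s actual suspension model): above its resonant frequency f0, one pendulum passes ground motion suppressed by roughly (f0/f)², so N stacked pendulums give roughly (f0/f)^(2N):

    # Idealized seismic isolation: each pendulum stage suppresses ground
    # motion by ~ (f0/f)^2 above its resonance. Toy numbers, illustration only.
    f0 = 1.0   # pendulum resonance in Hz (typical order of magnitude)
    f = 100.0  # measurement frequency in Hz
    for n_stages in (1, 2, 3, 4):
        print(f"{n_stages} stage(s): suppression ~ {(f0 / f) ** (2 * n_stages):.0e}")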

I visited the LIGO facility in Louisiana in 2006. It was really cool! Back then, the sensitivity was good enough to see collisions of black holes and neutron stars up to 50 million light years away.

Here I’m not talking about the supermassive black holes that live in the centers of galaxies. I’m talking about the much more common black holes and neutron stars that form when stars go supernova. Sometimes a pair of stars orbiting each other will both blow up, and form two black holes—or two neutron stars, or a black hole and neutron star. And eventually these will spiral into each other and emit lots of gravitational waves right before they collide.

50 million light years is big enough that LIGO could see about half the galaxies in the Virgo Cluster. Unfortunately, with that many galaxies, we only expect to see one neutron star collision every 50 years or so.

They never saw anything. So they kept improving the machines, and now we’ve got Advanced LIGO! This should now be able to see collisions up to 225 million light years away… and after a while, three times further.

They turned it on September 18th. Soon we should see more than one gravitational wave burst each year.

In fact, there’s a rumor that they’ve already seen one! But they’re still testing the device, and there’s a team whose job is to inject fake signals, just to see if they’re detected. Davide Castelvecchi writes:

LIGO is almost unique among physics experiments in practising ‘blind injection’. A team of three collaboration members has the ability to simulate a detection by using actuators to move the mirrors. “Only they know if, and when, a certain type of signal has been injected,” says Laura Cadonati, a physicist at the Georgia Institute of Technology in Atlanta who leads the Advanced LIGO’s data-analysis team.

Two such exercises took place during earlier science runs of LIGO, one in 2007 and one in 2010. Harry Collins, a sociologist of science at Cardiff University, UK, was there to document them (and has written books about it). He says that the exercises can be valuable for rehearsing the analysis techniques that will be needed when a real event occurs. But the practice can also be a drain on the team’s energies. “Analysing one of these events can be enormously time consuming,” he says. “At some point, it damages their home life.”

The original blind-injection exercises took 18 months and 6 months respectively. The first one was discarded, but in the second case, the collaboration wrote a paper and held a vote to decide whether they would make an announcement. Only then did the blind-injection team ‘open the envelope’ and reveal that the events had been staged.

Aargh! The disappointment would be crushing.

But with luck, Advanced LIGO will soon detect real gravitational waves. And I hope life here in the Milky Way thrives for a long time – so that when the gravitational waves from the doomed galaxy PG 1302-102 reach us, hundreds of thousands of years in the future, we can study them in exquisite detail.

For Castelvecchi’s whole story, see:

• Davide Castelvecchi, Has giant LIGO experiment seen gravitational waves?, Nature, 30 September 2015.

For pictures of my visit to LIGO, see:

• John Baez, This week’s finds in mathematical physics (week 241), 20 November 2006.

For how Advanced LIGO works, see:

• The LIGO Scientific Collaboration, Advanced LIGO, 17 November 2014.


by John Baez at February 07, 2016 07:01 PM

February 06, 2016

Clifford V. Johnson - Asymptotia

On Zero Matter

Over at Marvel, I chatted with actor Reggie Austin (Dr. Jason Wilkes on Agent Carter) some more about the physics I helped embed in the show this season. It was fun. (See an earlier chat here.) This was about Zero Matter itself (which will also be a precursor to things seen in the movie Dr. Strange later this year)... It was one of the first things the writers asked me about when I first met them, and we brainstormed about things like what it should be called (the name "dark force" comes later in Marvel history), and how a scientist who encountered it would contain it. This got me thinking about things like perfect fluids, plasma physics, exotic phases of materials, magnetic fields, and the like (sadly the interview skips a lot of what I said about those)... and to the writers' and show-runners' enormous credit, lots of these concepts were allowed to appear in the show in various ways, including (versions of) two containment designs that I sketched out. Anyway, have a look in the embed below.

Oh! The name. We did not settle on a name after the first meeting, but one of [...] Click to continue reading this post

The post On Zero Matter appeared first on Asymptotia.

by Clifford at February 06, 2016 01:07 AM

February 05, 2016

Clifford V. Johnson - Asymptotia

Suited Up!

Yes, I was in battle again. A persistent skunk that wants to take up residence in the crawl space. I got rid of it last week, having found one place it broke in. This involved a lot of crawling around on my belly armed with a headlamp (not pictured - this is an old picture) and curses. I've done this before... It left. Then yesterday I found a new place it had broken in through and the battle was rejoined. Interestingly, this time it decided to hide after some of the back and forth and I lost track of it for a good while and was about to give up and hope it will feel unsafe with all the lights I'd put on down there (and/or encourage it further to leave by deploying nuclear weapons to match the ones it comes armed with*).

In preparation for this I left open the large access hatch and sprinkled a layer [...] Click to continue reading this post

The post Suited Up! appeared first on Asymptotia.

by Clifford at February 05, 2016 11:09 PM

Tommaso Dorigo - Scientificblogging

Top Secret: On Confidentiality On Scientific Issues, Across The Ring And Across The Bedroom

The following text, a short excerpt from the book "Anomaly!", recounts the time when the top quark was about to be discovered, in 1994-95. After the "evidence" paper that CDF had published in 1994, the CDF and DZERO experiments were both running for the first prize - a discovery of the last quark.

----

read more

by Tommaso Dorigo at February 05, 2016 10:08 PM

Axel Maas - Looking Inside the Standard Model

More than one Higgs means more structure
We have published once more a new paper, and I would like again to outline what we did (and why).

The motivation for this investigation started with another paper of mine. As described earlier, in that paper I took a rather formal stance on proposals for new physics. It was based on the idea that there is some kind of self-similar substructure to what we usually call the Higgs and the W and Z bosons. In that paper, I speculated that this self-similarity may be rather exclusive to the standard model. As a consequence, it may alter the predictions of new physics models.

Of course, speculating is easy. Making something out of it requires doing real calculations. Thus, I started two projects to test the idea. One is on the unification of forces, and is still ongoing. There are some first results, but nothing conclusive yet. It is the second project which yielded new results.

In this second project we had a look at a theory where more Higgs particles are added to the standard model, a so-called 2-Higgs-doublet model, or 2HDM for short. I had speculated that, besides the additional Higgs particles, further particles may arise as bound states, i.e., as states made from two or more other particles. These are not accounted for by ordinary methods.

In the end, it now appears that this idea is not correct, at least not in its simplest form. There are still some very special cases where it may yet be true, but by and large it is not. However, we have understood why the original idea is wrong, and why it may still be correct in other cases. The answer is symmetry.

When adding additional Higgs particles, one is not entirely free. It is necessary not to alter the standard model where we have already tested it. In particular, we cannot easily modify the symmetries of the standard model. However, those symmetries then induce a remarkable effect. The additional Higgs particles in 2HDMs are not entirely different from the ones we know. Rather, they mix with them as a quantum effect. In quantum theories, particles can change into each other under certain conditions. And the symmetries of the standard model entail that this is possible for the new and the old Higgses.

If the particles mix, the possibilities for distinguishing them diminish. As a consequence, the simplest additional states can no longer be distinguished from the states already accounted for by ordinary methods. Thus, they are not additional states. Hence, the simplest possible deviation I speculated about is not realized. There may still be more complicated ones, but figuring this out is much more involved, and has yet to be done. Thus, this work showed that the simple idea was not right.

So what about the other project still in progress? Should I now also expect it to just reproduce what is known? Actually, no. What we learned in this project was why everything fell into its ordinary place: the reason is the mixing between the normal and the additional Higgs particles. That possibility is precluded in the other project, as there the additional particles are very different from the original ones. It may still be that my original idea is wrong. But it has to be wrong in a different way than in the case we investigated now. And thus we have also learned something more about a wider class of theories.

This shows that even disproving your ideas is important. From the reasons why they fail you learn more than just from a confirmation of them - you learn something new.

by Axel Maas (noreply@blogger.com) at February 05, 2016 05:32 PM

ZapperZ - Physics and Physicists

The Physics of Mirrors Falls Slightly Short
This is a nice layman's article on the physics behind mirrors.

While they did a nice job of explaining the metal surface and the smoothness effect, I wish articles like this would also dive into the materials-science aspect of why light, in this case visible light, reflects better off a metal surface than off a non-metallic surface. In other words, let's include some solid-state/condensed-matter physics in this. That is truly the physics behind the workings of a mirror.

Zz.

by ZapperZ (noreply@blogger.com) at February 05, 2016 03:50 PM

ZapperZ - Physics and Physicists

Wendelstein 7-X Comes Online
ITER should look over its shoulder, because Germany's nuclear fusion reactor research facility is coming online. It is considerably smaller, significantly cheaper, but more importantly, it is built and ready to run!

Construction has already begun in southern France on ITER, a huge international research reactor that uses a strong electric current to trap plasma inside a doughnut-shaped device long enough for fusion to take place. The device, known as a tokamak, was conceived by Soviet physicists in the 1950s and is considered fairly easy to build, but extremely difficult to operate.

The team in Greifswald, a port city on Germany's Baltic coast, is focused on a rival technology invented by the American physicist Lyman Spitzer in 1950. Called a stellarator, the device has the same doughnut shape as a tokamak but uses a complicated system of magnetic coils instead of a current to achieve the same result.

Let the games begin!

Zz.

by ZapperZ (noreply@blogger.com) at February 05, 2016 03:45 PM

John Baez - Azimuth

Aggressively Expanding Civilizations

Ever since I became an environmentalist, the potential destruction wrought by aggressively expanding civilizations has been haunting my thoughts. Not just here and now, where it’s easy to see, but in the future.

In October 2006, I wrote this in my online diary:

A long time ago on this diary, I mentioned my friend Bruce Smith’s nightmare scenario. In the quest for ever faster growth, corporations evolve toward ever faster exploitation of natural resources. The Earth is not enough. So, ultimately, they send out self-replicating von Neumann probes that eat up solar systems as they go, turning the planets into more probes. Different brands of probes will compete among each other, evolving toward ever faster expansion. Eventually, the winners will form a wave expanding outwards at nearly the speed of light—demolishing everything behind them, leaving only wreckage.

The scary part is that even if we don’t let this happen, some other civilization might.

The last point is the key one. Even if something is unlikely, in a sufficiently large universe it will happen, as long as it’s possible. And then it will perpetuate itself, as long as it’s evolutionarily fit. Our universe seems pretty darn big. So, even if a given strategy is hard to find, if it’s a winning strategy it will get played somewhere.

So, even in this nightmare scenario of "spheres of von Neumann probes expanding at near lightspeed", we don’t need to worry about a bleak future for the universe as a whole—any more than we need to worry that viruses will completely kill off all higher life forms. Some fraction of civilizations will probably develop defenses in time to repel the onslaught of these expanding spheres.

It’s not something I stay awake worrying about, but it’s a depressingly plausible possibility. As you can see, I was trying to reassure myself that everything would be okay, or at least acceptable, in the long run.

Even earlier, S. Jay Olson and I wrote a paper together on the limitations that quantum gravity places on accurately measuring distances. If you try to measure a distance too accurately, you’ll need to concentrate so much energy in such a small space that you’ll create a black hole!

That was in 2002. Later I lost touch with him. But now I’m happy to discover that he’s doing interesting work on quantum gravity and quantum information processing! He is now at Boise State University in Idaho, his home state.

But here’s the cool part: he’s also studying aggressively expanding civilizations.

Expanding bubbles

What will happen if some civilizations start aggressively expanding through the Universe at a reasonable fraction of the speed of light? We don’t have to assume most of them do. Indeed, there can’t be too many, or they’d already be here! More precisely, the density of such civilizations must be low at the present time. The number of them could be infinite, since space is apparently infinite. But none have reached us. We may eventually become such a civilization, but we’re not one yet.

Each such civilization will form a growing ‘bubble’: an expanding sphere of influence. And occasionally, these bubbles will collide!

Here are some pictures from a simulation he did:

As he notes, the math of these bubbles has already been studied by researchers interested in inflationary cosmology, like Alan Guth. These folks have considered the possibility that in the very early Universe, most of space was filled with a ‘false vacuum’: a state of matter that resembles the actual vacuum, but has higher energy density.

A false vacuum could turn into the true vacuum, liberating energy in the form of particle-antiparticle pairs. However, it might not do this instantly! It might be ‘metastable’, like ball number 1 in this picture:

It might need a nudge to ‘roll over the hill’ (metaphorically) and down into the lower-energy state corresponding to the true vacuum, shown as ball number 3. Or, thanks to quantum mechanics, it might ‘tunnel’ through this hill.

The balls and the hill are just an analogy. What I mean is that the false vacuum might need to go through a stage of having even higher energy density before it could turn into the true vacuum. Random fluctuations, either quantum-mechanical or thermal, could make this happen. Such a random fluctuation could happen in one location, forming a ‘bubble’ of true vacuum that—under certain conditions—would rapidly expand.

It’s actually not very different from bubbles of steam forming in superheated water!

But here’s the really interesting thing Jay Olson noted in his first paper on this subject: research on bubbles in inflationary cosmology could actually be relevant to aggressively expanding civilizations!

Why? Just as a bubble of expanding true vacuum has different pressure than the false vacuum surrounding it, the same might be true for an aggressively expanding civilization. If they are serious about expanding rapidly, they may convert a lot of matter into radiation to power their expansion. And while energy is conserved in this process, the pressure of radiation in space is a lot bigger than the pressure of matter, which is almost zero.

General relativity says that energy density slows the expansion of the Universe. But also—and this is probably less well-known among nonphysicists—it says that pressure has a similar effect. Furthermore, as the Universe expands, the energy density and pressure of radiation drop at a different rate than the energy density of matter.
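
In equations (the standard FRW relations that Olson’s model builds on, in units where \(c = 1\)): the acceleration of the cosmic scale factor \(a\) responds to pressure as well as energy density, and radiation dilutes faster than matter:
\[
\frac{\ddot a}{a} = -\frac{4\pi G}{3}\,(\rho + 3p), \qquad \rho_{\rm matter}\propto a^{-3}, \qquad \rho_{\rm radiation}\propto a^{-4}, \qquad p_{\rm radiation} = \tfrac{1}{3}\,\rho_{\rm radiation}.
\]
So converting matter into radiation raises \(\rho + 3p\) at first, decelerating the expansion, but the effect fades as the radiation redshifts away.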

So, the expansion of the Universe itself, on a very large scale, could be affected by aggressively expanding civilizations!

The fun part is that Jay Olson actually studies this in a quantitative way, making some guesses about the numbers involved. Of course there’s a huge amount of uncertainty in all matters concerning aggressively expanding high-tech civilizations, so he actually considers a wide range of possible numbers. But if we assume a civilization turns a large fraction of matter into radiation, the effects could be significant!

The effect of the extra pressure due to radiation would be to temporarily slow the expansion of the Universe. But the expansion would not be stopped. The radiation will gradually thin out. So eventually, dark energy—which has negative pressure, and does not thin out as the Universe expands—will win. Then the Universe will expand exponentially, as it is already beginning to do now.

(Here I am ignoring speculative theories where dark energy has properties that change dramatically over time.)

Jay Olson’s work

Here are his papers on this subject. The abstracts sketch his results, but you have to look at the papers to see how nice they are. He’s thought quite carefully about these things.

• S. Jay Olson, Homogeneous cosmology with aggressively expanding civilizations, Classical and Quantum Gravity 32 (2015) 215025.

Abstract. In the context of a homogeneous universe, we note that the appearance of aggressively expanding advanced life is geometrically similar to the process of nucleation and bubble growth in a first-order cosmological phase transition. We exploit this similarity to describe the dynamics of life saturating the universe on a cosmic scale, adapting the phase transition model to incorporate probability distributions of expansion and resource consumption strategies. Through a series of numerical solutions spanning several orders of magnitude in the input assumption parameters, the resulting cosmological model is used to address basic questions related to the intergalactic spreading of life, dealing with issues such as timescales, observability, competition between strategies, and first-mover advantage. Finally, we examine physical effects on the universe itself, such as reheating and the backreaction on the evolution of the scale factor, if such life is able to control and convert a significant fraction of the available pressureless matter into radiation. We conclude that the existence of life, if certain advanced technologies are practical, could have a significant influence on the future large-scale evolution of the universe.

• S. Jay Olson, Estimates for the number of visible galaxy-spanning civilizations and the cosmological expansion of life.

Abstract. If advanced civilizations appear in the universe with a desire to expand, the entire universe can become saturated with life on a short timescale, even if such expanders appear but rarely. Our presence in an untouched Milky Way thus constrains the appearance rate of galaxy-spanning Kardashev type III (K3) civilizations, if it is assumed that some fraction of K3 civilizations will continue their expansion at intergalactic distances. We use this constraint to estimate the appearance rate of K3 civilizations for 81 cosmological scenarios by specifying the extent to which humanity could be a statistical outlier. We find that in nearly all plausible scenarios, the distance to the nearest visible K3 is cosmological. In searches where the observable range is limited, we also find that the most likely detections tend to be expanding civilizations who have entered the observable range from farther away. An observation of K3 clusters is thus more likely than isolated K3 galaxies.

• S. Jay Olson, On the visible size and geometry of aggressively expanding civilizations at cosmological distances.

Abstract. If a subset of advanced civilizations in the universe choose to rapidly expand into unoccupied space, these civilizations would have the opportunity to grow to a cosmological scale over the course of billions of years. If such life also makes observable changes to the galaxies they inhabit, then it is possible that vast domains of life-saturated galaxies could be visible from the Earth. Here, we describe the shape and angular size of these domains as viewed from the Earth, and calculate median visible sizes for a variety of scenarios. We also calculate the total fraction of the sky that should be covered by at least one domain. In each of the 27 scenarios we examine, the median angular size of the nearest domain is within an order of magnitude of a percent of the whole celestial sphere. Observing such a domain would likely require an analysis of galaxies on the order of a giga-lightyear from the Earth.

Here are the main assumptions in his first paper:

1. At early times (relative to the appearance of life), the universe is described by the standard cosmology – a benchmark Friedmann-Robertson-Walker (FRW) solution.

2. The limits of technology will allow for self-reproducing spacecraft, sustained relativistic travel over cosmological distances, and an efficient process to convert baryonic matter into radiation.

3. Control of resources in the universe will tend to be dominated by civilizations that adopt a strategy of aggressive expansion (defined as a frontier which expands at a large fraction of the speed of the individual spacecraft involved), rather than those expanding diffusively due to the conventional pressures of population dynamics.

4. The appearance of aggressively expanding life in the universe is a spatially random event and occurs at some specified, model-dependent rate.

5. Aggressive expanders will tend to expand in all directions unless constrained by the presence of other civilizations, will attempt to gain control of as much matter as is locally available for their use, and once established in a region of space, will consume mass as an energy source (converting it to radiation) at some specified, model-dependent rate.


by John Baez at February 05, 2016 01:00 AM

February 04, 2016

Symmetrybreaking - Fermilab/SLAC

Weighing the lightest particle

Physicists are using one of the oldest laws of nature to find the mass of the elusive neutrino.

Neutrinos are everywhere. Every second, 100 trillion of them pass through your body unnoticed, hardly ever interacting. Though exceedingly abundant, they are the lightest particles of matter, and physicists around the world are attempting the difficult challenge of measuring their mass.   

For a long time, physicists thought neutrinos were massless. This belief was overturned by the discovery that neutrinos oscillate between three flavors: electron, muon and tau. This happens because each flavor contains a mixture of three mass types, neutrino-1, neutrino-2 and neutrino-3, which travel at slightly different speeds.
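
To see why oscillation implies mass, here is the standard two-flavor oscillation probability as a sketch (a simplification of the full three-flavor formula; the parameter values below are illustrative):

    import math

    def osc_probability(L_km, E_GeV, sin2_2theta=0.85, dm2_eV2=2.5e-3):
        """Two-flavor oscillation: P = sin^2(2 theta) * sin^2(1.27 dm^2 L / E)."""
        return sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

    # Illustrative: a 2 GeV neutrino beam observed 1000 km away.
    print(f"P ~ {osc_probability(1000.0, 2.0):.2f}")

Note that the probability depends only on the squared-mass difference dm², which is why oscillations prove that neutrinos have mass without revealing the absolute mass scale.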

According to the measurements taken so far, neutrinos must weigh less than 2 electronvolts (a minute fraction of the mass of the tiny electron, which weighs 511,000 electronvolts). A new generation of experiments is attempting to lower this limit—and possibly even identify the actual mass of this elusive particle.

Where did the energy go?

Neutrinos were first proposed by the Austrian-born theoretical physicist Wolfgang Pauli to resolve a problem with beta decay. In the process of beta decay, a neutron in an unstable nucleus transforms into a proton while emitting an electron. Something about this process was especially puzzling to scientists. During the decay, some energy seemed to go missing, breaking the well-established law of energy conservation.

Pauli suggested that the disappearing energy was slipping away in the form of another particle. This particle was later dubbed the neutrino, or “little neutral one,” by the Italian physicist Enrico Fermi.

Scientists are now applying the principle of energy conservation to direct neutrino mass experiments. By very precisely measuring the energy of electrons released during the decay of unstable atoms, physicists can deduce the mass of neutrinos.

“The heavier the neutrino is, the less energy is left over to be carried by the electron,” says Boris Kayser, a theoretical physicist at Fermilab. “So there is a maximum energy that an electron can have when a neutrino is emitted.”
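
A toy version of that energy bookkeeping (tritium’s decay energy, or Q-value, is about 18.6 keV; the neutrino masses below are illustrative):

    # Endpoint bookkeeping: the electron's maximum energy is roughly Q - m_nu.
    # Real analyses fit the full spectrum shape near the endpoint.
    Q_eV = 18_600.0                  # tritium Q-value, ~18.6 keV
    for m_nu_eV in (0.0, 0.2, 2.0):  # illustrative neutrino masses
        print(f"m_nu = {m_nu_eV:.1f} eV -> electron endpoint ~ {Q_eV - m_nu_eV:.1f} eV")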

These experiments are considered direct because they rely on fewer assumptions than other neutrino mass investigations. For example, physicists measure mass indirectly by observing neutrinos’ imprints on other visible things such as galaxy clustering.

Detecting the kinks

Of the direct neutrino mass experiments, KATRIN, which is based at the Karlsruhe Institute of Technology in Germany, is the closest to beginning its search.

“If everything works as planned, I think we'll have very beautiful results in 2017,” says Guido Drexlin, a physicist at KIT and co-spokesperson for KATRIN.

[Image: Cleanliness is key inside the main spectrometer. Photo: the KATRIN collaboration.]

KATRIN plans to measure the energy of the electrons released from the decay of the radioactive isotope tritium. It will do so by using a giant tank tuned to a precise voltage that allows only electrons above a specific energy to pass through to the detector at the other side. Physicists can use this information to plot the rate of decays at any given energy.

The mass of a neutrino will cause a disturbance in the shape of this graph. Each neutrino mass type should create its own kink. KATRIN, with a peak sensitivity of 0.2 electronvolts (a factor of 100 better than previous experiments), will look for a “broad kink” that physicists can use to calculate the average neutrino mass.
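
To see where such a kink comes from, it helps to sketch the spectrum near the endpoint. A minimal toy model, keeping only the standard phase-space factor and dropping all constants and detector effects (the 0.5-electronvolt mass below is purely illustrative):

```python
import numpy as np

E0 = 18575.0  # approximate tritium endpoint energy in eV

def spectrum_shape(E, m_nu):
    """Shape of the beta spectrum near the endpoint:
    dN/dE ~ (E0 - E) * sqrt((E0 - E)^2 - m_nu^2), and zero above E0 - m_nu."""
    eps = E0 - E
    out = np.zeros_like(E)
    visible = eps > m_nu
    out[visible] = eps[visible] * np.sqrt(eps[visible] ** 2 - m_nu ** 2)
    return out

E = np.linspace(E0 - 5.0, E0, 501)
massless = spectrum_shape(E, 0.0)  # spectrum reaches all the way to E0
massive = spectrum_shape(E, 0.5)   # spectrum stops 0.5 eV short of E0
# The distortion and early cutoff of the massive curve is the feature
# (the "kink") that KATRIN's spectrometer is designed to resolve.
```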

Another tritium experiment, Project 8, is attempting a completely different method to measure neutrino mass. The experimenters plan to detect the energy of each individual electron ejected from a beta decay by measuring the frequency of its spiraling motion in a magnetic field. Though still in the early stages, it has the potential to go beyond KATRIN’s sensitivity, giving physicists high hopes for its future.
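
The relation Project 8 exploits is the relativistic cyclotron formula, \(f = eB/(2\pi\gamma m_e)\): the more kinetic energy the electron carries, the larger \(\gamma\) and the lower the frequency of its spiraling motion. A quick sanity check (the 1-tesla field is assumed for illustration):

```python
import math

e = 1.602176634e-19     # elementary charge (C)
m_e = 9.1093837015e-31  # electron mass (kg)
m_e_keV = 510.99895     # electron rest energy (keV)

def cyclotron_freq_GHz(kinetic_keV, B_tesla):
    """Relativistic cyclotron frequency f = e*B / (2*pi*gamma*m_e), in GHz."""
    gamma = 1.0 + kinetic_keV / m_e_keV
    return e * B_tesla / (2 * math.pi * gamma * m_e) / 1e9

# An electron near the tritium endpoint (~18.6 keV) in a 1 T field
# radiates at about 27 GHz; the small energy-dependent frequency shift
# is what lets the experiment read off the electron's energy.
print(cyclotron_freq_GHz(18.6, 1.0))
```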

“KATRIN is the furthest along—it will come out with guns blazing,” says Joseph Formaggio, a physicist at MIT and Project 8 co-spokesperson. “But if they see a signal, the first thing people are going to want to know is whether the kink they see is real. And we can come in and do another experiment with a completely different method.”

Cold capture

Others are looking for these telltale kinks using a completely different element, holmium, which decays through a process called electron capture. In these events, an electron in an unstable atom combines with a proton, turning it into a neutron while releasing a neutrino.

Physicists are measuring the very small amount of energy released in this decay by enclosing the holmium source in microscopic detectors that are operated at very low temperatures (typically below minus 459.2 degrees Fahrenheit). Each holmium decay leads to a tiny increase in the detector’s temperature (about 1/1000 of a degree Fahrenheit).
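
Translated into laboratory units (a simple conversion of the article's Fahrenheit figures):

```python
def fahrenheit_to_kelvin(F):
    return (F - 32.0) * 5.0 / 9.0 + 273.15

# Operating temperature: below -459.2 F, i.e. roughly a quarter of a kelvin.
print(fahrenheit_to_kelvin(-459.2))   # ~0.26 K

# A temperature *rise* converts by the scale factor 5/9 alone:
print(0.001 * 5.0 / 9.0)              # ~0.00056 K, i.e. ~0.56 millikelvin
```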

“To lower the limit on the electron neutrino mass, you need a good thermometer that can measure these very small changes of temperature with high precision,” says Loredana Gastaldo, a Heidelberg University physicist and spokesperson for the ECHo experiment.  

There are currently three holmium experiments, ECHo and HOLMES in Europe and NuMECs in the US, which are in various stages of testing their detectors and producing isotopes of holmium.

The holmium and tritium experiments will help lower the limit on how heavy neutrinos can be, but it may be that none will be able to definitively determine their mass. It will likely require a combination of both direct and indirect neutrino mass experiments to provide scientists with the answers they seek—or, physicists might even find completely unexpected results.

“Don't bet on neutrinos,” Formaggio says. “They’re kind of unpredictable.”

by Diana Kwon at February 04, 2016 04:40 PM

Quantum Diaries

Frenzy on the theory side

Since December 15, I have counted 200 new theoretical papers, each one offering one or more possible explanations of the nature of a new particle that has not even been discovered yet. The frenzy began when the CMS and ATLAS experiments both reported having found a few events that could reveal the presence of a new particle decaying into two photons, with a mass around 750 GeV, six times that of the Higgs boson.

No one knows whether such excitement is justified, but it illustrates how much physicists are hoping for a major discovery in the coming years. Will it unfold as it did for the Higgs boson, which was officially discovered in July 2012 even though the first hints had appeared a year earlier? It is still far too early to tell. As I wrote in July 2011, it is as if we were trying to guess whether the train is coming by scanning the horizon on a bleak winter day. Only a little patience will tell whether the indistinct shape barely visible in the distance is the long-awaited train or merely an illusion. More data will be needed to settle the question; in the meantime, everyone is keeping their eyes fixed on that spot.
Le train de midi (The Noon Train), Jean-Paul Lemieux, National Gallery of Canada

Because of the difficulties inherent in restarting the LHC at higher energy, the amount of data collected at 13 TeV in 2015 by ATLAS and CMS was very limited. Such small data samples are always subject to large statistical fluctuations, and the observed effect could well evaporate with more data. That is why both experiments were so guarded when presenting these results, stating clearly that it was far too early to get excited.

But theorists, who have been searching in vain for decades for any sign of new phenomena, jumped at the opportunity. Within a single month, including the end-of-year holidays, 170 theoretical papers had already been published, each suggesting a different possible interpretation of this new particle, even though it has not yet been discovered.

No new data will arrive for a few months because of the annual maintenance shutdown. The Large Hadron Collider will restart on March 21 and should deliver the first collisions to the experiments on April 18. A data sample of 30 fb-1 is hoped for in 2016, whereas only 4 fb-1 were produced in 2015. Once the new data are available this summer, we will know whether this new particle exists or not.

Such a possibility would be a genuine revolution. The current theoretical model of particle physics, the Standard Model, predicts no new particle. All the particles predicted by the model have already been found. But since this model still leaves several questions unanswered, theorists are convinced that a larger theory must exist to explain the few observed anomalies. The discovery of a new particle, or the measurement of a value different from the theoretical prediction, would finally reveal the nature of this new physics beyond the Standard Model.

No one yet knows what form this new physics will take. That is why so many different theoretical explanations for this new particle have been proposed. I have compiled some of them in the table below. Many of these papers simply describe the properties a new boson would need in order to reproduce the observed data. The proposed solutions are incredibly diverse, the most recurrent being various versions of dark matter or supersymmetric models, Hidden Valley models, Grand Unified Theories, additional or composite Higgs bosons, and extra dimensions. There is something for every taste: from axizillas to dilatons, by way of dark pion cousins, technipions and trinification.

The situation could therefore not be clearer: everything is possible, including nothing at all. But let us not forget that every time an accelerator has moved up in energy, new discoveries have followed. The summer could turn out to be a very hot one.

Pauline Gagnon

To learn more about particle physics and what is at stake at the LHC, see my book: “Qu’est-ce que le boson de Higgs mange en hiver et autres détails essentiels” (What does the Higgs boson eat in winter, and other essential details).

To be notified of new blog posts, follow me on Twitter: @GagnonPauline or add your name to this mailing list.


A partial summary of the number of papers published so far and of the types of solutions proposed to explain the nature of the new particle, if a new particle there is. Practically every known theoretical model can be adapted to accommodate a new particle compatible with the few observed events. This table is merely indicative and in no way strictly accurate, since several papers were rather hard to classify. Will one of these ideas turn out to be right?

by Pauline Gagnon at February 04, 2016 04:21 PM

Lubos Motl - string vacua and pheno

Why string theory, by Joseph Conlon
I have received a free copy of "Why String Theory" by Joseph Conlon, a young Oxford string theorist who has done successful specialized work related either to the moduli stabilization of the flux vacua, or to the axions in string theory. (He's been behind the website whystringtheory.com, too.)

The 250-page-long paperback looks modern and tries to be more technical than popular books but less technical than string theory textbooks. Unfortunately, I often feel that "more technical than a popular book" mostly means that the book uses some kind of intellectual jargon – but the nontrivial physics ideas aren't actually described more accurately than in the popular books.

From the beginning, one may see that the book differs from the typical books that are intensely focusing on the search for a theory of everything. Well, the dedication as well as the introduction to each chapter at the beginning of the book (and others) sort of shocked me.

The dedication remains the biggest shock for me: the book is dedicated to the U.K. taxpayers.

It's not just the dedication, however. In the preface, Conlon explains that he wants the "wonderful fellow citizens who support scientific research through their taxes" (no kidding!) to be the readers. He is very grateful for the money.

The preface has only reinforced my feeling that he is "in it for the money". And the theme keeps reappearing in the following chapters, too. It became a distraction I couldn't get rid of. In at least two sections, he mentions that the financial resources going to string theory are much smaller than those going to medical research, and even the latter funds are a tiny portion of the budgets.

Great. But why would you repeat this thing twice in a book that is supposed to be about physics? The money going to pure science is modest because most taxpayers are simply not interested in pure science at all. They are interested in practical things. A minority of people are interested in pure knowledge of Nature, and they would pay a much higher percentage of the budgets to string theory, too. The actual amounts (perhaps a billion dollars in the U.S. every year?) are a compromise of sorts.

The idea that all taxpayers will be interested in such a book is silly (almost equivalently, it's silly to think that someone will read the book because he is a taxpayer whose money is partly spent on the research; most people don't read books about cheaper ways to hire janitors, although that decides about billions of dollars a year, too), and it's hard for me to get rid of the feeling that Conlon's formulations are shaped by gratitude to the taxpayers for the money – so he's sort of bribed, which is incompatible with scientific integrity. You may imagine that a sensitive reader such as myself reads the text and sees the impact of the "bribes" on various formulations (for example, Conlon's outrageous lie that all string critics are basically honest people is probably shaped by financial considerations – because many of the string critics are taxpayers), and then quite suddenly the book counts string theorists by the number of mortgages that people hold because of work linked to string theory. Is that serious? And does a majority of string theorists have a mortgage? Whether it's right or not, why should such things matter?

The obsession with the financial aspects of Conlon's job distracted me way too often. It's totally OK for some people to consider string theory research just another job – but that is simply not very interesting to read about. We don't read books about the dependence of other occupations on wages, either. And for a person who is interested in physics sufficiently to buy the book, the money circulating in string theory research is surely a negligible part of the story.

And this financial theme kept on penetrating way too many things. The first regular chapter, "The Long Wait", starts in June 1973. I honestly wouldn't know what event deserving to start the book occurred in June 1973. It turns out that this was the date when the papers on QCD were submitted, and in the book, that event is "special" from today's viewpoint because these were the most recent theoretical physics papers to have been awarded a Nobel prize. It seems technically true – Veltman and 't Hooft did their Nobel-prize-winning work in 1971, Kobayashi and Maskawa earlier in 1973, and so on.

But is this factoid important enough to be given the full first chapter of a book on string theory? I don't think so. The fact that no Nobel prizes have gone to theoretical physicists for their more recent discoveries isn't really important – except for those who are only doing physics because of the money, of course. But even when it comes to the money, numerous people (especially around string theory and inflation) got greater prizes for much newer insights. There are various reasons why the Nobel prizes aren't being given to theoretical physicists for more recent discoveries, but these reasons don't imply that breathtakingly important discoveries haven't been made. This focus on June 1973 is just a totally flawed way to think about importance in theoretical physics – an unfortunate way to start a semipopular book on theoretical physics.

I knew that the following chapter was about scales in physics, which is why I was like "WTF" when I saw the first words of that chapter: "As-Salaam-Alaikum". What? ;-) This Arabic greeting means "peace be upon you". What does it have to do with scales in physics? Even when you add the exchange from the desert that Conlon appends, "where are you coming from and where are you going?", it still has nothing to do with scales in physics. At most, the exchange describes a world line in an Arab region of the spacetime. But it has nothing to do with the renormalization group. Perhaps both situations involve diagrams with oriented lines – but that's too small an amount of common ancestry.

Again, one can't avoid thinking: this awkward beginning was probably a not-so-hidden message to the Muslim British taxpayers. Sorry, I have a problem with that. And I think that so do the Muslim Britons who actually care about physics. And no British Muslim will buy a book about string theory because it contains an Arabic greeting, so this kind of bootlicking is ineffective, anyway. The bulk of the chapter dedicates many pages to describing the size of many objects. I think that what makes it boring is that Conlon doesn't seem to communicate any deeper and nontrivial – or, almost equivalently, a priori controversial – ideas (something that books like Wilczek's book on beauty are full of). It seems to me that the book is addressed to some moderately intelligent people with superficial ideas about physics, and it encourages them to think that they're not really missing anything important. The logic of the renormalization group, "integrating out", or its relationships with reductionism etc. aren't really discussed.

The following, third chapter wants to cover the pillars of 20th century and pre-stringy physics. It starts by talking about special relativity. Conlon argues that the words "In the beginning..." in the Bible (as well as the whole subject of history etc.) contradict relativity. Sorry, there isn't any contradiction like that. Even in relativity, one may sort events chronologically. Different observers may do so differently but it's still possible. And in the history of events on the Earth, the spatial distances are so short relative to the times multiplied by \(c\) (and the reasonable velocities to consider are so much smaller than the speed of light) that all the observers' choices of coordinates end up being basically equivalent, anyway. So the reference to the Bible has nothing to do with special relativity, just like the Arabic greeting that started the previous chapter has nothing to do with scales. Perhaps it was a message to the Christian taxpayers. Or the violent atheist taxpayers – because the comment about the Bible was a negative one.

Now, a page is dedicated to special relativity and less than a page to foundations of general relativity. It's really too little and nothing is really explained there. Moreover, general relativity is framed as a "replacement" of special relativity. That's not correct. Einstein would describe it as a generalization, not replacement, of special relativity (look at the name), relevant for situations in which gravity matters. In the modern setup, we view general relativity as the unique framework that results from the combination of special relativity and spin-two massless fields (which are needed to incorporate gravity). In this sense, general relativity is an application of special relativity – and in some sense a subset of special relativity.

Quantum mechanics is given several pages and Conlon says that it absolutely works, which is good news. But aside from a few sentences about quantum entanglement, the pages are mostly spent repeating that quantum mechanics is needed for chemistry. There are several more sections about the pre-stringy pillars of physics – some cosmology, something about symmetries.

The fourth chapter wants to argue that something beyond the Standard Model and GR is needed. So it's the chapter mentioning the non-renormalizability of gravity etc. Some important points are made, including the point that quantum mechanics must hold universally (Conlon surely is pro-QM). But I can't see what kind of readers (with what background) will understand the explanations at this level. The explanations vaguely depend on some quasi-expert jargon but they don't say enough for you to reconstruct any actual arguments. I've done lots of this semi-expert writing and it seems absolutely obvious to me that you need to extend the semi-technical explanations by at least an order of magnitude relative to Conlon's short summaries to actually convey some helpful, verifiable, usable, nontrivial ideas.

If I try to characterize the people who are waiting for this genre that is linguistically heavy but lacking actual arguments, I think it's right to say that they're "intellectuals who are ready to parrot sentences, even complicated sentences with jargon similar to the experts' jargon, to defend their intellectual credentials (i.e. impress other people with intelligent-sounding sentences)" but who don't really understand anything properly. And I think it's not right to increase the number of such people.

Thankfully, things get better from the fifth chapter, which begins with string theory proper. The first event is Veneziano's work in 1968. Conlon describes about 10 non-physics events in the year 1968. It's not clear to me why there are so many events like that. But in such a long list, I think it is crazy not to mention the Prague Spring in Czechoslovakia (and the student riots in Paris should be featured more prominently, too). It ended on August 21st, 1968, when 750,000 Warsaw Pact troops occupied my country. To say the least, it was the largest military operation since the war, which, I believe, is more important than the withdrawal of the last steam engines from British railways etc.

The world sheet duality that was deduced from Veneziano's formula hides some cool mathematics but the book unfortunately avoids equations so none of this content is communicated. The beauty and power of all these things may only be understood along with the mathematical relationships etc. which is why I am afraid that this "more detailed" but purely verbal story about the discoveries doesn't bring much to a thinking reader. It's like a book praising a beautiful painting – which doesn't actually show you the painting.

There are various stories, e.g. about Claude Lovelace who realized that bosonic string theory requires 26 dimensions. Lovelace never completed his PhD but he was hired as a professor at Rutgers anyway – I would see him regularly during my years as a PhD student there (he died in 2012). A quote in Conlon's book suggests that Lovelace's promotion was insufficient relative to his contributions. I wouldn't agree with that. At Rutgers, I also knew Joel Shapiro, another early father of string theory. He's a fun guy – he taught group theory to us. A very good course. At a colloquium I gave later, he suggested that the term "first string revolution" should indeed be used for the 1968-1973 era, as Conlon indicates. Whatever the right name is (I called it the zeroth string revolution), it wasn't a superstring revolution because there was no supersymmetry yet!

What seems problematic to me is that the exact chronology of the historical events became the heart of Conlon's prose. But string theory isn't a collection of random historical events, like the Second World War. It's primarily a unifying theory of everything that we don't understand perfectly, but the current incomplete understanding is much more accurate and makes much more sense than what people knew in the late 1960s, or in the following four decades. A book that is really about physics just can't put all historical events on the same level. The history is just the story of "Columbus' journey to the New World", but it's the New World itself, and not the details of the journey, that should be the point of a book called "Why the New World".

Various discoveries and dualities etc. are mentioned in one or two sentences per discovery. I think it's just too little information about each of them. It may be OK for people who read dozens of redundant books a year and who don't feel the urge to think about every idea that is being pumped into them. For certain reasons, I think that it's counterproductive when people learn too many facts about string theory (or another theory) without really understanding their relationship and inevitability. If they learn many things this way, they must feel that string theorists are just inventing random garbage. It feels like science could live equally well without those things (if the events were replaced by totally different events with a different outcome) – just like mankind could have survived without the Second World War. But it isn't the case. They're deducing something that can't be otherwise – a point you may only verify if you actually know the technology. Well, at some moment, you may start to trust claims about exact dualities etc. by certain authors. But you must see the strong evidence for, or a derivation of, at least one such amazing result (or several) to see that it's not just a pile of fairy-tales.

While e.g. Richard Dawid exaggerates (to say the least) the changes in thinking during the string theory era, he does correctly capture the importance of uniqueness (the only game in town) and unexpected explanatory interconnections for the string theorists' focus on string theory. Conlon, while a string theorist, seems to completely overlook if not explicitly reject these facts and principles. But they're essential for understanding where theoretical physicists will look for new insights in the future and how they use the accumulated knowledge to find new knowledge. The vision or motivation for the future is therefore basically absent from Conlon's book, too.

Another chapter is about AdS/CFT and the landscape. AdS/CFT was revealed in 1997 – and just as for 1968, Conlon lists many events in 1997. Tony Blair won some elections in a landslide. Holy cow. Every year, there are hundreds of elections in the world and someone wins them. Even the elections in the U.K. are rather frequent. Moreover, I don't understand the logic by which a book like this one should preferably be read by Britons only. Scientific curiosity is transnational. The book may be "dedicated" to U.K. taxpayers but if it's about science, then it must be equally interesting for the Canadian and other taxpayers, right?

But there are more serious bugs in the content, I think. We're told that after AdS/CFT, almost no one would view string theory primarily as a unifying fundamental theory of Nature. Sorry but that's rubbish. Virtually all top string theorists do. The fact that there are lots of articles that use AdS/CFT methods outside "fundamental physics" doesn't imply that the links of string theory with fundamental physics have been weakened. You may find millions of T-shirts with \(E=mc^2\), which doesn't mean that it's the most important insight made by Einstein.

Similarly, it's wrong to say that AdS/CFT made string theory "less special". The AdS/CFT correspondence has found a new powerful equivalence between string theory and quantum field theory – but the two sides operate in different spacetimes or world volumes. This holographic duality has made string theory more "inseparable" from the established physics, and it became less conceivable that string theory could be "cut" away from physics again – because the dynamics of string theory inevitably emerges if you study the important established theories, quantum field theories, carefully enough (especially in certain limits).

But if you use a consistent description, it's true (just as it was true before AdS/CFT was found) that in any spacetime where you can see the effects of quantum gravity, you may also see something like strings or M2-branes and the extra dimensions (for the total to be 10 or 11) and other things that come with them. AdS/CFT doesn't allow you to circumvent this fact in any way. It only gives you a new description of this physics of strings or membranes in terms of a theory on a different space, the boundary of the AdS space – a new QFT-based tool to directly prove that there are strings, branes, and various other stringy effects in the bulk. This theory on the boundary happens to be a quantum field theory. But the importance of QFTs in string theory wasn't new, either. Perturbative string theory was always described in terms of a QFT, namely the two-dimensional conformal field theory. The essential point is that this 2D CFT lives on a different space, the world sheet, than the actual spacetime where we observe the gravity. Aside from the world sheet CFT, AdS/CFT has also told us to use the boundary CFT – another QFT-style way to describe stringy physics. But the physics of quantum gravity, described within its own spacetime, is as stringy as it was before. AdS/CFT has allowed us to explicitly construct many phenomenologically unrealistic sets of equations for quantum gravity (by constructing some boundary CFTs) but it hasn't made the problem of combining particular non-gravitational matter contents with quantum gravity less constraining. An ordinary generic QFT used as a boundary CFT produces a "heavily curved gravitating AdS spacetime", and those may have become "easy" in some way. But the actual, low-curvature theories of quantum gravity are as rare as before.

At least, I found Conlon's discussion of the landscape OK. The large number of solutions is neither new nor a problem. The anthropic principle is non-vacuous but it may easily degenerate into explanations that may look sufficient to someone but that are demonstrably not the right ones.

In another chapter, Conlon starts to talk about the "problem of strong coupling". I am afraid that the basic idea that "something is easy at weak coupling, hard at strong coupling" etc. is very easy, much like the usage of the buzzword "nonperturbative". But people who don't really understand and who misinterpret what "easy" and "hard" and "nonperturbative" mean will still do so after reading these pages by Conlon. Conlon continues with a discussion of the high number of citations of AdS/CFT and the reasons why it's exact and correct. The exact agreement of a complicated zeta-function-filled formula for the dimension of the Konishi operator at many colors makes the lesson clear.

Many pages talk about the application of AdS/CFT correspondence to heavy ion physics; the next section similarly talks about AdS/condensed matter physics. There are many true facts and factoids there. I disagree with Conlon's conclusion in the heavy ion section that adding corrections on top of simplified models is the universal "modus operandi" of science. He uses this thesis to explain that the "exact AdS theory" of the heavy ion physics has to be supplemented with corrections for it to work. That's true and it's normal in much of physics but 1) it is always preferred in physics when adjustments don't have to be added, and 2) it is not how string theory in the strict sense works. String theory does not allow one to add any continuous corrections to its physics, ever. Everything is completely determined by discrete data (identifying the vacuum solution) and the fact that the adjustments are possible in AdS/heavy ion physics shows that those methods are just string-inspired, not examples of full-fledged string theory.

The next chapter talks about the interactions between physics and mathematics. It starts with the pride of physicists. Physics is the deepest science and physicists are Sheldon. No one else can match them – perhaps with the exception of mathematicians. Some insights and facts about mathematics are picked (perhaps a bit randomly) but the main point to be discussed is the flow of ideas between mathematics and physics.

Monstrous moonshine and mirror symmetry are discussed as the two big examples of string theory's importance in mathematics (excellent topics, except that one can't see the beauty without the mathematical "details") while the next section argues against "cults" and chooses Feynman and Witten as the two "cults" that should be avoided. (I think that Witten's cult is basically non-existent, at least outside 3-4 buildings in the world, and I would say "unfortunately".) Progress since the 1980s wouldn't have taken place if everyone were like Feynman; or everyone were like Witten, Conlon says. I actually disagree with both statements. Diversity is way too overrated here. If you had 1,000 Feynmen in physics in the 1980s, I am pretty sure that they would have found the things in the first superstring revolution, too, aside from many other discoveries. One can approach all these things in Feynman's intuitive way. And Conlon overstates how "just intuitive" Feynman's papers were. He could have made discoveries with easier formulae because he was normally making fundamental discoveries. But he was the first guy to systematically calculate Feynman diagrams – from the path integrals to the Feynman parameterization, the \(bc\) ghosts that Feynman de facto invented, and beyond. This is in no way "just heuristic/intuitive science".

The disadvantage of Feynman in the real world was that there was only one of him. Things are even clearer with Witten. I don't agree with Conlon that Witten is only good at things that are "at the intersection of mathematics and physics". Witten has done lots of phenomenology, too, including things like cosmic strings, SUSY breaking patterns, and detailed calculations on \(G_2\) holonomy manifolds. 100 Wittens and no one else since 1968 would have been enough to find basically everything we know today. People are different and may have different strengths but that doesn't mean that most of these idiosyncrasies are irreplaceable. It can take more effort for someone to find something – than it takes someone else – but science ultimately works for everyone who is sufficiently intelligent and hard-working. To say otherwise means to believe that science depends on some magic unreproducible skills.

Chapter 10 is meant to focus on Conlon's characteristic research topics – stabilization of the moduli in compactifications and axions. You may imagine that one needs to know quite a lot to follow what, say, his papers have contributed. I think it's basically impossible to convey the information in a semipopular book but he tries. The following Chapter 11 is about quantum gravity in string theory – Strominger and Vafa etc. It doesn't get to recent, post-2009 advances, as far as I can see.

Another chapter argues that all styles of doing physics – revolutionaries and hard workers etc. etc. – are important. It may sound OK but in reality, it's not really possible to classify most physicists by their styles into these boxes at all. Whether someone makes a revolution is ultimately not about his styles and emotions, anyway. And as I said, good physicists may "emulate" what others are doing, despite their having different methods and styles.

The following chapter does a pretty good job of replying to some common criticisms of string theory. Then there is another chapter discussing e.g. why loop quantum gravity has remained unsuccessful. I don't think that Conlon describes the status of that proposed theory accurately.

There are a lot of facts and ideas to be found in this book and I obviously agree with a large portion of it. But because of the combination of the "difficult language" and the "shortage of actual explanations with the beef", the target audience isn't clear to me, the text seems to be driven by financial and career considerations in too many places (and many of us find these sociological things too distracting), and it doesn't go into sufficient depth for the reader to actually understand that string theory isn't a conglomerate of randomly invented ideas that people are adding arbitrarily (even though Conlon knows very well, and explicitly writes, that string theory cannot be described in this way). It is not really a book that explains something hard enough (for the layman or the non-expert scientist) and I think that Conlon isn't really an "explainer" in this sense. And I even think that the book reinforces some misconceptions spread by some critics of string theory (e.g. about the impact of AdS/CFT on the status of string theory as a TOE).

You may want to buy the book anyway, to see that it's perhaps not as bad as this text makes it sound.

by Luboš Motl (noreply@blogger.com) at February 04, 2016 02:44 PM

February 02, 2016

Tommaso Dorigo - Scientificblogging

Choose the next topic
Being back in blogging mood, I decided I would make a poll among the most affectionate readers of this column - those who come here to read "blog" pieces and not only the "articles" which are sponsored in the relevant spots on the main web page of the Science20 site.
The idea is that I have a few topics to offer for the next few posts, and I will let you choose which one you are most interested in reading about. Of course, you could also suggest that I write about something different from my proposed topics - but I do not guarantee that I will comply, as I might feel unfit for the requested task. We'll see, though.

Here is a short list of a few things I can spend my time talking about in a post here.

- recent CMS results
- recent ATLAS results

read more

by Tommaso Dorigo at February 02, 2016 08:34 PM

Symmetrybreaking - Fermilab/SLAC

This radioactive life

Radiation is everywhere. The question is: How much?

An overly plump atomic nucleus just can’t keep itself together. 

When an atom has too many protons or neutrons, it’s inherently unstable. Although it might sit tight for a while, eventually it can’t hold itself together any longer and it spontaneously decays, spitting out energy in the form of waves or particles.

The end result is a smaller, more stable nucleus. The spit-out waves and particles are known as radiation, and the process of nuclear decay that produces them is called radioactivity. 

Radiation is a part of life. There are radioactive elements in most of the materials we encounter on a daily basis, which constantly spray us with radiation. For the average American, this adds up to a dose of about 620 millirem of radiation every year. That’s roughly equivalent to 10 abdominal X-rays. 

Scientists use the millirem unit to express how much a radiation dose damages the human body. A person receives 1 millirem during an airline flight from one U.S. coast to the other. 

But where exactly does our annual dose of radiation come from? Looking at sources, we can split the dosage into two nearly equal parts: About half comes from natural background radiation and half comes from manmade sources.

Infographic by Sandbox Studio, Chicago with Ana Kova


Natural background radiation originates from outer space, the atmosphere, the ground, and our own bodies. There’s radon in the air we breathe, radium in the water we drink and miscellaneous radioactive elements in the food we eat. Some of these pass through our bodies without much ado, but some get incorporated into our molecules. When the nuclei eventually decay, our own bodies expose us to tiny doses of radiation. 

“We’re exposed to background radiation whether we like it or not,” says Sayed Rokni, radiation safety officer and radiation protection department head at SLAC National Accelerator Laboratory. “That exists no matter what we do. I wouldn’t advise it, but we could choose not to have dental X-rays. But we can’t choose not to be exposed to terrestrial radiation—radiation that is in the crust of the earth, or from cosmic radiation.”

It’s no reason to panic, though. 

“The human species, and everything around us, has evolved over the ages while receiving radiation from natural sources. It has formed us. So clearly there is an acceptable level of radiation,” Rokni says. 

Any radiation not considered background comes from manmade sources, primarily through diagnostic or therapeutic medical procedures. In the early 1980s, medical procedures accounted for 15 percent of an American’s yearly radiation exposure—they now account for 48 percent. 

“The amount of natural background radiation has stayed the same,” says Don Cossairt, Fermilab radiation protection manager. “But radiation from medical procedures has blossomed, perhaps with corresponding dramatic improvements in treating many diseases and ailments.” 

Growth in the use of medical imaging has raised the average American’s yearly exposure from its 1980s average of 360 millirem to 620 millirem. Today’s annual average is not regarded as harmful to health by any regulatory authority.

While medical procedures make up most of the manmade radiation we receive, about 2 percent of the overall annual dose comes from radiation emitted by some consumer products. Most of these products are probably in your home right now. Simply examining the average kitchen, one finds a cornucopia of items that emit enough radiation to detect it with a Geiger counter, in both manmade consumer products and natural foods. 

Are there Brazil nuts in your pantry? They’re the most radioactive food there is. A Brazil nut tree’s roots reach deep underground, where there’s more radium; the tree absorbs this radioactive element and passes it on to the nuts. Brazil nuts also contain potassium, which occurs in tandem with potassium-40, a naturally occurring radioactive isotope.

Potassium-40 is the most prevalent radioactive element in the food we eat. Potassium-packed bananas are well known for their radioactivity, so much so that a banana’s worth of radioactivity is used as an informal measurement of radiation. It’s called the Banana Equivalent Dose. One BED is equal to 0.01 millirem. A typical chest X-ray is somewhere around 200 to 1000 BED. A fatal dose of radiation is about 50 million BED in one sitting.

Some other potassium-40-containing munchies that emit radiation include carrots, potatoes, lima and kidney beans and red meat. From food and water alone, the average person receives an annual internal dose of about 30 millirem. That’s 3000 bananas!
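
Since every dose in this story is quoted in millirem, the banana conversion is a one-liner. Running the article's own numbers through it:

```python
MILLIREM_PER_BANANA = 0.01  # one Banana Equivalent Dose (BED)

def to_bananas(millirem):
    return millirem / MILLIREM_PER_BANANA

print(to_bananas(30))    # annual internal dose from food and water: 3000 bananas
print(to_bananas(620))   # average total annual US dose: 62,000 bananas
print(to_bananas(1))     # one coast-to-coast flight: 100 bananas
```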

Even the dish off of which you’re eating may be giving you a slight dose of radiation. The glaze of some older ceramics contains uranium, thorium or good ol’ potassium-40 to make it a certain color, especially red-orange pottery made pre-1960s. Likewise, some yellowish and greenish antique glassware contains uranium as a colorant. Though this dinnerware might make a Geiger counter click, it’s still safe to eat with. 

Your smoke detector, which usually hangs silently on the ceiling until its batteries go dead, is radioactive too. That’s how it can save you from a burning building: A small amount of americium-241 in the device allows it to detect when there’s smoke in the air. 

“It’s not dangerous unless you take it out in the garage and beat it up with a hammer to release the radioactivity,” Cossairt says. The World Nuclear Association notes that the americium dioxide found in smoke detectors is insoluble and would “pass through the digestive tract without delivering a significant radiation dose.”

Granite countertops also contain uranium and thorium, which decay into radon gas. Most of the gas gets trapped in the countertop, but some can be released and add a small amount to the radon level in a home—which primarily comes from the soil a structure sits on.

Granite doesn’t just emit radiation inside the home. People living in areas with more granite rock receive an extra boost of radiation per year. 

Yearly radiation exposure varies significantly depending on where you live. People at higher altitudes receive a greater yearly dose of radiation showered from space.

But not to worry if you live in a locale with lots of altitude and granite, like Denver, Colorado. “No health effect due to radiation exposure has ever been correlated with people living at higher altitudes,” Cossairt says. Similarly, no one has noted a correlation between health and the increased dose of radiation from environmental granite rock. 

It doesn’t matter if you’re living at altitude or sea level, in the Rocky Mountains or on Maryland’s Eastern Shore—radiation is everywhere. But annual doses from background and manmade sources aren’t enough to worry about. So enjoy your banana and feel free to grab another handful of Brazil nuts.


Check out our printable poster about radioactivity.

Artwork by Sandbox Studio, Chicago with Ana Kova

by Chris Patrick at February 02, 2016 04:04 PM

The n-Category Cafe

Integral Octonions (Part 12)

guest post by Tim Silverman

“Everything is simpler mod \(p\).”

That is the philosophy of the Mod People; and of all \(p\), the simplest is 2. Washed in a bath of mod 2, that exotic object, the \(\mathrm{E}_8\) lattice, dissolves into a modest orthogonal space, its Weyl group into an orthogonal group, its “large” \(\mathrm{E}_8\) sublattices into some particularly nice subspaces, and the very Leech lattice itself shrinks into a few arrangements of points and lines that would not disgrace the pages of Euclid’s Elements. And when we have sufficiently examined these few bones that have fallen out of their matrix, we can lift them back up to Euclidean space in the most naive manner imaginable, and the full Leech springs out in all its glory like instant mashed potato.

What is this about? In earlier posts in this series, JB and Greg Egan have been calculating and exploring a lot of beautiful Euclidean geometry involving \(\mathrm{E}_8\) and the Leech lattice. Lately, a lot of Fano planes have been popping up in the constructions. Examining these, I thought I caught some glimpses of a more extensive \(\mathbb{F}_2\) geometry; I made a little progress in the comments, but then got completely lost. But there is indeed an extensive \(\mathbb{F}_2\) world in here, parallel to the Euclidean one. I have finally found the key to it in the following fact:

Large \(\mathrm{E}_8\) lattices mod \(2\) are just maximal flats in a \(7\)-dimensional quadric over \(\mathbb{F}_2\).

I’ll spend the first half of the post explaining what that means, and the second half showing how everything else flows from it. We unfortunately bypass (or simply assume in passing) most of the pretty Euclidean geometry; but in exchange we get a smaller, simpler picture which makes a lot of calculations easier, and the \(\mathbb{F}_2\) world seems to lift very cleanly to the Euclidean world, though I haven’t actually proved this or explained why — maybe I shall leave that as an exercise for you, dear readers.

N.B. Just a quick note on scaling conventions before we start. There are two scaling conventions we could use. In one, a ‘shrunken’ \(\mathrm{E}_8\) made of integral octonions, with shortest vectors of length \(1\), contains ‘standard’ sized \(\mathrm{E}_8\) lattices with vectors of minimal length \(\sqrt{2}\), and Wilson’s Leech lattice construction comes out the right size. The other is \(\sqrt{2}\) times larger: a ‘standard’ \(\mathrm{E}_8\) lattice contains “large” \(\mathrm{E}_8\) lattices of minimal length \(2\), but Wilson’s Leech lattice construction gives something \(\sqrt{2}\) times too big. I’ve chosen the latter convention because I find it less confusing: reducing the standard \(\mathrm{E}_8\) mod \(2\) is a well-known thing that people do, and all the Euclidean dot products come out as integers. But it’s as well to bear this in mind when relating this post to the earlier ones.

Projective and polar spaces

I’ll work with projective spaces over \(\mathbb{F}_q\) and try not to suddenly start jumping back and forth between projective spaces and the underlying vector spaces as is my wont, at least not unless it really makes things clearer.

So we have an \(n\)-dimensional projective space over \(\mathbb{F}_q\). We’ll denote this by \(\mathrm{PG}(n,q)\).

The full symmetry group of \(\mathrm{PG}(n,q)\) is \(\mathrm{GL}_{n+1}(q)\), and from that we get subgroups and quotients \(\mathrm{SL}_{n+1}(q)\) (with unit determinant), \(\mathrm{PGL}_{n+1}(q)\) (quotient by the centre) and \(\mathrm{PSL}_{n+1}(q)\) (both). Over \(\mathbb{F}_2\), the determinant is always \(1\) (since that’s the only non-zero scalar) and the centre is trivial, so these groups are all the same.

In projective spaces over \(\mathbb{F}_2\), there are \(3\) points on every line, so we can ‘add’ any two points and get the third point on the line through them. (This is just a projection of the underlying vector space addition.)

In odd characteristic, we get two other families of Lie type by preserving two types of non-degenerate bilinear form: symmetric and skew-symmetric, corresponding to orthogonal and symplectic structures respectively. (Non-degenerate Hermitian forms, defined over \(\mathbb{F}_{q^2}\), also exist and behave similarly.)

Denote the form by \(B(x,y)\). Points \(x\) for which \(B(x,x)=0\) are isotropic. For a symplectic structure all points are isotropic. A form \(B\) such that \(B(x,x)=0\) for all \(x\) is called alternating, and in odd characteristic, but not characteristic \(2\), skew-symmetric and alternating forms are the same thing.

A line spanned by two isotropic points, \(x\) and \(y\), such that \(B(x,y)=1\) is a hyperbolic line. Any space with a non-degenerate bilinear (or Hermitian) form can be decomposed as the orthogonal sum of hyperbolic lines (i.e. as a vector space, decomposed as an orthogonal sum of hyperbolic planes), possibly together with an anisotropic space containing no isotropic points at all. There are no non-empty symplectic anisotropic spaces, so all symplectic spaces are odd-dimensional (projectively — the corresponding vector spaces are even-dimensional).

There are anisotropic orthogonal points and lines (over any finite field including in even characteristic), but all the orthogonal spaces we consider here will be a sum of hyperbolic lines — we say they are of plus type. (The odd-dimensional projective spaces with a residual anisotropic line are of minus type.)

A quadratic form \(Q(x)\) is defined by the conditions

i) \(Q(x+y)=Q(x)+Q(y)+B(x,y)\), where \(B\) is a symmetric bilinear form.

ii) \(Q(\lambda x)=\lambda^2 Q(x)\) for any scalar \(\lambda\).

There are some non-degeneracy conditions I won’t go into.

Obviously, a quadratic form implies a particular symmetric bilinear form, by \(B(x,y)=Q(x+y)-Q(x)-Q(y)\). In odd characteristic, we can go the other way: \(Q(x)=\frac{1}{2}B(x,x)\).
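
Since the whole post turns on reducing \(\mathrm{E}_8\) mod \(2\), here is a quick computational check of condition (i) in exactly that case. The sketch below takes the \(\mathrm{E}_8\) Cartan matrix as Gram matrix (one standard choice of basis); because the lattice is even, \(Q(x) = \frac{1}{2}(x,x) \bmod 2\) is well-defined on \(\mathrm{E}_8/2\mathrm{E}_8\), even though we cannot divide by \(2\) in \(\mathbb{F}_2\) itself:

```python
import numpy as np

# Gram matrix of E8 in a basis of simple roots (the Cartan matrix):
# a chain of seven nodes with an eighth node attached to the fifth.
G = np.array([
    [ 2, -1,  0,  0,  0,  0,  0,  0],
    [-1,  2, -1,  0,  0,  0,  0,  0],
    [ 0, -1,  2, -1,  0,  0,  0,  0],
    [ 0,  0, -1,  2, -1,  0,  0,  0],
    [ 0,  0,  0, -1,  2, -1,  0, -1],
    [ 0,  0,  0,  0, -1,  2, -1,  0],
    [ 0,  0,  0,  0,  0, -1,  2,  0],
    [ 0,  0,  0,  0, -1,  0,  0,  2]])
assert round(np.linalg.det(G)) == 1  # E8 is unimodular

def Q(x):
    """Quadratic form on E8/2E8: half the (even) norm, reduced mod 2."""
    return (x @ G @ x // 2) % 2

def B(x, y):
    """Associated bilinear form: the inner product mod 2."""
    return (x @ G @ y) % 2

rng = np.random.default_rng(0)
for _ in range(1000):
    x, y = rng.integers(0, 2, 8), rng.integers(0, 2, 8)
    assert Q((x + y) % 2) == (Q(x) + Q(y) + B(x, y)) % 2
print("Q(x+y) = Q(x) + Q(y) + B(x,y) holds on E8 mod 2")
```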

We denote the group preserving an orthogonal structure of plus type on an \(n\)-dimensional projective space over \(\mathbb{F}_q\) by \(\mathrm{GO}_{n+1}^+(q)\), by analogy with \(\mathrm{GL}_{n+1}(q)\). Similarly we have \(\mathrm{SO}_{n+1}^+(q)\), \(\mathrm{PGO}_{n+1}^+(q)\) and \(\mathrm{PSO}_{n+1}^+(q)\). However, whereas \(\mathrm{PSL}_n(q)\) is simple apart from \(2\) exceptions, we usually have an index \(2\) subgroup of \(\mathrm{SO}_{n+1}^+(q)\), called \(\Omega_{n+1}^+(q)\), and a corresponding index \(2\) subgroup of \(\mathrm{PSO}_{n+1}^+(q)\), called \(\mathrm{P}\Omega_{n+1}^+(q)\), and it is the latter that is simple. (There is an infinite family of exceptions, where \(\mathrm{PSO}_{n+1}^+(q)\) is simple.)

Symplectic structures are easier — the determinant is automatically \(1\), so we just have \(\mathrm{Sp}_{n+1}(q)\) and \(\mathrm{PSp}_{n+1}(q)\), with the latter being simple except for \(3\) exceptions.

Just as a point with \(B(x,x)=0\) is an isotropic point, so any subspace with \(B\) identically \(0\) on it is an isotropic subspace.

And just as the linear groups act on incidence geometries given by the (‘classical’) projective spaces, so the symplectic and orthogonal groups act on polar spaces, whose points, lines, planes, etc, are just the isotropic points, isotropic lines, isotropic planes, etc given by the bilinear (or Hermitian) form. We denote an orthogonal polar space of plus type on an \(n\)-dimensional projective space over \(\mathbb{F}_q\) by \(\mathrm{Q}_n^+(q)\).

In characteristic \(2\), a lot of this goes wrong, but in a way that can be fixed and mostly turns out the same.

1) Symmetric and skew-symmetric forms are the same thing! There are still distinct orthogonal and symplectic structures and groups, but we can’t use this as the distinction.

2) Alternating and skew-symmetric forms are not the same thing! Alternating forms are all skew-symmetric (aka symmetric) but not vice versa. A symplectic structure is given by an alternating form — and of course this definition works in odd characteristic too.

3) Symmetric bilinear forms are no longer in bijection with quadratic forms: every quadratic form gives a unique symmetric (aka skew-symmetric, and indeed alternating) bilinear form, but an alternating form is compatible with multiple quadratic forms. We use non-degenerate quadratic forms to define orthogonal structures, rather than symmetric bilinear forms — which of course works in odd characteristic too. (Note also from the above that in characteristic \(2\) an orthogonal structure has an associated symplectic structure, which it shares with other orthogonal structures.)

We now have both isotropic subspaces, on which the bilinear form is identically 0, and singular subspaces, on which the quadratic form is identically 0, with the latter being a subset of the former. It is the singular subspaces which go to make up the polar space for the orthogonal structure.

To cover both cases, we’ll refer to these isotropic/singular projective spaces inside the polar spaces as flats.

Everything else is still the same — decomposition into hyperbolic lines and an anisotropic space, plus and minus types, Ω_{n+1}^+(q) inside SO_{n+1}^+(q), polar spaces, etc.

Over 𝔽_2, we have that GO_{n+1}^+(q), SO_{n+1}^+(q), PGO_{n+1}^+(q) and PSO_{n+1}^+(q) are all the same group, as are Ω_{n+1}^+(q) and PΩ_{n+1}^+(q).

The vector space dimension of the maximal flats in a polar space is the polar rank of the space, one of its most important invariants — it’s the number of hyperbolic lines in its orthogonal decomposition.

Q_{2m−1}^+(q) has rank m. The maximal flats fall into two classes. In odd characteristic, the classes are preserved by SO_{2m}^+(q) but interchanged by the elements of GO_{2m}^+(q) with determinant −1. In even characteristic, the classes are preserved by Ω_{2m}^+(q), but interchanged by the elements of GO_{2m}^+(q) outside it.

Finally, I’ll refer to the value of the quadratic form at a point, Q(x), as the norm of x, even though in Euclidean space we’d call it “half the norm-squared”.

Here are some useful facts about Q_{2m−1}^+(q):

1a. The number of points is (q^m − 1)(q^{m−1} + 1)/(q − 1).

1b. The number of maximal flats is ∏_{i=0}^{m−1} (1 + q^i). (Both formulas get a quick numerical check just after fact 1c.)

1c. Two maximal flats of different types must intersect in a flat of odd codimension; two maximal flats of the same type must intersect in a flat of even codimension.
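
Here is the promised quick check of facts 1a and 1b, a small Python sketch of my own that just evaluates the two formulas for the cases we’ll use later:

    # Facts 1a and 1b: counts for the polar space Q_{2m-1}^+(q).
    def num_points(m, q):
        # (q^m - 1) * (q^(m-1) + 1) / (q - 1)
        return (q**m - 1) * (q**(m - 1) + 1) // (q - 1)

    def num_maximal_flats(m, q):
        # product over i = 0 .. m-1 of (1 + q^i)
        result = 1
        for i in range(m):
            result *= 1 + q**i
        return result

    assert num_points(3, 2) == 35 and num_maximal_flats(3, 2) == 30    # Q_5^+(2)
    assert num_points(4, 2) == 135 and num_maximal_flats(4, 2) == 270  # Q_7^+(2)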

Here are two more general facts.

1d. Pick a projective space Π of dimension n. Pick a point p in it. The space whose points are the lines through p, whose lines are the planes through p, etc., with incidence inherited from Π, is a projective space of dimension n − 1.

1e. Pick a polar space Σ of rank m. Pick a point p in it. The space whose points are the lines (i.e. 1-flats) through p, whose lines are the planes (i.e. 2-flats) through p, etc., with incidence inherited from Σ, is a polar space of the same type, of rank m − 1.

The Klein correspondence at breakneck speed

The bivectors of a 4-dimensional vector space constitute a 6-dimensional vector space. Apart from the zero bivector, these fall into two types: degenerate ones, which can be decomposed as the wedge product of two vectors and therefore correspond to planes (or, projectively, lines); and non-degenerate ones, which, by wedging with vectors on each side, give rise to symplectic forms. Wedging two bivectors gives an element of the 1-dimensional space of 4-vectors, and, picking a basis, the single component of this wedge product gives a non-degenerate symmetric bilinear form on the 6-dimensional vector space of bivectors, and hence, in odd characteristic, an orthogonal space, which turns out to be of plus type. It also turns out that this can be carried over to characteristic 2 as well, and gives a correspondence between PG(3,q) and Q_5^+(q), and isomorphisms between their symmetry groups. It is precisely the degenerate bivectors that are the ones of norm 0, and we get the following correspondence:

    Q_5^+(q)               PG(3,q)
    point                  line
    orthogonal points      intersecting lines
    line                   plane pencil
    plane (first family)   point
    plane (second family)  plane

Here, “plane pencil” is all the lines that both go through a particular point and lie in a particular plane: effectively a point on a plane. The two types of plane in Q_5^+(q) are the two families of maximal flats, and they correspond, in PG(3,q), to “all the lines through a particular point” and “all the lines in a particular plane”.
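
Here is how that looks computationally for q = 2, a minimal Python sketch of my own (the quadric is written in the standard Plücker form Q(p) = p01·p23 + p02·p13 + p03·p12, a choice of coordinates rather than anything forced by the discussion):

    from itertools import combinations, product

    # Points of PG(3,2): the nonzero vectors of F_2^4.
    points = [v for v in product((0, 1), repeat=4) if any(v)]

    def pluecker(x, y):
        # Plücker coordinates (p01, p02, p03, p12, p13, p23) of the line
        # through x and y, over F_2.
        return tuple((x[i]*y[j] + x[j]*y[i]) % 2
                     for i, j in combinations(range(4), 2))

    # Each line of PG(3,2) has 3 points, so 3 spanning pairs give one line.
    lines = {pluecker(x, y) for x, y in combinations(points, 2)}

    def klein(p):
        # The Klein quadric: Q(p) = p01*p23 + p02*p13 + p03*p12 over F_2.
        p01, p02, p03, p12, p13, p23 = p
        return (p01*p23 + p02*p13 + p03*p12) % 2

    assert len(lines) == 35                    # the 35 lines of PG(3,2)...
    assert all(klein(p) == 0 for p in lines)   # ...land on the 35 points of Q_5^+(2)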

From fact 1c above, in Q_5^+(q) we have that two maximal flats of different type must either intersect in a line or not intersect at all, corresponding to the fact in PG(3,q) that a point either lies on a plane or doesn’t; while two maximal flats of the same type must intersect in a point, corresponding to the fact in PG(3,q) that any two points lie on a line, and any two planes intersect in a line.

Triality zips past your window

In Q_7^+(q), you may observe from facts 1a and 1b that the following three things are equal in number: points; maximal flats of one type; maximal flats of the other type. This is because these three things are cycled by the triality symmetry.

Counting things over 𝔽_2

Over 𝔽_2, we have the following things:

2a. PG(3,2) has 15 planes, each containing 7 points and 7 lines. It has (dually) 15 points, each contained in 7 lines and 7 planes. It has 35 lines, each containing 3 points and contained in 3 planes.

2b. Q_5^+(2) has 35 points, corresponding to the 35 lines of PG(3,2), and 30 planes, corresponding to the 15 points and 15 planes of PG(3,2). There’s lots and lots of other interesting stuff, but we will ignore it.

2c. Q_7^+(2) has 135 points and 270 3-spaces, i.e. two families of maximal flats containing 135 elements each. A projective 7-space has 255 points, so if we give it an orthogonal structure of plus type, it will have 255 − 135 = 120 points of norm 1.
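
Fact 2c is easy to confirm by brute force. A Python sketch, taking as a concrete plus-type form the sum of four hyperbolic planes, Q(x) = x1·x2 + x3·x4 + x5·x6 + x7·x8 (my choice of coordinates):

    from itertools import product

    def Q(x):
        # A plus-type quadratic form on F_2^8: four hyperbolic planes.
        return (x[0]*x[1] + x[2]*x[3] + x[4]*x[5] + x[6]*x[7]) % 2

    vectors = [v for v in product((0, 1), repeat=8) if any(v)]   # 255 projective points
    singular = [v for v in vectors if Q(v) == 0]

    assert len(singular) == 135                  # points of Q_7^+(2)
    assert len(vectors) - len(singular) == 120   # points of norm 1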

E_8 mod 2

Now we move on to the second part.

We’ll coordinatise the E_8 lattice so that the coordinates of its points are of the following types:

a) All integer, summing to an even number

b) All integer + 1/2, summing to an odd number.

Then the roots are of the following types:

a) All permutations of (±1,±1,0,0,0,0,0,0)

b) All points like (±1/2,±1/2,±1/2,±1/2,±1/2,±1/2,±1/2,±1/2) with an odd number of minus signs.

We now quotient E_8 by 2E_8. The elements of the quotient can be represented by the following:

a) All coordinates are 1 or 0, an even number of each.

b) All coordinates are ±1/2, with either 1 or 3 minus signs.

c) Take an element of type b and put a star after it. The meaning of this is: you can replace any coordinate 1/2 with −3/2, or any coordinate −1/2 with 3/2, to get an E_8 lattice element representing this element of E_8/2E_8.

This is an 8-dimensional vector space over 𝔽_2.

Now we put the following quadratic form on this space: Q(x) is half the Euclidean norm-squared, mod 2. This gives rise to the following bilinear form: the Euclidean dot product mod 2. This turns out to be a perfectly good non-degenerate quadratic form of plus type over 𝔽_2.

There are 120 elements of norm 1, and these correspond to roots of E_8, with 2 roots per element (related by switching the sign of all coordinates).

a) Elements of shape (1,1,0,0,0,0,0,0) are already roots in this form.

b) Elements of shape (0,0,1,1,1,1,1,1) correspond to the roots obtained by taking the complement (replacing all 1s by 0s and vice versa) and then changing the sign of one of the 1s.

c) Elements in which all coordinates are ±1/2, with either 1 or 3 minus signs, are already roots, and by switching all the signs we get the half-integer roots with 5 or 7 minus signs.

There are 135 non-zero elements of norm 0, and these all correspond to lattice points in shell 2, with 16 lattice points per element of the vector space.

a) There are 70 elements of shape (1,1,1,1,0,0,0,0). We get 8 lattice points by changing an even number of signs (including 0). We get another 8 lattice points by taking the complement and then changing an odd number of signs.

b) There is 1 element of shape (1,1,1,1,1,1,1,1). This corresponds to the 16 lattice points of shape (±2,0,0,0,0,0,0,0).

c) There are 64 elements like (±1/2,±1/2,±1/2,±1/2,±1/2,±1/2,±1/2,±1/2)*, with 1 or 3 minus signs. We get 8 actual lattice points by replacing ±1/2 by ∓3/2 in one coordinate, and another 8 by changing the signs of all coordinates.

This accounts for all 16 · 135 = 2160 points in shell 2.
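
These shell counts can be double-checked directly. A Python sketch of mine that enumerates the first two shells of the lattice as coordinatised above, using doubled coordinates w = 2x so that everything stays an integer:

    from itertools import product

    def e8_shell(xx):
        # Lattice points x with x.x == xx, in doubled coordinates w = 2x.
        target, pts = 4 * xx, []
        # all-integer points with even sum: even doubled coordinates
        for w in product((-4, -2, 0, 2, 4), repeat=8):
            if sum(c*c for c in w) == target and (sum(w) // 2) % 2 == 0:
                pts.append(w)
        # half-integer points with odd sum: odd doubled coordinates
        for w in product((-3, -1, 1, 3), repeat=8):
            if sum(c*c for c in w) == target and (sum(w) // 2) % 2 == 1:
                pts.append(w)
        return pts

    assert len(e8_shell(2)) == 240    # the roots
    assert len(e8_shell(4)) == 2160   # shell 2: 16 * 135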

Isotropic:

    shape                                          number
    (1,1,1,1,1,1,1,1)                              1
    (1,1,1,1,0,0,0,0)                              70
    (±1/2,±1/2,±1/2,±1/2,±1/2,±1/2,±1/2,±1/2)*     64
    total                                          135

Anisotropic:

    shape                                          number
    (1,1,1,1,1,1,0,0)                              28
    (1,1,0,0,0,0,0,0)                              28
    (±1/2,±1/2,±1/2,±1/2,±1/2,±1/2,±1/2,±1/2)      64
    total                                          120
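
Both tables can be reproduced by machine from the 256 coset representatives listed earlier (types a, b and c). A Python sketch of mine, again in doubled coordinates, with a starred element lifted by turning its first ±1/2 into ∓3/2:

    from itertools import product

    def norm(w):
        # Q(x) = (x.x)/2 mod 2; in doubled coordinates w = 2x this is (w.w)/8 mod 2.
        return (sum(c*c for c in w) // 8) % 2

    reps = []
    for v in product((0, 1), repeat=8):            # type a: 0/1, even weight
        if sum(v) % 2 == 0:
            reps.append(tuple(2*c for c in v))
    for s in product((1, -1), repeat=8):           # types b and c
        if s.count(-1) in (1, 3):
            reps.append(s)                         # b: the +-1/2 point itself
            reps.append((-3*s[0],) + s[1:])        # c: starred, one +-1/2 -> -+3/2

    assert len(reps) == 256
    assert sum(1 for w in reps if any(w) and norm(w) == 0) == 135   # isotropic
    assert sum(1 for w in reps if norm(w) == 1) == 120              # anisotropic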

Since the quadratic form over 𝔽_2 comes from the quadratic form in Euclidean space, it is preserved by the Weyl group W(E_8). In fact the homomorphism W(E_8) → GO_8^+(2) is onto, although (contrary to what I said in an earlier comment) it is a double cover — the element of W(E_8) that reverses the sign of all coordinates is a (in fact, the) non-trivial element of the kernel.

Large E_8 lattices

Pick a Fano plane structure on a set of seven points.

Here is a large E_8 containing (2,0,0,0,0,0,0,0):

(where 1 ≤ i, j, k, p, q, r, s ≤ 7)

±2e_i

±e_0 ± e_i ± e_j ± e_k, where i, j, k lie on a line in the Fano plane

±e_p ± e_q ± e_r ± e_s, where p, q, r, s lie off a line in the Fano plane.

Reduced to E_8 mod 2, these come to:

i) (1,1,1,1,1,1,1,1)

ii) e_0 + e_i + e_j + e_k, where i, j, k lie on a line in the Fano plane. E.g. (1,1,1,0,1,0,0,0).

iii) e_p + e_q + e_r + e_s, where p, q, r, s lie off a line in the Fano plane. E.g. (0,0,0,1,0,1,1,1).

Each of these corresponds to 16 of the large E_8 roots.

Some notes on these points:

1) They’re all isotropic, since they have a multiple of 4 non-zero entries.

2) They’re mutually orthogonal.

  a) Elements of types ii and iii are all orthogonal to (1,1,1,1,1,1,1,1), because they have an even number of ones (like all all-integer elements).

  b) Two elements of type ii overlap in two places: e_0 and the point of the Fano plane that they share.

  c) If an element x of type ii and an element y of type iii are mutual complements, obviously they have no overlap. Otherwise, the complement of y is an element of type ii, so x overlaps with it in exactly two places; hence x overlaps with y itself in the other two non-zero places of x.

  d) From c, given two elements of type iii, one will overlap with the complement of the other in two places, hence (by the argument of c) will overlap with the other element itself in two places.

3) Adjoining the zero vector, they give a set closed under addition.

The rule for addition of all-integer elements is reasonably straightforward: if they are orthogonal, then treat the 1s and 0s as bits and add mod 2. If they aren’t orthogonal, then do the same, then take the complement of the answer.

  a) Adding (1,1,1,1,1,1,1,1) to any of the others just gives the complement, which is a member of the set.

  b) Adding two elements of type ii, we set to 0 the e_0 component and the component corresponding to the point of intersection in the Fano plane, leaving the 4 components where they don’t overlap, which are just the complement of the third line of the Fano plane through their point of intersection, and is hence a member of the set.

  c) Each element of type iii is the sum of the element of type i and an element of type ii, hence is covered implicitly by cases a and b.

4) There are 15 elements of the set.

  a) There is (1,1,1,1,1,1,1,1).

  b) There are 7 corresponding to lines of the Fano plane.

  c) There are 7 corresponding to the complements of lines of the Fano plane.

From the above, these 15 elements form a maximal flat of Q_7^+(2). (That is, 15 points projectively, forming a projective 3-space in a projective 7-space.)
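
All of points 1–4 can be checked mechanically for a concrete choice of Fano plane. A Python sketch of mine (the line set below is just one standard labelling of the Fano plane on {1,…,7}; coordinate 0 plays the role of e_0):

    from itertools import combinations

    fano_lines = [{1,2,3}, {1,4,5}, {1,6,7}, {2,4,6}, {2,5,7}, {3,4,7}, {3,5,6}]

    def vec(support):
        return tuple(1 if i in support else 0 for i in range(8))

    flat = {vec(range(8))}                                          # type i
    flat |= {vec({0} | line) for line in fano_lines}                # type ii
    flat |= {vec(set(range(1, 8)) - line) for line in fano_lines}   # type iii

    Q = lambda x: (sum(x) // 2) % 2        # norm of a 0/1 class: weight/2 mod 2
    B = lambda x, y: sum(a*b for a, b in zip(x, y)) % 2   # dot product mod 2
    add = lambda x, y: tuple((a + b) % 2 for a, b in zip(x, y))

    assert len(flat) == 15
    assert all(Q(x) == 0 for x in flat)                              # all isotropic
    assert all(B(x, y) == 0 for x, y in combinations(flat, 2))       # mutually orthogonal
    assert all(add(x, y) in flat for x, y in combinations(flat, 2))  # closed under addition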

That a large E_8 lattice projects to a flat is straightforward:

First, as a lattice it’s closed under addition over ℤ, so it should project to a subspace over 𝔽_2.

Second, since the cosine of the angle between two roots of E_8 is always a multiple of 1/2, and the points in the second shell have Euclidean length 2, the dot product of two large E_8 roots must always be an even integer. Also, the large E_8 roots project to norm-0 points. So all points of the large E_8 should project to norm-0 points.

It’s not instantly obvious to me that a large E_8 should project to a maximal flat, but it clearly does.

So I’ll assume each large E_8 corresponds to a maximal flat, and more generally that everything I’m going to talk about over 𝔽_2 lifts faithfully to Euclidean space, which seems plausible (and works)! But I haven’t proved it. Anyway, assuming this, a bunch of stuff follows.

Total number of large E_8 lattices

We immediately know there are 270 large E_8 lattices, because there are 270 maximal flats in Q_7^+(2), either from the formula ∏_{i=0}^{m−1}(1 + q^i), or immediately from triality and the fact that there are 135 points in Q_7^+(2).

Number of large E_8 root systems sharing a given point

We can now bring to bear some more general theory. How many large E_8 root-sets share a point? Let us project this down and instead ask: how many maximal flats share a given point?

Recall fact 1e:

1e. Pick a polar space Σ of rank m. Pick a point p in it. The space whose points are the lines (i.e. 1-flats) through p, whose lines are the planes (i.e. 2-flats) through p, etc., with incidence inherited from Σ, is a polar space of the same type, of rank m − 1.

So pick a point p in Q_7^+(2). The space of all flats containing p is isomorphic to Q_5^+(2). The maximal flats containing p in Q_7^+(2) correspond to all the maximal flats of Q_5^+(2), of which there are 30. So there are 30 maximal flats of Q_7^+(2) containing p, and hence 30 large E_8 lattices containing a given point.
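
The count of 30 can also be verified directly, by enumerating the planes of Q_5^+(2) by brute force (a Python sketch of mine, using the plus-type form x1·x2 + x3·x4 + x5·x6 as a concrete choice):

    from itertools import combinations, product

    Q = lambda x: (x[0]*x[1] + x[2]*x[3] + x[4]*x[5]) % 2
    add = lambda x, y: tuple((a + b) % 2 for a, b in zip(x, y))
    B = lambda x, y: (Q(add(x, y)) + Q(x) + Q(y)) % 2   # polarisation of Q

    singular = [v for v in product((0, 1), repeat=6) if any(v) and Q(v) == 0]
    assert len(singular) == 35

    def span(vs):
        s = {(0,) * 6}
        for v in vs:
            s |= {add(v, w) for w in s}
        return frozenset(s)

    # Planes = 3-dimensional totally singular subspaces: spanned by independent,
    # pairwise-orthogonal singular vectors.
    planes = {span(t) for t in combinations(singular, 3)
              if all(B(x, y) == 0 for x, y in combinations(t, 2))}
    assert sum(1 for p in planes if len(p) == 8) == 30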

We see this if we fix (1,1,1,1,1,1,1,1): the maximal flats containing it correspond to the 30 ways of putting a Fano plane structure on 7 points. Via the Klein correspondence, I guess this is a way to show that the 30 Fano plane structures correspond to the points and planes of PG(3,2).

Number of large E_8 root systems disjoint from a given large E_8 root system

Now assume that large E_8 lattices with non-intersecting sets of roots correspond to non-intersecting maximal flats. The intersections of maximal flats obey rule 1c:

1c. Two maximal flats of different types must intersect in a flat of odd codimension; two maximal flats of the same type must intersect in a flat of even codimension.

So two 3-flats of opposite type must intersect in a plane or a point; if they are of the same type, they must intersect in a line or not at all (the empty set having dimension −1).

We want to count the dimension-(−1) intersections, but it’s easier to count the dimension-1 intersections and subtract from the total.

So, given a 3-flat, how many other 3-flats intersect it in a line?

Pick a point p in Q_7^+(2). The 3-flats sharing that point correspond to the planes of Q_5^+(2). Then the set of 3-flats sharing just a line through p with our given 3-flat corresponds to the set of planes of Q_5^+(2) sharing a single point with a given plane. By what was said above, this is all the other planes of the same type (there’s no other dimension these intersections can have). There are 14 of these (15 planes minus the given one).

So, given a point in the 3-flat, there are 14 other 3-flats sharing a line (and no more) which passes through the point. There are 15 points in the 3-flat, but on the other hand there are 3 points in a line, giving (14 · 15)/3 = 70 3-flats sharing a line (and no more) with a given 3-flat.

But there are a total of 135 3-flats of a given type. If 1 of them is our given 3-flat, and 70 of them intersect it in a line, then 135 − 1 − 70 = 64 don’t intersect it at all. So there should be 64 large E_8 lattices whose roots don’t meet the roots of a given large E_8 lattice.

Other numbers of intersecting root systems

We can also look at the intersections of large E_8 root systems with large E_8 root systems of the opposite type. What about the intersections of two 3-flats in a plane? If we focus just on planes passing through a particular point, this corresponds, in Q_5^+(2), to planes intersecting in a line. There are 7 planes intersecting a given plane in a line (from the Klein correspondence — they correspond to the seven points in a plane, or the seven planes containing a point, of PG(3,2)). So there are 7 3-flats of Q_7^+(2) which intersect a given 3-flat in a plane containing a given point. There are 15 points to choose from, but 7 points in a plane, meaning that there are (7 · 15)/7 = 15 3-flats intersecting a given 3-flat in a plane. A plane has 7 points, so translating that to E_8 lattices should give 7 · 16 = 112 shared roots.

That leaves 135 − 15 = 120 3-flats intersecting a given 3-flat in a single point, corresponding to 16 shared roots.

    intersection dim.    number    same type
    2                    15        no
    1                    70        yes
    0                    120       no
    −1                   64        yes
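
This whole table can be verified by brute force: enumerate the 270 maximal flats of Q_7^+(2) as spans of quadruples of independent, pairwise-orthogonal singular vectors, then tally how one fixed flat meets all the others. A Python sketch of mine, using the same concrete plus-type form as before:

    from collections import Counter
    from itertools import product

    Q = lambda x: (x[0]*x[1] + x[2]*x[3] + x[4]*x[5] + x[6]*x[7]) % 2
    add = lambda x, y: tuple((a + b) % 2 for a, b in zip(x, y))
    B = lambda x, y: (Q(add(x, y)) + Q(x) + Q(y)) % 2

    singular = [v for v in product((0, 1), repeat=8) if any(v) and Q(v) == 0]

    def span(vs):
        s = {(0,) * 8}
        for v in vs:
            s |= {add(v, w) for w in s}
        return frozenset(s)

    flats = set()
    for i, vi in enumerate(singular):
        perp1 = [w for w in singular[i+1:] if B(vi, w) == 0]
        for j, vj in enumerate(perp1):
            s2 = span([vi, vj])
            perp2 = [w for w in perp1[j+1:] if B(vj, w) == 0 and w not in s2]
            for k, vk in enumerate(perp2):
                s3 = span([vi, vj, vk])
                for vl in perp2[k+1:]:
                    if B(vk, vl) == 0 and vl not in s3:
                        flats.add(span([vi, vj, vk, vl]))
    assert len(flats) == 270

    f0 = next(iter(flats))
    profile = Counter(len(f0 & f) for f in flats if f != f0)
    # subspace intersection sizes 8, 4, 2, 1 <-> projective dimensions 2, 1, 0, -1
    assert profile == Counter({8: 15, 4: 70, 2: 120, 1: 64})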

A couple of points here related to triality. Under triality, one type of maximal flat gets sent to the other type, and the other type gets sent to singular points (0-flats). The incidence relation of “intersecting in a plane” gets sent to ordinary incidence of a point with a flat. So the fact that there are 15 maximal flats that intersect a given maximal flat in a plane is a reflection of the fact that there are 15 points in a maximal flat (or, dually, 15 maximal flats of a given type containing a given point).

The intersection of two maximal flats of the same type translates into a relation between two singular points. Just from the numbers, we’d expect “intersection in a line” to translate into “orthogonal to”, and “disjoint” to translate into “not orthogonal to”.

In that case, a pair of maximal flats intersecting in a (flat) line translates to 2 mutually orthogonal singular points — whose span is a flat line. Which makes sense, because under triality, 1-flats transform to 1-flats, reflecting the fact that the central point of the D_4 diagram (representing lines) is sent to itself under triality.

In that case, two disjoint maximal flats translate to a pair of non-orthogonal singular points, defining a hyperbolic line.

Fixing a hyperbolic line (pointwise) obviously reduces the rank of the polar space by 1, picking out a GO_6^+(2) subgroup of GO_8^+(2). By the Klein correspondence, Ω_6^+(2) is isomorphic to PSL_4(2), which is just the automorphism group of PG(3,2) — i.e., here, the automorphism group of a maximal flat (GO_6^+(2) is twice as big). So the joint stabiliser of two disjoint maximal flats is essentially just the automorphisms of one of them, which force corresponding automorphisms of the other. This group GO_6^+(2) is also isomorphic to the symmetric group S_8, giving all permutations of the coordinates (of the E_8 lattice).

(My guess would be that the actions of GL_4(2) on the two maximal flats would be related by an outer automorphism of GL_4(2), in which the action on the points of one flat would match an action on the planes of the other, and vice versa, preserving the orthogonality relations coming from the symplectic structure implied by the orthogonal structure — i.e. the alternating form implied by the quadratic form.)

Nearest neighbours

We see this “non-orthogonal singular points” ↔ “disjoint maximal flats” pattern echoed when we look at nearest neighbours.

Nearest neighbours in the second shell of the E_8 lattice are separated from each other by an angle of cos⁻¹(3/4), so they have a mutual dot product of 3, and hence are non-orthogonal over 𝔽_2.

Let us choose a fixed point (2,0,0,0,0,0,0,0) in the second shell of E_8. This has as our chosen representative (1,1,1,1,1,1,1,1) in our version of PG(7,2), which has the convenient property that it is orthogonal to the all-integer points, and non-orthogonal to the half-integer points. The half-integer points in the second shell are just those that we write as (±1/2,±1/2,±1/2,±1/2,±1/2,±1/2,±1/2,±1/2)* in our notation, where the * means that we should replace some ±1/2 by ∓3/2 to get a corresponding element in the second shell of the E_8 lattice, and where we require 1 or 3 minus signs in the notation, each notation element corresponding to pairs of lattice points with opposite signs in all coordinates.

Now, since each reduced isotropic point represents 16 points of the second shell, merely saying that two reduced points have a dot product of 1 is not enough to pin down actual nearest neighbours.

But very conveniently, the sets of 16 are formed in parallel ways for the particular setup we have chosen. Namely, lifting (1,1,1,1,1,1,1,1) to a second-shell element, we can choose to put the ±2 in each of the 8 coordinates, with positive or negative sign; and lifting an element of the form (±1/2,±1/2,±1/2,±1/2,±1/2,±1/2,±1/2,±1/2)* to a second-shell element, we can choose to put the ±3/2 in each of the 8 coordinates, with positive or negative sign.

So we can line up our conventions, and choose, e.g., specifically (+2,0,0,0,0,0,0,0), and choose neighbours of the form (+3/2,±1/2,±1/2,±1/2,±1/2,±1/2,±1/2,±1/2), with an even number of minus signs.

This tells us we have 64 nearest neighbours, corresponding to the 64 isotropic points of half-integer form. Let us call this set of points T.
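
A quick check of the 64, reusing the shell enumeration from before (a sketch; dot products are computed in doubled coordinates, where x·y = 3 becomes w·u = 12):

    from itertools import product

    shell2 = []
    for w in product((-4, -2, 0, 2, 4), repeat=8):     # all-integer points
        if sum(c*c for c in w) == 16 and (sum(w) // 2) % 2 == 0:
            shell2.append(w)
    for w in product((-3, -1, 1, 3), repeat=8):        # half-integer points
        if sum(c*c for c in w) == 16 and (sum(w) // 2) % 2 == 1:
            shell2.append(w)

    u = (4, 0, 0, 0, 0, 0, 0, 0)                       # (2,0,...,0), doubled
    neighbours = [w for w in shell2
                  if sum(a*b for a, b in zip(w, u)) == 12]
    assert len(shell2) == 2160 and len(neighbours) == 64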

Now pick one of those 64 isotropic points and call it p. It lies, as we showed earlier, in 30 maximal flats, corresponding to the 30 plane flats of Q_5^+(2), and we would like to understand the intersections of these flats with T: that is, those nearest neighbours which belong to each large E_8 lattice.

In any maximal flat, i.e. any 3-flat, containing p, there will be 7 lines passing through p, each with 2 other points on it, totalling 14 points which, together with p itself, form the 15 points of a copy of PG(3,2).

Now, the sum of two all-integer points is an all-integer point, but the sum of two half-integer points is also an all-integer point. So of the two other points on each of those lines, one will be half-integer and one all-integer. So there will be 7 half-integer points in addition to p itself; i.e. the maximal flat will meet T in 8 points; hence the corresponding large E_8 lattice will contain 8 of the nearest neighbours of (2,0,0,0,0,0,0,0).

Also, because the sum of two half-integer points is not a half-integer point, no <semantics>3<annotation encoding="application/x-tex">3</annotation></semantics> of those <semantics>8<annotation encoding="application/x-tex">8</annotation></semantics> points will lie on a line.

But the only way that you can get 8 points in a 3-space such that no 3 of them lie on a line of the space is if they are the 8 points that do not lie on a plane of the space. Hence the other 7 points — the ones lying in the all-integer subspace — must form a Fano plane.
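This too can be checked by brute force. Here is a small Python verification (my own, encoding the points of \mathrm{PG}(3,2) as the nonzero vectors of \mathbb{F}_2^4, i.e. as 4-bit integers) that the 8-point sets with no 3 collinear are exactly the complements of the 15 planes:

from itertools import combinations

points = list(range(1, 16))  # nonzero vectors of F_2^4, as 4-bit integers

# lines of PG(3,2): {a, b, a XOR b} for distinct nonzero a, b
lines = {frozenset((a, b, a ^ b)) for a, b in combinations(points, 2)}
assert len(lines) == 35

def no_three_collinear(s):
    return not any(line <= s for line in lines)

caps = {frozenset(c) for c in combinations(points, 8)
        if no_three_collinear(frozenset(c))}

# planes of PG(3,2): kernels of the 15 nonzero linear functionals
def dot(u, v):
    return bin(u & v).count("1") % 2

planes = [frozenset(q for q in points if dot(q, f) == 0) for f in points]
assert caps == {frozenset(points) - pl for pl in planes}
print(len(caps), "such 8-point sets: exactly the plane complements")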

So we have the following: inside the projective 7-space of lattice elements mod 2, we have the projective 6-space of all-integer elements, and inside there we have the 5-space of all-integer elements orthogonal to p, and inside there we have a polar space isomorphic to \mathrm{Q}_5^+(2), and in there we have 30 planes. And adding p to each element of one of those planes gives the 7 elements which accompany p in the intersection of the isotropic half-integer points with the corresponding 3-flat, which lift to the nearest neighbours of \left(2,0,0,0,0,0,0,0\right) lying in the corresponding large \mathrm{E}_8 lattice.

by john (baez@math.ucr.edu) at February 02, 2016 04:27 AM

John Baez - Azimuth

Corelations in Network Theory

Category theory reduces a large chunk of math to the clever manipulation of arrows. One of the fun things about this is that you can often take a familiar mathematical construction, think of it category-theoretically, and just turn around all the arrows to get something new and interesting!

In math we love functions. If we have a function

f: X \to Y

we can formally turn around the arrow to think of f as something going from Y back to X. But this something is usually not a function: it’s called a ‘cofunction’. A cofunction from Y to X is simply a function from X to Y.

Cofunctions are somewhat interesting, but they’re really just functions viewed through a looking glass, so they don’t give much new—at least, not by themselves.

The game gets more interesting if we think of functions and cofunctions as special sorts of relations. A relation from X to Y is a subset

R \subseteq X \times Y

It’s a function when for each x \in X there’s a unique y \in Y with (x,y) \in R. It’s a cofunction when for each y \in Y there’s a unique x \in X with (x,y) \in R.

Just as we can compose functions, we can compose relations. Relations have certain advantages over functions: for example, we can ‘turn around’ any relation R from X to Y and get a relation R^\dagger from Y to X:

R^\dagger = \{(y,x) : \; (x,y) \in R \}

If we turn around a function we get a cofunction, and vice versa. But we can also do other fun things: for example, since both functions and cofunctions are relations, we can compose a function and a cofunction and get a relation.
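Here is a tiny Python sketch (mine, purely illustrative) of relations as sets of pairs, together with composition, the dagger, and the function/cofunction tests:

def compose(R, S):
    """Composite of relations R ⊆ X×Y and S ⊆ Y×Z, a relation ⊆ X×Z."""
    return {(x, z) for (x, y1) in R for (y2, z) in S if y1 == y2}

def dagger(R):
    """Turn a relation around: R† = {(y, x) : (x, y) ∈ R}."""
    return {(y, x) for (x, y) in R}

def is_function(R, domain):
    """Each element of the domain is related to exactly one thing."""
    return all(len({b for (a, b) in R if a == x}) == 1 for x in domain)

def is_cofunction(R, codomain):
    """A cofunction is a relation whose dagger is a function."""
    return is_function(dagger(R), codomain)

X, Y = {1, 2}, {"a", "b"}
f = {(1, "a"), (2, "a")}                # a function from X to Y
assert is_function(f, X) and not is_cofunction(f, Y)
assert is_cofunction(dagger(f), X)      # its dagger is a cofunction
# composing a function with a cofunction gives a mere relation:
assert compose(f, dagger(f)) == {(1, 1), (1, 2), (2, 1), (2, 2)}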

Of course, relations also have certain disadvantages compared to functions. But it’s utterly clear by now that the category \mathrm{FinRel}, where the objects are finite sets and the morphisms are relations, is very important.

So far, so good. But what happens if we take the definition of ‘relation’ and turn all the arrows around?

There are actually several things I could mean by this question, some more interesting than others. But one of them gives a very interesting new concept: the concept of ‘corelation’. And two of my students have just written a very nice paper on corelations:

• Brandon Coya and Brendan Fong, Corelations are the prop for extraspecial commutative Frobenius monoids.

Here’s why this paper is important for network theory: corelations between finite sets are exactly what we need to describe electrical circuits made of ideal conductive wires! A corelation from a finite set X to a finite set Y can be drawn this way:

I have drawn more wires than strictly necessary: I’ve drawn a wire between two points whenever I want current to be able to flow between them. But there’s a reason I did this: a corelation from X to Y simply tells us when current can flow from one point in either of these sets to any other point in these sets.

Of course circuits made solely of conductive wires are not very exciting for electrical engineers. But in an earlier paper, Brendan introduced corelations as an important stepping-stone toward more general circuits:

• John Baez and Brendan Fong, A compositional framework for passive linear circuits. (Blog article here.)

The key point is simply that you use conductive wires to connect resistors, inductors, capacitors, batteries and the like and build interesting circuits—so if you don’t fully understand the math of conductive wires, you’re limited in your ability to understand circuits in general!

In their new paper, Brendan teamed up with Brandon Coya, and they figured out all the rules obeyed by the category \mathrm{FinCorel}, where the objects are finite sets and the morphisms are corelations. I’ll explain these rules later.

This sort of analysis had previously been done for \mathrm{FinRel}, and it turns out there’s a beautiful analogy between the two cases! Here is a chart displaying the analogy:

Spans | Cospans
extra bicommutative bimonoids | special commutative Frobenius monoids
Relations | Corelations
extraspecial bicommutative bimonoids | extraspecial commutative Frobenius monoids

I’m sure this will be cryptic to the nonmathematicians reading this, and even many mathematicians—but the paper explains what’s going on here.

I’ll actually say what an ‘extraspecial commutative Frobenius monoid’ is later in this post. This is a terse way of listing all the rules obeyed by corelations between finite sets—and thus, all the rules obeyed by conductive wires.

But first, let’s talk about something simpler.

What is a corelation?

Just as we can define functions as relations of a special sort, we can also define relations in terms of functions. A relation from X to Y is a subset

R \subseteq X \times Y

but we can think of this as an equivalence class of one-to-one functions

i: R \to X \times Y

Why an equivalence class? The image of i is our desired subset of X \times Y. The set R here could be replaced by any isomorphic set; its only role is to provide ‘names’ for the elements of X \times Y that are in the image of i.

Now we have a relation described as an arrow, or really an equivalence class of arrows. Next, let’s turn the arrow around!

There are different things I might mean by that, but we want to do it cleverly. When we turn arrows around, the concept of product (for example, cartesian product X \times Y of sets) turns into the concept of sum (for example, disjoint union X + Y of sets). Similarly, the concept of monomorphism (such as a one-to-one function) turns into the concept of epimorphism (such as an onto function). If you don’t believe me, click on the links!

So, we should define a corelation from a set X to a set Y to be an equivalence class of onto functions

p: X + Y \to C

Why an equivalence class? The set C here could be replaced by any isomorphic set; its only role is to provide ‘names’ for the sets of elements of X + Y that get mapped to the same thing via p.

In simpler terms, a corelation from a set X to a set Y is just a partition of the disjoint union X + Y. So, it looks like this:

If we like, we can then draw a line connecting any two points that lie in the same part of the partition:

These lines determine the corelation, so we can also draw a corelation this way:

This is why corelations describe circuits made solely of wires!
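To make this concrete, here is a short Python sketch (my own encoding, not code from the paper) of corelations as partitions of a disjoint union. Composing two corelations glues them along the middle set with a union-find structure and then forgets the middle elements:

from collections import defaultdict

def compose_corelations(alpha, beta):
    """Compose corelations given as partitions (lists of blocks).

    alpha partitions X + Y, with elements tagged ('X', x) or ('Y', y);
    beta partitions Y + Z, with tags 'Y' and 'Z'.  The composite
    partitions X + Z: glue along the shared Y, then drop the Y-elements.
    """
    parent = {}

    def find(a):
        parent.setdefault(a, a)
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    for block in alpha + beta:              # merge whatever each block identifies
        first, *rest = block
        for e in rest:
            parent[find(e)] = find(first)

    classes = defaultdict(set)              # restrict to the outer sets X and Z
    for block in alpha + beta:
        for e in block:
            if e[0] in ("X", "Z"):
                classes[find(e)].add(e)
    return list(classes.values())

# one wire runs from X1 through Y1 to Z1; Y2 is wired to Z2 but not to X
alpha = [{("X", 1), ("Y", 1)}, {("Y", 2)}]
beta = [{("Y", 1), ("Z", 1)}, {("Y", 2), ("Z", 2)}]
print(compose_corelations(alpha, beta))
# -> [{('X', 1), ('Z', 1)}, {('Z', 2)}] (in some order)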

The rules governing corelations

The main result in Brandon and Brendan’s paper is that \mathrm{FinCorel} is equivalent to the PROP for extraspecial commutative Frobenius monoids. That’s a terse way of stating the laws governing \mathrm{FinCorel}.

Let me just show you the most important laws. In each of these laws I’ll draw two circuits made of wires, and write an equals sign asserting that they give the same corelation from a set X to a set Y. The inputs X of each circuit are on top, and the outputs Y are at the bottom. I’ll draw 3-way junctions as little triangles, but don’t worry about that. When we compose two corelations we may get a wire left in mid-air, not connected to the inputs or outputs. We draw the end of such a wire as a little circle.

There are some laws called the ‘commutative monoid’ laws:

and an upside-down version called the ‘cocommutative comonoid’ laws:

Then we have ‘Frobenius laws’:

and finally we have the ‘special’ and ‘extra’ laws:

All other laws can be derived from these in a systematic way.
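Since the pictures may not survive syndication, here are the same laws in algebraic form, in my own paraphrase of the standard definitions: write \mu, \eta, \delta, \epsilon for the multiplication, unit, comultiplication and counit on an object A, \sigma for the symmetry, and suppress associators and unitors. Then:

\mu(\mu \otimes 1_A) = \mu(1_A \otimes \mu), \quad \mu(\eta \otimes 1_A) = 1_A, \quad \mu\sigma = \mu \qquad (commutative monoid laws)

(\delta \otimes 1_A)\delta = (1_A \otimes \delta)\delta, \quad (\epsilon \otimes 1_A)\delta = 1_A, \quad \sigma\delta = \delta \qquad (cocommutative comonoid laws)

(1_A \otimes \mu)(\delta \otimes 1_A) = \delta\mu = (\mu \otimes 1_A)(1_A \otimes \delta) \qquad (Frobenius laws)

\mu\delta = 1_A \qquad (special law)

\epsilon\eta = 1_I \qquad (extra law)

The extra law is the algebraic counterpart of discarding a floating wire: the unit followed by the counit is the identity on the monoidal unit I.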

Commutative Frobenius monoids obey the commutative monoid laws, the cocommutative comonoid laws and the Frobenius laws. They play a fundamental role in 2d topological quantum field theory. Special Frobenius monoids are also well-known. But the ‘extra’ law, which says that a little piece of wire not connected to anything can be thrown away with no effect, is less well studied. Jason Erbele and I gave it this name in our work on control theory:

• John Baez and Jason Erbele, Categories in control. (Blog article here.)

For more

David Ellerman has spent a lot of time studying what would happen to mathematics if we turned around a lot of arrows in a certain systematic way. In particular, just as the concept of relation would be replaced by the concept of corelation, the concept of subset would be replaced by the concept of partition. You can see how it fits together: just as a relation from X to Y is a subset of X \times Y, a corelation from X to Y is a partition of X + Y.
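To get a feel for how subsets and partitions compare in number, here is a tiny Python count (my own sketch) for a 3-element set:

from itertools import combinations

def partitions(xs):
    """All partitions of the list xs, as lists of blocks."""
    if not xs:
        yield []
        return
    first, rest = xs[0], xs[1:]
    for p in partitions(rest):
        yield [[first]] + p                     # first in its own block
        for i in range(len(p)):                 # or added to an existing block
            yield p[:i] + [[first] + p[i]] + p[i + 1:]

X = [1, 2, 3]
subsets = [set(c) for r in range(len(X) + 1) for c in combinations(X, r)]
print(len(subsets))              # 8 = 2^3 subsets, ordered by inclusion
print(len(list(partitions(X))))  # 5 = Bell(3) partitions, ordered by refinement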

There’s a lattice of subsets of a set:

In logic these subsets correspond to propositions, and the lattice operations are the logical operations ‘and’ and ‘or’. But there’s also a lattice of partitions of a set:

In Ellerman’s vision, this lattice of partitions gives a new kind of logic. You can read about it here:

• David Ellerman, Introduction to partition logic, Logic Journal of the Interest Group in Pure and Applied Logic 22 (2014), 94–125.

As mentioned, the main result in Brandon and Brendan’s paper is that \mathrm{FinCorel} is equivalent to the PROP for extraspecial commutative Frobenius monoids. After they proved this, they noticed that the result has also been stated in other language and proved in other ways by two other authors:

• Fabio Zanasi, Interacting Hopf Algebras—the Theory of Linear Systems, PhD thesis, École Normale Supérieure de Lyon, 2015.

• K. Došen and Z. Petrić, Syntax for split preorders, Annals of Pure and Applied Logic 164 (2013), 443–481.

Unsurprisingly, I prefer Brendan and Brandon’s approach to deriving the result. But it’s nice to see different perspectives!


by John Baez at February 02, 2016 02:00 AM

February 01, 2016

Jester - Resonaances

750 ways to leave your lover
A new paper last week straightens out the story of the diphoton background in ATLAS. Some confusion was created because theorists misinterpreted the procedures described in the ATLAS conference note, which could lead to a different estimate of the significance of the 750 GeV excess. However, once the correct phenomenological and statistical approach is adopted, the significance quoted by ATLAS can be reproduced, up to small differences due to incomplete information available in public documents. Anyway, now that this is all behind us, we can safely continue being excited at least until summer. Today I want to discuss different interpretations of the diphoton bump observed by ATLAS. I will take a purely phenomenological point of view, leaving for next time the question of the bigger picture that the resonance may fit into.

Phenomenologically, the most straightforward interpretation is the so-called everyone's model: a 750 GeV singlet scalar particle produced in gluon fusion and decaying to photons via loops of new vector-like quarks. This simple construction perfectly explains all publicly available data, and can be easily embedded in more sophisticated models. Nevertheless, many more possibilities were pointed out in the 750 papers so far, and here I review a few that I find most interesting.

Spin Zero or More?  
For a particle decaying to two photons, there are not that many possibilities: the resonance has to be a boson and, according to the Landau-Yang theorem, it cannot have spin 1. This leaves on the table spin 0, 2, or higher. Spin-2 is an interesting hypothesis, as this kind of excitation is predicted in popular models like the Randall-Sundrum one. Higher-than-two spins are disfavored theoretically. When more data is collected, the spin of the 750 GeV resonance can be tested by looking at the angular distribution of the photons. The rumor is that the data so far somewhat favor spin-2 over spin-0, although the statistics is certainly insufficient for any serious conclusions. Concerning the parity, it is practically impossible to determine it by studying the diphoton final state, and both the scalar and the pseudoscalar option are equally viable at present. Discrimination may be possible in the future, but only if multi-body decay modes of the resonance are discovered. If the true final state is more complicated than two photons (see below), then the 750 GeV resonance may have any spin, including spin-1 and spin-1/2.

Narrow or Wide? 
The total width is the inverse of the particle's lifetime (in our funny units). From the experimental point of view, a width larger than the detector's energy resolution will show up as a smearing of the resonance due to the uncertainty principle. Currently, the ATLAS run-2 data prefer a width about 10 times larger than the experimental resolution (which is about 5 GeV in this energy ballpark), although the preference is not very strong in the statistical sense. On the other hand, from the theoretical point of view, it is much easier to construct models where the 750 GeV resonance is a narrow particle. Therefore, confirmation of the large width would have profound consequences, as it would significantly narrow down the scope of viable models. The most exciting interpretation would then be that the resonance is a portal to a dark sector containing new light particles very weakly coupled to ordinary matter.
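In ordinary units the statement reads τ = ħ/Γ; here is a quick check (my own arithmetic) of what such a width would mean for the lifetime:

hbar = 6.582e-25   # GeV * s
width = 45.0       # GeV: roughly 10 times the ~5 GeV experimental resolution
print(f"lifetime ~ {hbar / width:.1e} s")   # ~ 1.5e-26 s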

How many resonances?  
One resonance is enough, but a family of resonances tightly packed around 750 GeV may also explain the data. As a bonus, this could explain the seemingly large width without opening new dangerous decay channels. It is quite natural for particles to come in multiplets with similar masses: our pion is an example, where the small mass splitting between π± and π0 arises due to electromagnetic quantum corrections. For Higgs-like multiplets the small splitting may naturally arise after electroweak symmetry breaking, and the familiar 2-Higgs doublet model offers a simple realization. If the mass splitting of the multiplet is larger than the experimental resolution, this possibility can be tested by precisely measuring the profile of the resonance and searching for a departure from the Breit-Wigner shape. On the other side of the spectrum is the idea that there is no resonance at all at 750 GeV, but rather at another mass, and the bump at 750 GeV appears due to some kinematical accident.
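As a toy illustration (a sketch of mine in Python, with all masses and widths invented) of how a tightly packed pair of narrow states differs from one genuinely wide state:

import numpy as np

def breit_wigner(E, M, G):
    """Relativistic Breit-Wigner line shape (unnormalized)."""
    return 1.0 / ((E**2 - M**2) ** 2 + M**2 * G**2)

E = np.linspace(700.0, 800.0, 2001)   # GeV
pair = breit_wigner(E, 740.0, 5.0) + breit_wigner(E, 760.0, 5.0)
single = breit_wigner(E, 750.0, 45.0)

for name, shape in (("two narrow", pair), ("one wide", single)):
    s = shape / shape.max()
    above = E[s > 0.5]
    print(f"{name}: above half-maximum from {above[0]:.0f} to {above[-1]:.0f} GeV")

The dip between the two narrow peaks falls below half-maximum, which is exactly the kind of departure from a single Breit-Wigner shape that a precise scan of the profile could reveal.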
   
Who made it? 
The most plausible production process is definitely gluon-gluon fusion. Production in collisions of light quarks and antiquarks is also theoretically sound; however, it leads to a more acute tension between run-2 and run-1 data. Indeed, even for gluon fusion, the production cross section of a 750 GeV resonance in 13 TeV proton collisions is only 5 times larger than at 8 TeV. Given the larger amount of data collected in run-1, we would expect a similar excess there, contrary to observations. For a resonance produced from u-ubar or d-dbar the analogous ratio is only 2.5 (see the table), leading to much more tension. The ratio climbs back to 5 if the initial state contains the heavier quarks: strange, charm, or bottom (which can also be found sometimes inside a proton); however, I haven't yet seen a neat model that makes use of that. Another possibility is to produce the resonance via photon-photon collisions. This way one could cook up a truly minimal and very predictive model where the resonance couples, of all the Standard Model particles, only to photons. However, in this case the ratio between the 13 and 8 TeV cross sections is very unfavorable, merely a factor of 2, and the run-1 vs run-2 tension comes back with more force. More options open up when associated production (e.g. with t-tbar, or in vector boson fusion) is considered. The problem with these ideas is that, according to what was revealed during the talk last December, there aren't any additional energetic particles in the diphoton events. Similar problems face models where the 750 GeV resonance appears as a decay product of a heavier resonance, although in this case some clever engineering or fine-tuning may help to hide the additional particles from experimentalists' eyes.

Two-body or more?
While a simple two-body decay of the resonance into two photons is a perfectly plausible explanation of all existing data, a number of interesting alternatives have been suggested. For example, the decay could be 3-body, with another soft visible or invisible particle accompanying the two photons. If the masses of all particles involved are chosen appropriately, the diphoton invariant mass spectrum remains sharply peaked. At the same time, a broadening of the diphoton energy due to the 3-body kinematics may explain why the resonance appears wide in ATLAS. Another possibility is a cascade decay into 4 photons. If the intermediate particles are very light, then the pairs of photons from their decays are very collimated, and each pair may look like a single photon in the detector.
   
♬ The problem is all inside your head ♬ and the possibilities are endless. The situation is completely different from the one during the discovery of the Higgs boson, where one strongly favored hypothesis was tested against more exotic ideas. Of course, the first and foremost question is whether the excess is really new physics, or just a nasty statistical fluctuation. But if that is confirmed, the next crucial task for experimentalists will be to establish the nature of the resonance and get model builders on the right track. ♬ The answer is easy if you take it logically ♬

All ideas discussed above appeared in recent articles by various authors addressing the 750 GeV excess. If I were to include all references the post would be just one giant hyperlink, so you need to browse the literature yourself to find the original references.

by Jester (noreply@blogger.com) at February 01, 2016 08:14 AM

January 31, 2016

Lubos Motl - string vacua and pheno

Transparency, public arguments: a wrong recipe for arXiv rejections
OneDrive: off-topic: tomorrow, Microsoft will reduce the free 15 GB space by 10 GB and abolish the free 15 GB camera roll space. Old users may click here and after two more clicks, they will avoid this reduction if they act on Sunday!
Crackpot blog Backreaction and its flavor appendix called Nature believe that it was wrong for the arXiv.org website of scientific preprints (100k papers a year, 1.1 million in total) to reject two submissions by students of quantum information who attempted to rebrand themselves as general relativists and argue that you can't ever fall into a black hole.

Thankfully, Ms Hossenfelder and others agree that the papers were wrong. But they still protest against the fact that the papers were rejected. Or to say the least, there should have been some "transparency" in the rejection – in other words, some details about the decision which should be followed by some arguments in the public.

I totally disagree with those comments.

The arXiv (the xxx.lanl.gov website) was established in the early 1990s as a tool for researchers to share their findings more quickly, before they got published in the paper journals that mattered at that time. Paul Ginsparg created the software and primarily fathered the hep-ph and hep-th (high energy physics phenomenology and theory) archives – he also invented the funny, friendly yet mocking philosophy-like nickname "phenomenologists" for the people who were not formal (mainly string) theorists but who actually cared what happens with the particles in the muddy world of germs and worms.

The hep-th and hep-ph archives were meant to serve rather particular communities of experts. They pretty much knew who belonged to those sets and who didn't. The set got expanded when a member trained a new student. Much like the whole web (a server at CERN serving Tim Berners-Lee and a few pals), arXiv.org got more "global" and potentially accessible to the whole of mankind.

This evolution has posed some new challenges. The website had to get shielded from the thousands of potential worthless submissions by the laymen. There existed various ways to deal with the challenge but an endorsement system was introduced for hep-th, hep-ph, and other experts' archives. It is much easier to send papers to "less professional" archives within arXiv.org but it's still harder than to send them to the crackpot-dominated viXra.org of Philip Gibbs.

The submissions are still filtered by moderators who are volunteers. One of them, Daniel Gottesman of the Perimeter Institute, has made an important response to those who try to criticize the arXiv moderators when they manage to submit their paper previously rejected by the arXiv to a classic journal:
“If a paper is rejected by arXiv and accepted by a journal, that does not mean that arXiv is the one that made a mistake.”
Exactly. The arXiv's filtering process isn't terribly cruel – authors above a certain quality level can be pretty sure that their paper gets accepted to the arXiv if they obey some sensible conditions so it's not like the "bloody struggle for some place under the Sun" in some printed journals considered prestigious.

But the arXiv's filters are still nontrivial and independent and it may happen that the arXiv-rejected paper gets to a printed journal – which often means that the printed journal has poor quality standards. There is no "right" and there cannot be any "right" to have any paper accepted to the arXiv. There is no "guarantee" that the arXiv is always more inclusive than all journals in the world. The arXiv's filtering policies are different which doesn't mean "softer and sloppier than everyone else's".

In this case, the rejected papers were written by students of Dr Nicolas Gisin – a senior quantum information expert in Geneva. But these students didn't write about something they're expected to be good at because they're Gisin's students.

Instead, they wrote about black holes and it was bullšit. You can't ever fall into a black hole, a layman often thinks before he starts to understand general relativity at a bit deeper level. They made the typical beginners' mistakes. Then they realized those were mistakes and made some smaller but still embarrassing mistakes that allowed them to say that "you can't ever fall into a Hawking-evaporating black hole", which is still demonstrably nonsense.

My understanding is that these preprints about black holes should not be allowed in the quantum information archive where they would be off-topic; and these students should not have the "automatic" right to post to high-energy physics or general relativity archives because they're not experts and they're not in a close enough contact with an expert. So I think it's a matter of common sense that papers from such authors about this topic are likely to be rejected – and if everyone seems to agree that the papers are wrong, what are we really talking about?

The reason why some people talk about this self-evidently justified rejection is that there are some people who would love to flood the arXiv with garbage and dramatically reduce its quality requirements. These people want it simply because they can't achieve the minimum quality threshold that is required in hep-th, hep-ph, and elsewhere – but they want to be considered as experts of the same quality, anyway. So they fight against any moderation. If there is any moderation at all, they scream, they should at least get some complete justification of why their submission was rejected. It's clear what would be done with such an explanation. The rejected authors would show it to friends, post it on blogs, and look for some political support that would push the moderators in a certain direction, and these moderators could ultimately give up and accept the garbage, anyway.

The louder and more well-connected you would be, the more likely it would be for the garbage you wrote to be accepted to the arXiv at the end.

In fact, this Dr Nicolas Gisin already shows us where this "transparency" would lead. The actually relevant comment that should be said in this context is that Dr Gisin has partially failed as an adviser. He failed to stop his students from embarrassing themselves by (nearly) publishing a preprint about a part of physics that they clearly don't understand at the expert level. It's really this Dr Gisin, and not the arXiv moderators, who should have been the greatest obstacle his students faced while submitting wrong papers on general relativity.

Instead, he became a defender of the "students' right to submit these wrong papers". Why should this right exist? Once you try to demand such non-existent "rights" and scream things that make it clear that you don't give a damn whether the papers have elementary errors or not, you are a problem for the arXiv. You are a potentially unstoppable source of junk that may get multiplied and that the experts would have to go through every day. It doesn't matter that you have published lots of good preprints in another part of the arXiv. You just don't have the credentials to flood every sub-archive at arXiv.org.

We see that Dr Gisin tried to inflate his ego by co-organizing an article in Nature that tries to pretend that it's a scandal that two young people who have the honor to be students of Dr Gisin himself were treated in this disrespectful way by the archives dedicated to general relativity or particle physics. With this screaming in public, lots of people could join Dr Gisin and send the message to the arXiv moderators: How dare you? Those were students of our great Dr Gisin. You must treat them as prophets.

Sorry but they're not prophets. They were just students who tried to send wrong papers to professionals' archives about disciplines at which they are clearly not too good and unsurprisingly, they have failed. Even if Dr Gisin had sent the papers about the "impossibility to fall to a black hole", these papers should have been rejected.

The rejection may depend on some personal opinions or biases of a particular moderator – but there's nothing wrong about it. At the end, science has to be evaluated by some individual people. Gisin's students' papers could have been rejected for numerous simple reasons. If you demanded the moderators to publish some detailed explanations, it wouldn't help anybody. Any suggestion that some "arguments" between the rejected authors and moderators should follow means that
  • someone believes that there is a "right" for everyone to submit preprints anywhere to arXiv.org, but there's none
  • the moderators must be ready to sacrifice any amount of time and energy, but they don't have to
  • the interactions between the moderators and the would-be authors are discussions between two equal peers.
But the last point is simply not true, either. The rejected authors are primarily meant to be – and it's almost always the case – people who just don't know the basics or don't require their own papers to pass even the most modest tests of quality. One may say that they're crackpots or marginal crackpots. You just don't want the moderators to spend much time communicating with these people – because saving the time of actual researchers is the main reason for the rejection in the first place. So if you forced the moderators to spend an hour with every rejected crackpot paper, you could very well "open the gates" and force every researcher to waste a few seconds looking over the abstract of the bullšit paper instead. If the gates were opened in this way, the number of junk papers would obviously start to grow.

The main problem with this "transparency" is that the meritocratic decision – one that ultimately must be made by someone who knows something about the subject, or a group of such people – would be replaced by a fight in the public arena.

Let me give you a simple example. It's just an example; there could be many other examples that are much less connected with the content of this weblog in the past but whose issues are very analogous, anyway. I believe – or hope – that loop quantum gravity papers aren't allowed at hep-th (just at gr-qc) because these people are acknowledged to be crackpots at the quality threshold expected in high energy physics. Every expert knows that even if there were something OK about loop quantum gravity (and there's nothing), there's no way how it could tell us something meaningful about particle physics.

Now, whenever a moderator would reject a loop quantum gravity paper at hep-th, the "transparency" regime would force him to explain the steps. In one way or another, more explicitly or less explicitly, he would have to reveal that he considers all the loop quantum gravity people to be cranks. Pretty much everyone in high-energy physics does. But almost no one says those things on a regular basis because people are "nice" and they want to avoid mud. OK, so the loop quantum gravity author would get this reply. What would he do with it? Would he learn a lesson? No, loop quantum gravity folks can never learn any lesson – that's a big part of the reason why they're crackpots.

Instead, this rejected author would send the explanation by the arXiv moderator to his friends, for example clueless inkspillers in Nature (e.g. Zeeya Merali who wrote this Nature rant about the "high-profile physicist" whose students were "outrageously" rejected), who would try to turn the explanation by the arXiv moderator into a scandal. Could anything good come out of it? Not at all. At most, the loop quantum gravity crackpot could assure himself that just like him or Sabine Hossenfelder, way over 99.9% of the public doesn't have the slightest idea about issues related to quantum gravity.

But the arXiv must still keep on working – it has become an important venue for the professionals in high energy physics. It's serving the relatively small community whose knowledge – and therefore also opinions – dramatically differ from the knowledge and opinions of the average member of the public. Clearly, if the hep-th arXiv were conquered by the community of the loop quantum gravity crackpots or the broader public that has been persuaded that loop quantum gravity is an OK science, the actual experts in quantum gravity would have to start a new website because hep-th would become unusable very soon, after it would be flooded by many more junk submissions. But hep-th is supposed to be their archive. That's how and why it was founded. The definition of "they" isn't quite sharp and clear but it's not completely ill-defined, either.

If a paper is rejected, it means that there is a significant disagreement between a moderator and the author of the preprint. The author must think that the preprint is good enough or great (that's why the paper was submitted) while the moderator doesn't share this opinion. If the author is smart or has really found something unusual, he may be right and the moderator may be wrong. The probability of that is clearly nonzero. It's just arguably small. But you simply can't improve the quality of the rejection process by turning the process into a potentially neverending public argument. It's essential that the expertise needed to evaluate submissions to the professional archives is not "omnipresent", which is why the broader "publication" of the details of the rejection transfers the influence and interest to a wrongly, too inclusively defined subgroup of mankind.

Those are the reasons why I think that the calls for transparency, however fashionable these calls have become, are misplaced and potentially threatening for the remainder of meritocracy in science.

by Luboš Motl (noreply@blogger.com) at January 31, 2016 08:17 AM

January 30, 2016

John Baez - Azimuth

Among the Bone Eaters

Anthropologists sometimes talk about the virtues and dangers of ‘going native’: doing the same things as the people they’re studying, adopting their habits and lifestyles—and perhaps even their beliefs and values. The same applies to field biologists: you sometimes see it happen to people who study gorillas or chimpanzees.

It’s more impressive to see someone go native with a pack of hyenas:

• Marcus Baynes-Rock, Among the Bone Eaters: Encounters with Hyenas in Harar, Penn State University Press, 2015.

I’ve always been scared of hyenas, perhaps because they look ill-favored and ‘mean’ to me, or perhaps because their jaws have incredible bone-crushing force:

This is a spotted hyena, the species of hyena that Marcus Baynes-Rock befriended in the Ethiopian city of Harar. Their bite force has been estimated at 220 pounds!

(As a scientist I should say 985 newtons, but I have trouble imagining what it’s like to have teeth pressing into my flesh with a force of 985 newtons. If you don’t have a feeling for ‘pounds’, just imagine a 100-kilogram man standing on a hyena tooth that is pressing into your leg.)
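The conversion is easy to check; a couple of lines of Python (my own arithmetic):

LBF_TO_N = 4.448          # newtons per pound-force
print(220 * LBF_TO_N)     # ~ 978.6 N, close to the 985 N quoted
print(100 * 9.81)         # ~ 981 N: the weight of a 100-kilogram man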

So, you don’t want to annoy a hyena, or look too edible. However, the society of hyenas is founded on friendship! It’s the bonds of friendship that will make one hyena rush in to save another from an attacking lion. So, if you can figure out how to make hyenas befriend you, you’ve got some heavy-duty pals who will watch your back.

In Harar, people have been associating with spotted hyenas for a long time. At first they served as ‘trash collectors’, but later the association deepened. According to Wikipedia:

Written records indicate that spotted hyenas have been present in the walled Ethiopian city of Harar for at least 500 years, where they sanitise the city by feeding on its organic refuse.

The practice of regularly feeding them did not begin until the 1960s. The first to put it into practice was a farmer who began to feed hyenas in order to stop them attacking his livestock, with his descendants having continued the practice. Some of the hyena men give each hyena a name they respond to, and call to them using a “hyena dialect”, a mixture of English and Oromo. The hyena men feed the hyenas by mouth, using pieces of raw meat provided by spectators. Tourists usually organize to watch the spectacle through a guide for a negotiable rate. As of 2002, the practice is considered to be on the decline, with only two practicing hyena men left in Harar.


Hyena man — picture by Gusjer

According to local folklore, the feeding of hyenas in Harar originated during a 19th-century famine, during which the starving hyenas began to attack livestock and humans. In one version of the story, a pure-hearted man dreamed of how the Hararis could placate the hyenas by feeding them porridge, and successfully put it into practice, while another credits the revelation to the town’s Muslim saints convening on a mountaintop. The anniversary of this pact is celebrated every year on the Day of Ashura, when the hyenas are provided with porridge prepared with pure butter. It is believed that during this occasion, the hyenas’ clan leaders taste the porridge before the others. Should the porridge not be to the lead hyenas’ liking, the other hyenas will not eat it, and those in charge of feeding them make the requested improvements. The manner in which the hyenas eat the porridge on this occasion is believed to have oracular significance; if the hyena eats more than half the porridge, then it is seen as portending a prosperous new year. Should the hyena refuse to eat the porridge or eat all of it, then the people will gather in shrines to pray, in order to avert famine or pestilence.

Marcus Baynes-Rock went to Harar to learn about this. He wound up becoming friends with a pack of hyenas:

He would play with them and run with them through the city streets at night. In the end he ‘went native’: he would even be startled, like the hyenas, when they came across a human being!

To get a feeling for this, I think you have to either read his book or listen to this:

In a city that welcomes hyenas, an anthropologist makes friends, Here and Now, National Public Radio, 18 January 2016.

Nearer the beginning of this quest, he wrote this:

The Old Town of Harar in eastern Ethiopia is enclosed by a wall built 500 years ago to protect the town’s inhabitants from hostile neighbours after a religious conflict that destabilised the region. Historically, the gates would be opened every morning to admit outsiders into the town to buy and sell goods and perhaps worship at one of the dozens of mosques in the Muslim city. Only Muslims were allowed to enter. And each night, non-Hararis would be evicted from the town and the gates locked. So it is somewhat surprising that this endogamous, culturally exclusive society incorporated holes into its defensive wall, through which spotted hyenas from the surrounding hills could access the town at night.

Spotted hyenas could be considered the most hated mammal in Africa. Decried as ugly and awkward, associated with witches and sorcerers and seen as contaminating, spotted hyenas are a public relations challenge of the highest order. Yet in Harar, hyenas are not only allowed into the town to clean the streets of food scraps, they are deeply embedded in the traditions and beliefs of the townspeople. Sufism predominates in Harar and at last count there were 121 shrines in and near the town dedicated to the town’s saints. These saints are said to meet on Mt Hakim every Thursday to discuss any pressing issues facing the town and it is the hyenas who pass the information from the saints on to the townspeople via intermediaries who can understand hyena language. Etymologically, the Harari word for hyena, ‘waraba’ comes from ‘werabba’ which translates literally as ‘news man’. Hyenas are also believed to clear the streets of jinn, the unseen entities that are a constant presence for people in the town, and hyenas’ spirits are said to be like angels who fight with bad spirits to defend the souls of spiritually vulnerable people.

[…]

My current research in Harar is concerned with both sides of the relationship. First is the collection of stories, traditions, songs and proverbs of which there are many and trying to understand how the most hated mammal in Africa can be accommodated in an urban environment; to understand how a society can tolerate the presence of a potentially dangerous species. Second is to understand the hyenas themselves and their participation in the relationship. In other parts of Ethiopia, and even within walking distance of Harar, hyenas are dangerous animals and attacks on people are common. Yet, in the old town of Harar, attacks are unheard of and it is not unusual to see hyenas, in search of food scraps, wandering past perfectly edible people sleeping in the streets. This localised immunity from attack is reassuring for a researcher spending nights alone with the hyenas in Harar’s narrow streets and alleys.

But this sounds like it was written before he went native!

Social networks

By the way: people have even applied network theory to friendships among spotted hyenas:

• Amiyaal Ilany, Andrew S. Booms and Kay E. Holekamp, Topological effects of network structure on long-term social network dynamics in a wild mammal, Ecology Letters, 18 (2015), 687–695.

The paper is not open-access, but there’s an easy-to-read summary here:

Scientists puzzled by ‘social network’ of spotted hyenas, Sci.news.com, 18 May 2015.

The scientists collected more than 55,000 observations of social interactions of spotted hyenas (also known as laughing hyenas) over a 20-year period in Kenya, making this one of the largest studies to date of social network dynamics in any non-human species.

They found that cohesive clustering of the kind where an individual bonds with friends of friends, something scientists call ‘triadic closure,’ was the most consistent factor influencing the long-term dynamics of the social structure of these mammals.

Individual traits, such as sex and social rank, and environmental effects, such as the amount of rainfall and the abundance of prey, also matter, but the ability of individuals to form and maintain social bonds in triads was key.

“Cohesive clusters can facilitate efficient cooperation and hence maximize fitness, and so our study shows that hyenas exploit this advantage. Interestingly, clustering is something done in human societies, from hunter-gatherers to Facebook users,” said Dr Ilany, who is the lead author on the study published in the journal Ecology Letters.

Hyenas, which can live up to 22 years, typically live in large, stable groups known as clans, which can comprise more than 100 individuals.

According to the scientists, hyenas can discriminate maternal and paternal kin from unrelated hyenas and are selective in their social choices, tending to not form bonds with every hyena in the clan, rather preferring the friends of their friends.

They found that hyenas follow a complex set of rules when making social decisions. Males follow rigid rules in forming bonds, whereas females tend to change their preferences over time. For example, a female might care about social rank at one time, but then later choose based on rainfall amounts.

“In spotted hyenas, females are the dominant sex and so they can be very flexible in their social preferences. Females also remain in the same clan all their lives, so they may know the social environment better,” said study co-author Dr Kay Holekamp of Michigan State University.

“In contrast, males disperse to new clans after reaching puberty, and after they disperse they have virtually no social control because they are the lowest ranking individuals in the new clan, so we can speculate that perhaps this is why they are obliged to follow stricter social rules.”

If you like math, you might like this way of measuring ‘triadic closure’:

Triadic closure, Wikipedia.

For a program to measure triadic closure, click on the picture:
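If the picture doesn’t come through in your feed reader, here is a minimal sketch in Python (using networkx, on a made-up toy graph) of one standard measure of triadic closure, the clustering coefficient:

import networkx as nx

# a toy 'clan': edges are social bonds (entirely invented data)
G = nx.Graph([("A", "B"), ("B", "C"), ("A", "C"),   # one closed triad
              ("C", "D"), ("D", "E")])              # an open chain

# global clustering: 3 × (number of triangles) / (number of connected
# triples); values near 1 mean friends of friends tend to be friends
print(nx.transitivity(G))

# per-node version: the fraction of a node's neighbour pairs that are linked
print(nx.clustering(G))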


by John Baez at January 30, 2016 01:00 AM

January 29, 2016

Lubos Motl - string vacua and pheno

Munich: Kane vs Gross
Kane's attitude is the more scientific one

Yesterday, I mentioned Gordon Kane's paper based on his talk in Munich. Today, I noticed that lots of the talk videos are available on their website. The available speakers include Rovelli, Dawid, Pigliucci, Dardashti, Kragh, Achinstein, Schäffer, Smeenk, Kane, Quevedo, Wüthrich, Mukhanov, Ellis, Castellani, Lüst, Hossenfelder, Thebault, and Dvali, while others may be added soon.

Among those, I was most interested in Gordon Kane's talk, partly because I've read about some fiery exchange with David Gross. And yes, there was one. In the 45-minute talk, it starts around 30:00.

The main claim that ignited the battle was Gordy's assertion that M-theory on \(G_2\) holonomy manifolds with certain assumptions had predicted the mass \(m\approx 126\GeV\) of the Higgs boson before it was discovered; see e.g. Gordon Kane's blog post in December 2011. David Gross responded angrily. I've tried to understand the calculation "completely" with all the details and so far, I have failed. I feel that Gordon would have been able to compress the calculation or its logic if it were a clean one.

On the other hand, I partly do understand how the calculation works, what the assumptions are, and I just find it plausible that it's entirely accurate to say that with those assumptions including some notion of genericity, M-theory on 7D manifolds does produce the prediction of a \(126\GeV\) Higgs without fine-tuning. This statement surely isn't ludicrously wrong like many of the claims that I often criticize on this blog and some very careful researchers (importantly for me, Bobby Acharya) have pretty much joined Gordy in his research and in the summary of their conclusions, too.

Gross' and Kane's attitudes to the exchange were dramatically different. Gordon was focusing on the assumptions, calculations, and predictions; David was all about polls and the mob. "Gordon, you will agree that most of us wouldn't agree that M-theory has predicted the Higgs mass." And so on. Yes, no, what of it? If there's some important enough prediction and you have missed it or you don't understand it, it's your deficiency, David. If a majority of the community doesn't get it, it's the mistake of the members of the community. None of these votes can settle the question whether it's right for Gordon to say that M-theory has made the Higgs mass prediction, especially if most of these people know very well that they haven't even read any of these papers.

(By the way, Gordon phrases his predictions for the superpartner masses as predictions that have gone beyond the stage of "naive naturalness" which is how the people were estimating the masses decades ago. These days, they can work without this philosophy or strategy – as David Gross often categorizes naturalness.)

I think that David was acting like an inquisitor of a sort. The mob doesn't know or doesn't like that you have made that prediction, so you couldn't have done so. Well, that's a very lame criticism, David. With this approach of a bully, I sort of understand why you have sometimes endorsed the climate hysteria, too.

Also, I disagree with one particular claim by Gross, namely his assertion that the situation was in no way analogous to the prediction of Mercury's perihelion precession by general relativity. That was a prediction that would have killed general relativity if it had been falsified. Nothing like that is true in the case of Kane's M-theory predictions, Gross says.

Now, this claim is just rubbish, David. First of all, just like in the case of many of the string/M-theoretical predictions, the precession of Mercury's perihelion wasn't a full-fledged prediction but a postdiction. The precession anomaly had been known for a very long time before general relativity was completed. Einstein only used this postdiction as a confirmation that increased his psychological certainty that he was on the right track (his heart stopped for a second, we have heard) – and Gordon and his collaborators have arguably gone through totally analogous confirmations that strengthened their belief that their class of compactifications is right (and string theorists – like reportedly Witten – have surely gone through the very same feeling when they learned that string theory postdicted gravity, and perhaps other things). At least, I don't see a glimpse of a real difference between the two situations.
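(For reference, the standard general-relativistic advance of the perihelion per orbit is

\[ \Delta\phi = \frac{6\pi G M_\odot}{c^2 a(1-e^2)}, \]

which, for Mercury's orbit with \(a\approx 5.79\times 10^{10}\,{\rm m}\) and \(e\approx 0.206\), accumulates to the famous 43 arc seconds per century, the anomaly that had been documented by Le Verrier more than half a century before general relativity was completed.)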

Second, on top of this problem with David's argumentation, it's simply not true that any of these predictions or postdictions would have killed general relativity to the extent that they would have convinced Einstein to abandon it. One could be afraid that we need speculations about Einstein's thinking to know what would have happened if the confirmation hadn't taken place. Fortunately, we know what Einstein would have thought in that case – because someone did ask him:
When asked by his assistant what his reaction would have been if general relativity had not been confirmed by Eddington and Dyson in 1919, Einstein famously made the quip: "Then I would feel sorry for the dear Lord. The theory is correct anyway."
Famously enough, in the first edition of the Czech Elegant Universe by Brian Greene, your humble correspondent translated "dear Lord" with the Czech word "lord" indicating Eddington. The quote makes sense in this way as well, doesn't it? ;-) I wasn't too aware of God, the other guy whom Einstein may have had in mind.

But back to the main topic.

Einstein would have definitely not abandoned general relativity. If the bending of light weren't observed, he would look for other explanations why it wasn't – abandoning GR wouldn't be among his top choices simply because the theory is beautiful and theoretically robust but was still able to pass some tests of agreement with the well-known physics (the Newtonian limit etc.). Today, many string theorists are actually more eager to abandon string theory for possibly inconclusive reasons than Einstein has ever been willing to abandon relativity.

The only possible kind of a difference between the two theories' predictions (GR and M-theory on \(G_2\) manifolds) is the fact that we think that GR is sort of a unique theory while M-theory on \(G_2\) manifolds, even as the "class of generic compactifications on 7D manifolds that Gordon has in mind", is not quite as unique. Even within string theory, there exist other classes of vacua, and even the \(G_2\) compactifications could be studied with somewhat different assumptions about the effective field theory we should get (not MSSM but a different model, and so on).

However, this difference isn't a function of purely intrinsic characteristics of the two theories. GR seems unique today because no one who is sensible and important enough is pushing any real "alternatives" to GR anymore. But these alternatives used to be considered and even Einstein himself has written papers proposing alternatives or "not quite corrected" versions of GR, especially before 1915.

My point is that in a couple of years, perhaps already in 2020, the accumulated knowledge may be such that it will be absolutely right to say that the situation of GR in 1919 and the situation of M-theory on \(G_2\) manifolds in 2016 were absolutely analogous. By 2020, it may become clear for most of the string theorists that the M-theory compactifications are the only way to go, some predictions – e.g. Gordon's predictions about the SUSY spectrum and cross sections – will have been validated, and all the reasonable people will simply return to the view that M-theory is at least as natural and important as GR and it has made analogous – and in fact, much more striking – predictions as GR.

In other words, the extra hindsight that we have in the case of GR – the fact that GR is an older theory (and has therefore passed a longer sequence of tests) – is the only indisputable qualitative difference between the two situations. I think that every other statement about differences (except for possible statements pointing out some particular bugs in the derivations in Gordon et al. papers, but Gross has been doing nothing of the sort) are just delusional or demagogic.

Sadly, the amount of energy that average members of the scientific, physics, or string community dedicate to the honest reading of other people's papers has decreased in recent years or a decade or so. But whenever it's true, people should be aware of this limitation of theirs and they should never try to market their laziness as no-go theorems. The fact that you or most of the people in your room don't understand something doesn't mean that it's wrong. And the greater amount of technical developments you have ignored, the greater is the probability that the problem is on your side.

by Luboš Motl (noreply@blogger.com) at January 29, 2016 04:35 PM

January 28, 2016

Symmetrybreaking - Fermilab/SLAC

A mile-deep campus

Forget wide-open spaces—this campus is in a former gold mine.

Twice a week, when junior Arthur Turner heads to class at Black Hills State University in Spearfish, South Dakota, he takes an elevator to what is possibly the first nearly mile-deep educational campus in the world.

Groundwater sprinkles on his head as he travels 10 minutes and 4850 feet into a gold-mine-turned-research-facility. His goal is to help physicists there search for the origins of dark matter and the properties of neutrinos.

Sanford Underground Research Facility opened in 2007, five years after the closure of the Homestake Gold Mine. The mile of bedrock above acts as a natural shield, blocking most of the radiation that can interfere with sensitive physics experiments.

“On the surface, there are one or two cosmic rays going through your hand every second,” says Jaret Heise, the science director at Sanford Lab. But if you head underground, you reduce that flux by a factor of up to 10 million, to just one or two cosmic rays every month, he says.
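Those numbers are easy to sanity-check with rough arithmetic (my own back-of-the-envelope, in Python):

surface_rate = 1.5            # cosmic rays per second through a hand (rough)
reduction = 1.0e7             # suppression factor quoted for the 4850-ft level
seconds_per_month = 30 * 24 * 3600

per_month = surface_rate / reduction * seconds_per_month
print(f"~{per_month:.1f} per month")   # ~0.4: the same ballpark as quoted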

Not only do these experiments need to be safeguarded from space radiation, they also need to be safeguarded from their own low levels of radiation.

“Every screw, every piece of material, has to be screened,” says BHSU Underground Campus Lab Director Brianna Mount.

BHSU offered to help Sanford Lab with this in 2014 by funding a cleanroom to maintain the background-radiation-counting detectors used to check incoming materials. Once the materials have been cleared, they can help with current experiments or build the next generation of sensitive instruments.

Heise is particularly excited for the capability to build a new generation of dark matter and neutrino detectors.

“As physics experiments become more and more sensitive, the materials from which they're made need to be that much cleaner,” Heise says. “And that's where these counters come into play, helping us to get the best materials, to fabricate these next-generation experiments."

In return, Sanford Lab offered to host an underground campus for BHSU. Two cleanrooms—one dedicated to physics and the other dedicated to biology—allow students and faculty to conduct a variety of experiments.

The lab finished outfitting the space in September 2015. Even though it’s a mile underground, the counters require their own shielding because the local rock and any nearby ductwork or concrete will give off a small amount of radiation.

Once the lab was fully shielded, a group of students, including Turner, moved in a microscope and two low-background counters. After exiting the freight elevator, also known as the cage, the students walked into an old mine shaft. Then they hiked roughly half a mile to the cleanrooms, meandering through old tunnels with floors that sparkle with mica, a common grain in the bedrock.

“It's just been one of the coolest things that I've ever been a part of … to actually see what physics researchers do,” Turner says.

All three of the instruments the students installed were quickly put to use. Heise expects that they will triple that number this year with the addition of six more detectors from labs and universities across the US.

With the opening of the underground campus, physics students can now work on low-background counting experiments in the mine. And biology students go to sites in the far regions of the mine (the facility extends as far as 8000 feet underground but is mostly buried in water below about 5800 feet) and sample water in order to study the fungi and bacteria that live there with no light and low oxygen. These critters might exist in similar crevasses on Mars or Jupiter’s moons, or they might hold the key to developing new types of antibiotics here on Earth. The students can now bring samples back to the underground laboratory (instead of having to haul them to BHSU’s main campus while packed in dry ice).

Students with non-science majors are using the new campus to their advantage too. “We've also had education majors and even a photography major underground,” Mount says. But that’s not all. Mount welcomes research ideas from students across the US—from different universities down to the littlest scientists, as young as kindergarteners. 

Although lab benches are installed in the cleanroom, it can’t easily accommodate 30 students, a typical class size, and students under the age of 18 legally cannot enter the underground lab. But BHSU has found ways to engage the students who can’t make the trek.

A professor can perform an experiment underground while a Swivl—a small robot that supports an iPad—follows him or her around the lab, streaming video back to a classroom. And the cleanroom microscope is hooked up to the Internet, allowing students to view slides in real time, something they will eventually be able to do from several states away.

by Shannon Hall at January 28, 2016 06:52 PM

January 27, 2016

ZapperZ - Physics and Physicists

Will You Be Doing This Physics Demo For Your Students?
I like my students, and I love physics demos, but I don't think I'll be doing THIS physics demo anytime soon, thankyouverymuch!

It is a neat effect, and if someone else had performed this, the media would have proclaimed it as "defying the laws of physics".

Maybe I can do a demo on this on a smaller scale, perhaps using a Barbie doll. And if you ask me how in the world I have a Barbie doll in my possession, I'll send my GI Joe to capture you!

Zz.

by ZapperZ (noreply@blogger.com) at January 27, 2016 01:31 PM



Last updated:
February 14, 2016 07:51 AM
All times are UTC.

Suggest a blog:
planet@teilchen.at