Particle Physics Planet


June 25, 2016

Christian P. Robert - xi'an's og

Brexit as hypothesis testing

last run on Clifton and Durdham Downs, Bristol, Jan. 27, 2012
While I have no idea how the results of the Brexit referendum of last Thursday will be interpreted, I am definitely worried by the possibility (and consequences) of an exit and wonder why those results should inevitably lead to Britain leaving the EU. Indeed, referenda are not legally binding in the UK, and Parliament could choose to ignore the majority opinion expressed by this vote: for instance, because of the negative consequences of a withdrawal, or because the differential is too small to justify such a dramatic change. In this, it relates to hypothesis testing in that only an overwhelming score can lead to the rejection of a natural null hypothesis corresponding to the status quo, rather than the posterior probability merely being above ½, which is the decision associated with a 0-1 loss function. Of course, the analogy can be attacked from many sides, from a denial of democracy (a simple majority being determined by a single extra vote) to a lack of randomness in the outcome of the referendum (since everyone in the population is supposed to have voted). But I still see some value in requiring major societal changes to be backed by more than a simple majority. All this musing is presumably wishful thinking, since every side seems eager to move further (away from one another), but it would be great if it could take place.


Filed under: Kids, pictures, Statistics Tagged: Brexit, Britain, hypothesis testing, Margaret Thatcher, model posterior probabilities, referendum

by xi'an at June 25, 2016 10:16 PM

Emily Lakdawalla - The Planetary Society Blog

Quick multimedia roundup: China's new rocket blasts off on inaugural mission
China's new Long March 7 rocket successfully blasted off on its inaugural mission today at 8:00 p.m. Beijing time (12:00 UTC, 8:00 a.m. EDT).

June 25, 2016 08:35 PM

Tommaso Dorigo - Scientificblogging

Finding All-Hadronic Top - Again
The top quark is the heaviest known subatomic particle we may call "elementary", i.e. one we describe as a point-like object; it weighs about 40% more than the Higgs boson itself! Top was discovered in 1995 by the CDF and DZERO collaborations at the Fermilab Tevatron collider, which produced collisions between protons and antiprotons at an energy 7 times smaller than that of the proton-proton collisions now provided by the Large Hadron Collider at CERN.


by Tommaso Dorigo at June 25, 2016 12:58 PM

Peter Coles - In the Dark

Keep Calm and Carry On

I’m still depressed and worried by the referendum vote, but it isn’t the British way to yield to despair. Let’s take the advice of a previous generation…


It seems clear that we are set to remain in the EU for the foreseeable future, as the Leave campaigners are in no hurry to invoke Article 50 of the Lisbon Treaty. I think there’s a real chance that we’ll end up staying in the EU after all.

In any case we are still in the European Union now. So. Chin up everyone. Business as usual!


by telescoper at June 25, 2016 11:09 AM

John Baez - Azimuth

A Quirky Function

guest post by David Tanzer

Here is a mathematical riddle.  Consider the function below, which is undefined for negative values, sends zero to one, and sends positive values to zero.   Can you come up with a nice compact formula for this function, which uses only the basic arithmetic operations, such as addition, division and exponentiation?  You can’t use any special functions, including things like sign and step functions, which are by definition discontinuous.

In college, I ran around showing people the graph, asking them to guess the formula.  I even tried it out on some professors there, U. Penn.  My algebra prof, who was kind of intimidating, looked at it, got puzzled, and then got irritated. When I showed him the answer, he barked out: Is this exam over??!  Then I tried it out during office hours on E. Calabi, who was teaching undergraduate differential geometry.  With a twinkle in his eye, he said, why that’s zero to the x!

The graph of 0^x is not without controversy. It is reasonable that for positive x, we have that 0^x is zero. Then 0^{-x} = 1/0^x = 1/0, so the function is undefined for negative values. But what about 0^0? This question is bound to come up in the course of one’s general mathematical education, and has been the source of long, ruminative arguments.

There are three contenders for 0^0: undefined, 0, and 1. Let’s try to define it in a way that is most consistent with the general laws of exponents — in particular, that for all a, x and y, a^{x+y} = a^x a^y, and a^{-x} = 1/a^x. Let’s stick to these rules, even when a, x and y are all zero.

Then 0^0 equals its own square, because 0^0 = 0^{0+0} = 0^0 0^0. And it equals its reciprocal, because 0^0 = 0^{-0} = 1/0^0. By these criteria, 0^0 equals 1.

That is the justification for the above graph — and for the striking discontinuity that it contains.
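As an aside of mine (not part of the original riddle): most programming languages side with the same convention, returning 1 for zero to the zero while rejecting negative powers of zero. A quick check in Python:

    # Quick check of the 0^0 = 1 convention in Python.
    print(0 ** 0)        # 1: the convention argued for above
    print(0.0 ** 2.5)    # 0.0: positive powers of zero vanish
    try:
        print(0.0 ** -1.0)
    except ZeroDivisionError as err:
        print("undefined:", err)   # negative powers of zero are errors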

Here is an intuition for the discontinuity. Consider the family of exponential curves b^x, with b as the parameter. When b = 1, you get the constant function 1. When b is more than 1, you get an increasing exponential, and when it is between 0 and 1, you get a decreasing exponential. The intersection of all of these graphs is the “pivot” point x = 0, y = 1. That is the “dot” of discontinuity.

What happens to b^x, as b decreases to zero? To the right of the origin, the curve progressively flattens down to zero. To the left it rises up towards infinity more and more steeply. But it always crosses through the point x = 0, y = 1, which remains in the limiting curve. In heuristic terms, the value y = 1 is the discontinuous transit from infinitesimal values to infinite values.
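To see the pivot concretely, here is a small plotting sketch (my addition, not from the original post) that draws b^x for a few shrinking values of b; every curve threads the point (0, 1):

    # Plot the family b**x for decreasing b; each curve passes through (0, 1),
    # the "dot" that survives in the limiting function 0**x.
    import numpy as np
    import matplotlib.pyplot as plt

    x = np.linspace(-2, 2, 400)
    for b in [2.0, 1.0, 0.5, 0.1, 0.01]:
        plt.plot(x, b**x, label=f"b = {b}")
    plt.scatter([0], [1], color="black", zorder=5)  # the pivot point (0, 1)
    plt.ylim(0, 5)
    plt.legend()
    plt.show()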

There are reasons, however, why 0^0 could be treated as indeterminate, and left undefined. These were indicated by the good professor.

Dr. Calabi was a truly inspiring fellow.  He loved Italian paintings, and  had a kind of geometric laser vision.  In the classroom, he showed us the idea of torsion using his arms to fly around the room like an airplane.  They even have a manifold named after him, the Calabi-Yau manifold.

He went on to talk about the underpinnings of this quirky function.  First he drew attention to the function f(x,y) = xy, over the complex domain, and attempted to sketch its level sets.  He focused on the behavior of the function when x and y are close to zero.   Then he stated that every one of the level sets L(z) = \{(x,y)|x^y = z\} comes arbitrarily close to (0,0).

This means that xy has a wild singularity at the origin: every complex number z is the limit of xy along some path to zero.  Indeed, to reach z, just take a path in L(z) that approaches (0,0).

To see why the level sets all approach the origin, take logs, to get ln(xy) = y ln(x) = ln(z).  That gives y = ln(z) / ln(x), which is a parametric formula for L(z).  As x goes to zero, ln(x) goes to negative infinity, so y goes to zero.  These are paths (x, ln(z)/ln(x)), completely within L(z), which approach the origin.
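A numerical sanity check (my own, restricted to the principal branch): along the path y = ln(z)/ln(x), the value x^y stays pinned at z even as x shrinks toward zero, while y itself goes to zero.

    # Follow the level set L(z) toward the origin: y = ln(z)/ln(x) keeps x**y = z.
    import cmath

    z = 2 + 3j                                   # any target value will do
    for x in [0.5, 1e-2, 1e-6, 1e-12]:
        y = cmath.log(z) / cmath.log(x)          # parametric formula for L(z)
        value = cmath.exp(y * cmath.log(x))      # x**y on the principal branch
        print(x, y, value)                       # y -> 0 while x**y stays at z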

In making these statements, we need to keep in mind that x^y is multi-valued. That’s because x^y = e^{y \ln(x)}, and ln(x) is multi-valued. That is because ln(x) is the inverse of the complex exponential, which is many-to-one: adding any integer multiple of 2 \pi i to z leaves e^z unchanged. And that follows from the definition of the exponential, which sends a + bi to the complex number with magnitude e^a and phase b.

Footnote:  to visualize these operations, represent the complex numbers by the real plane.  Addition is given by vector addition.  Multiplication gives the vector with magnitude equal to the product of the magnitudes, and phase equal to the sum of the phases.   The positive real numbers have phase zero, and the positive imaginary numbers are at 90 degrees vertical, with phase \pi / 2.

For a specific (x,y), how many values does x^y have? Well, ln(x) has a countable number of values, all differing by integer multiples of 2 \pi i. This generally induces a countable number of values for x^y. But if y is rational, they collapse down to a finite set. When y = 1/n, for example, the values of y ln(x) are spaced apart by 2 \pi i / n, and when these get pumped back through the exponential function, we find only n distinct values for x^{1/n} — they are the nth roots of x.
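Here is a small illustration (mine, not the author’s) of that collapse: shifting ln(x) by multiples of 2 \pi i and then exponentiating only ever produces the n distinct nth roots.

    # The branches of x**(1/n) collapse to n values; try x = 2, n = 3.
    import cmath

    x, n = 2.0, 3
    values = set()
    for k in range(-6, 7):                           # a handful of branches of ln(x)
        log_branch = cmath.log(x) + 2j * cmath.pi * k
        root = cmath.exp(log_branch / n)
        values.add((round(root.real, 9), round(root.imag, 9)))
    print(len(values), values)                       # exactly 3 distinct cube roots of 2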

So, to speak of the limit of x^y along a path, and of the partition of \mathbb{C}^2 into level sets, we need to work within a branch of x^y. Each branch induces a different partition of \mathbb{C}^2. But for every one of these partitions, it holds true that all of the level sets approach the origin. That follows from the formula for the level set L(z), which is y = ln(z) / ln(x). As x goes to zero, every branch of ln(x) goes to negative infinity. (Exercise: why?) So y also goes to zero. The branch affects the shape of the paths to the origin, but not their existence.

Here is a qualitative description of how the level sets fit together:  they are like spokes around the origin, where each spoke is a curve in one complex dimension.  These curves are 1-D complex manifolds, which are equivalent to  two-dimensional surfaces in \mathbb{R}^4.  The partition comprises a two-parameter family of these surfaces, indexed by the complex value of xy.

What can be said about the geometry and topology of this “wheel of manifolds”?  We know they don’t intersect.  But are they “nicely” layered, or twisted and entangled?  As we zoom in on the origin, does the picture look smooth, or does it have a chaotic appearance, with infinite fine detail?  Suggestive of chaos is the fact that the gradient

\nabla x^y = (y x^{y-1}, \ln(x) x^y) = (y/x, \ln(x)) x^y

is also “wildly singular” at the origin.

These questions can be explored with plotting software.  Here, the artist would have the challenge of having only two dimensions to work with, when the “wheel” is really a structure in four-dimensional space.  So some interesting cross-sections would have to be chosen.

Exercises:

• Speak about the function b^x, where b is negative, and x is real.
• What is 0^\pi, and why?
• What is 0^i?

Moral: something that seems odd, or like a joke that might annoy your algebra prof, could be more significant than you think.  So tell these riddles to your professors, while they are still around.

In memoriam.


by David A. Tanzer at June 25, 2016 06:31 AM

June 24, 2016

Peter Coles - In the Dark

P.S. Another Exit

The news about yesterday’s vote to take the United Kingdom out of the European Union reminded me that I haven’t yet mentioned on this blog that I’ll shortly be making an exit of my own, although it is completely unconnected with, and far less important than, the EU referendum! Hopefully this will answer a comment on a poem I recently posted.

I will be stepping down as Head of the School of Mathematical and Physical Sciences (MPS) and leaving the University of Sussex at the end of July. I made this decision some time ago and it was announced publicly by the University of Sussex in May, but at that time I was busy marking examinations and doing other stuff and I never got around to mentioning it on here.

I do not propose to go into detail about the reasons for my resignation, which are a mixture of personal and professional. Suffice to say I have found the many burdens and frustrations of my current job just too onerous to manage and therefore concluded that it’s better for all concerned if I leave and make way for someone better suited to the position.

I will be taking a short career break for health reasons, and returning to the  School of Physics and Astronomy at Cardiff University, to continue my research in astrophysics and cosmology in connection with the new Data Innovation Research Institute.

My appointment in 2013 was for a 5-year term, so I am leaving after three and a half years. MPS is in a very good position, with record student numbers and research income. I would not have decided to leave if I thought my departure would in any way jeopardise the progress that has been made over the last few years or the plans already in place for the next few years.

I’d like to take this opportunity to thank everyone I’ve worked with at Sussex for being such great colleagues and wish them all the very best for the future.

 


by telescoper at June 24, 2016 05:54 PM

Emily Lakdawalla - The Planetary Society Blog

WISE Views in Infrared
Amateur image processor Judy Schmidt explains the process of creating gorgeous views of the cosmos from infrared data from the WISE telescope.

June 24, 2016 05:04 PM

Peter Coles - In the Dark

Britain votes to leave the EU

My once and future home….

Reblogged from euromovewales:

Today is a sad day for Britain and for Wales. The UK has voted to leave the EU.

Here in Wales we have overwhelmingly benefited from EU membership. Between 2007 and 2013 alone we received over £1.8 billion in structural funding from the EU. Our universities prosper through EU co-operation and funding; a brand new campus in Swansea has just been completed – thanks to EU funding.

Those who have fought for remain, from across the political spectrum, have fought with love and wisdom. They have put aside party politics to fight for a greater, a stronger, a better UK – a better Wales. Thank you to everyone who got out and campaigned, who leafleted, who talked to friends about the benefits of the EU. Thank you.

Today we look back on what we have lost. Tomorrow we get together to help our broken and bruised country get back on its feet. This is a time to come together.

European Movement Council of…



by telescoper at June 24, 2016 05:03 PM

Christian P. Robert - xi'an's og

London calling….

The Day After… Most sadly, England massively voted against remaining in the EU, while Scotland even more massively supported the Remain option.


Filed under: pictures Tagged: Brexit, Britain, England, European Union, London's burning, referendum, Scotland

by xi'an at June 24, 2016 12:18 PM

Andrew Jaffe - Leaves on the Line

The Sick Rose

Songs of innocence and of experience page 39 The Sick Rose Fitzwilliam copy

O Rose thou art sick.
The invisible worm,
That flies in the night
In the howling storm:

Has found out thy bed
Of crimson joy:
And his dark secret love
Does thy life destroy.

—William Blake, Songs of Experience

by Andrew at June 24, 2016 10:42 AM

Peter Coles - In the Dark

Dear Brexiteer. What we need you to do now.

I’m too desolated by the referendum result – and too busy – to comment for now, but here’s a post that encapsulates a lot of what I feel.

I continue to hope that we will remain in the EU. There may be another referendum next year. I think that’s always been Boris’ plan.

But in the meantime, Cameron has resigned and we’ll get a new right-wing government with new policies without a General Election having taken place. It’s a coup.

Those of you who voted Brexit because of the alleged democratic failings of the EU should ponder on that.

Reblogged from frpip:

So well done, first of all. You listened to the arguments, the same ones I listened to. You heard all the same information I did, you listened to the same debates that I did, but you voted to leave. And you won. I take that – it was a democratic process and sometimes in the democratic process you lose, as I have done.

The referendum has activated the political energies of people who haven’t been interested in politics for some time, so we are told, and many of them are like you, who voted to leave. So here’s the plea of the losing side to you now.

Firstly, don’t stop – don’t stop with your political passion and activism, because we need you now. We need you to be active, we need you to keep talking to the people who you trusted with this vote, and we need you to…



by telescoper at June 24, 2016 08:47 AM

Emily Lakdawalla - The Planetary Society Blog

An Astronomer Learns to Make His CASE
Science in America depends on federal funding, yet many young scientists don't understand how the U.S. government decides to spend its money on science, nor are they encouraged to use their new degrees to advise the process. This is changing with support from the American Association for the Advancement of Science.

June 24, 2016 06:32 AM

Clifford V. Johnson - Asymptotia

Historic Hysteria

So, *that* happened... (Click for larger view.)


-cvj


by Clifford at June 24, 2016 05:35 AM

Clifford V. Johnson - Asymptotia

Concern…

Anyone else finding this terrifying? A snapshot (click for larger view) from the Guardian's live results tracker* as of 19:45 PST - see here.

-cvj

*BTW, I've been using their trackers a lot during the presidential primaries; they're very good.


by Clifford at June 24, 2016 02:54 AM

June 23, 2016

Peter Coles - In the Dark

Referendum Day

Today has been a very eventful day. First I was up at 6am to get to my local polling station in order to cast my vote in the EU Referendum  as soon as the doors opened. I then had to get up to campus and spent all day from 9am until now interviewing for a Lectureship in Probability and Statistics. In between there have been thunderstorms, torrential rain, and flooding. Also, after checking the bookies’ odds on the Referendum result, I decided to place an insurance bet on Leave of £100 at 10/1 against. Given the closeness of the opinion polls I think those odds are far too long.

I’m far too tired to stay up and follow the results coming in, but tomorrow morning I’ll wake up to find that the UK will remain in the European Union or that I’m £1000 richer.

Anyway, for those of you out there who still haven’t voted – perhaps because of the inclement weather – there’s still three hours to get to it!



by telescoper at June 23, 2016 06:03 PM

astrobites - astro-ph reader's digest

Dinnertime for an Active Galaxy

Title: OGLE16aaa – a Signature of a Hungry Super Massive Black Hole.

Authors: Łukasz Wyrzykowski, M. Zieliński, Z. Kostrzewa-Rutkowska, et al.

First Author’s Institution: Warsaw University Astronomical Observatory, Poland.

Paper Status: Submitted to MNRAS.

What happens when a star passes too close to a supermassive black hole? The black hole’s gravity is so strong that the star is ripped apart. It’s called a Tidal Disruption Event, or TDE, and it is among the most violent explosions in the universe. These events have been covered several times before on astrobites, but they are incredibly rare, and have only been known about since 2011. Apart from being dramatic events in their own right, they are one of the few tools we have for studying supermassive black holes. Today’s paper is about one of these TDEs, OGLE16aaa, which was first detected in January of this year, and is still ongoing today!


Artist’s impression of a tidal disruption event. Source: http://www.astro.dur.ac.uk/~pgandhi/mwf/

There is a clear pattern in the host galaxies in which TDEs occur. Most TDE-hosting galaxies have two things in common. Firstly, they have strong Balmer absorption lines; that is, when we split their light into a spectrum, there are deficiencies at the characteristic frequencies of hydrogen's Balmer series. This tells us that these galaxies are dominated by stars in whose spectra we see these strong Balmer lines: stars at the hot end of the range, such as A-class stars. These are bright, bluish stars; in our own Galaxy, Sirius (the brightest star in the night sky) would be an example. As these stars are short-lived (on the order of ten to a hundred million years, compared to the ten billion that our sun should last), we can tell that these galaxies must have undergone an intense period of star formation within the last few tens of millions of years (comparatively recent for stars!).

Secondly, most TDEs that we have seen have been in galaxies that do not have Active Galactic Nuclei, or AGN. AGN are particularly bright and variable regions at the centre of some galaxies, and are generally thought to be produced by matter falling onto the galaxy’s central black hole, becoming super-heated as it falls.

Galaxies without AGN, but with strong Balmer lines, are believed to be the result of recent mergers between two galaxies; the merger triggers a bout of star formation while disrupting the AGN. These galaxies are pretty rare — only 2.3% of galaxies meet both criteria — but 75% of all detected TDEs have been in these AGN-less merger galaxies. It is thought that the merger might cause disruptions to stellar orbits that send some stars spinning off towards the black hole.


Left: The blue star shows the position of OGLE16aaa within its host galaxy. Right: Measured positions of the TDE (blue stars) and mean measured position (green cross) relative to the centre of the galaxy (red circle). The close proximity of the event to the centre of its galaxy is exactly what we expect from a TDE around the central black hole. Source: Figure 1 in today’s paper.

This is where OGLE16aaa is unusual, because its host galaxy does have an AGN, and doesn’t have strong Balmer lines. It is the third TDE to be found in an AGN-hosting galaxy, showing that while TDEs are rarer in these galaxies, it is not impossible for them to form here. This is not too big a surprise — there is nothing to say that such TDEs should be impossible — but it has implications for TDE surveys. Because the brightness of AGNs flickers anyway, it could be hard to distinguish the flare caused by a TDE from the other variability of the AGN. For this reason, surveys for TDEs have generally discounted candidates found near AGN as unreliable. However, as the evidence grows that TDEs in AGN-hosting galaxies do happen, it becomes clear that missing them will throw off our estimates of TDE occurrence rates, which could have significant consequences for future TDE work.

by Matthew Green at June 23, 2016 01:15 PM

Symmetrybreaking - Fermilab/SLAC

The Higgs-shaped elephant in the room

Higgs bosons should mass-produce bottom quarks. So why is it so hard to see it happening?

Higgs bosons are born in a blob of pure concentrated energy and live only one-septillionth of a second before decaying into a cascade of other particles. In 2012, these subatomic offspring were the key to the discovery of the Higgs boson.

So-called daughter particles stick around long enough to show up in the CMS and ATLAS detectors at the Large Hadron Collider. Scientists can follow their tracks and trace the family trees back to the Higgs boson they came from.

But the particles that led to the Higgs discovery were actually some of the boson’s less common progeny. After recording several million collisions, scientists identified a handful of Z bosons and photons with a Higgs-like origin. The Standard Model of particle physics predicts that Higgs bosons produce those particles 2.5 and 0.2 percent of the time. Physicists later identified Higgs bosons decaying into W bosons, which happens about 21 percent of the time.

According to the Standard Model, the most common decay of the Higgs boson should be a transformation into a pair of bottom quarks. This should happen about 60 percent of the time.

The strange thing is, scientists have yet to discover it happening (though they have seen evidence).

According to Harvard researcher John Huth, a member of the ATLAS experiment, seeing the Higgs turning into bottom quarks is priority No. 1 for Higgs boson research.

“It would behoove us to find the Higgs decaying to bottom quarks because this is the largest interaction,” Huth says, “and it darn well better be there.”

If the Higgs to bottom quarks decay were not there, scientists would be left completely dumbfounded.

“I would be shocked if this particle does not couple to bottom quarks,” says Jim Olsen, a Princeton researcher and Physics Coordinator for the CMS experiment. “The absence of this decay would have a very large and direct impact on the relative decay rates of the Higgs boson to all of the other known particles, and the recent ATLAS and CMS combined measurements are in excellent agreement with expectations.”

To be fair, the decay of a Higgs to two bottom quarks is difficult to spot.

When a dying Higgs boson produces twin Z or W bosons, they each decay into a pair of muons or electrons. These particles leave crystal clear signals in the detectors, making it easy for scientists to spot them and track their lineage. And because photons are essentially immortal beams of light, scientists can immediately spot them and record their trajectory and energy with electromagnetic detectors.

But when a Higgs births a pair of bottom quarks, they impulsively marry other quarks, generating huge unstable families which burgeon, break and reform. This chaotic cascade leaves a messy ancestry.

Scientists are developing special tools to disentangle the Higgs from this multi-generational subatomic soap opera. Unfortunately, there are no cheek swabs or Maury Povich to announce, “Higgs, you are the father!” Instead, scientists are working on algorithms that look for patterns in the energy these jets of particles deposit in the detectors.

“The decay of Higgs bosons to bottom quarks should have different kinematics from the more common processes and leave unique signatures in our detector,” Huth says. “But we need to deeply understand all the variables involved if we want to squeeze the small number of Higgs events from everything else.”

Physicist Usha Mallik and her ATLAS team of researchers at the University of Iowa have been mapping the complex bottom quark genealogies since shortly after the Higgs discovery in 2012.

“Bottom quarks produce jets of particles with all kinds and colors and flavors,” Mallik says. “There are fat jets, narrow jets, distinct jets and overlapping jets. Just to find the original bottom quarks, we need to look at all of the jet’s characteristics. This is a complex problem with a lot of people working on it.”

This year the LHC will produce five times more data than it did last year and will generate Higgs bosons 25 percent faster. Scientists expect that by August they will be able to identify this prominent decay of the Higgs and find out what it can tell them about the properties of this unique particle.

by Sarah Charley at June 23, 2016 01:00 PM

Emily Lakdawalla - The Planetary Society Blog

All about China's new rocket and spaceport, which may see action this Saturday
Sometime between Saturday and Wednesday, China plans to launch a brand new rocket from a brand new launch site, and conduct a small-scale test of its next-generation crew capsule.

June 23, 2016 11:33 AM

Lubos Motl - string vacua and pheno

Negative rumors haven't passed the TRF threshold
...yet?...

Several blogs and Twitter accounts have worked hard to distribute the opinion that the 2015 excess of diphoton events resembling a new \(750\GeV\) particle at the LHC wasn't repeated in the 2016 data. No details are given but it is implicitly assumed that this result was shared with the members of ATLAS at a meeting on June 16 at 1pm and those of CMS on June 20 at 5pm.

In the past 5 years, my sources have informed me about all similar news rather quickly, and all such "rumors about imminent announcements" you could have read here were always accurate. And I became confident whenever I had at least 2 sources that looked "probably more than 50% independent of one another".

Well, let me say that the number of such sources that are telling me about the disappearance of the cernette is zero as of today. It doesn't mean that those negative reports must be unsubstantiated or even that the particle exists – it is totally plausible that it doesn't exist – but there is a reason to think that the reports are unsubstantiated. The channels that I am seeing seem untrustworthy from my viewpoint.




One can have some doubts about the "very existence of the new results" in the LHC collaborations. The processing needed to get the answers could be fast (especially because the collisions in 2016 take place at the same energy as a year earlier so the old methods just work) – but it was often slow, too. Only in the last week or two has the amount of 2016 collisions exceeded that of 2015.




It seems reasonable to me that the experimenters were waiting for at least as many collisions as in the 2015 dataset, so the processing of the information didn't really start until recently. Freya Blekman of CMS (plus Caterina Doglioni of ATLAS) argues that it takes much more than two weeks to perform the difficult analyses and calibrate the data. She claims that the rumors are not spread by those who are involved in this process.

Andre David of CMS seriously or jokingly indicates that he likes how the false rumors have been successfully propagated. I actually find this "deliberate fog" a plausible scenario, too. ATLAS and CMS could have become more experienced and self-controlling when it comes to this rumor incontinence.



ATLAS and CMS could have launched a social experiment with false rumors according to the template pioneered by Dr Sheldon Cooper and Dr Amy Farrah-Fowler

So you know, before these reports, I estimated the probability of new physics near the invariant mass of \(750\GeV\) to be 50%. It may have decreased to something like 45% for me now. So far, the reasons for substantial changes haven't arrived. That may change within minutes after I post this blog post. Alternatively, it may refuse to change up to mid July. Blekman wants people to be even more patient: the results will be known at ICHEP 2016, between August 3rd and 10th. That conference should incorporate the data up to mid July.

Update Thursday: One of the two main traditional gossip channels of mine has confirmed the negative rumors so my subjective probability that the rumor comes from well-informed sources has increased to 75%.

by Luboš Motl (noreply@blogger.com) at June 23, 2016 04:47 AM

Clifford V. Johnson - Asymptotia

QED and so forth…

(Spoiler!! :) )

Talking about gauge invariance took a couple more pages than I planned...


-cvj


by Clifford at June 23, 2016 12:22 AM

June 22, 2016

Emily Lakdawalla - The Planetary Society Blog

Plans for China's farside Chang'e 4 lander science mission taking shape
The future Chang'e 4 lunar farside landing mission is rapidly taking shape. Now the mission's team is coming to a consensus on the landing location, as well as on the mission's instrument package.

June 22, 2016 05:09 PM

Jester - Resonaances

Game of Thrones: 750 GeV edition
The 750 GeV diphoton resonance has made a big impact on theoretical particle physics. The number of papers on the topic is already legendary, and they keep coming at the rate of order 10 per week. Given that the Backović model is falsified, there's no longer a theoretical upper limit.  Does this mean we are not dealing with the classical ambulance chasing scenario? The answer may be known in the next days.

So who's winning this race?  What kind of question is that, you may shout, of course it's Strumia! And you would be wrong, independently of the metric. The contest is much more fierce than one might expect: it takes 8 papers on the topic to win, and 7 papers to even get on the podium. Among the 3 authors with 7 papers, the final classification is decided by trial by combat, that is, by the citation count. The result is (drums):

Citations, tja... The social dynamics of our community encourage referencing all previous work on the topic, rather than just the relevant ones, which in this particular case triggered a period of inflation. One day soon citation numbers will mean as much as authorship in experimental particle physics. But for now the size of the h-factor is still an important measure of virility for theorists. If the citation count rather than the number of papers is the main criterion, the iron throne is taken by a Targaryen contender (trumpets):

This explains why the resonance is usually denoted by the letter S.

Congratulations to all the winners. For all the rest, I wish you more luck and persistence in the next edition, provided it takes place.

My scripts are not perfect (in previous versions I missed crucial contenders, as pointed out in the comments), so let me know in case I missed your papers or miscalculated citations. 

by Jester (noreply@blogger.com) at June 22, 2016 05:00 PM

astrobites - astro-ph reader's digest

Where the Wild (Planet)Things Are

Title: Search for giant planets in M67 III: excess of hot Jupiters in dense open clusters

Authors: A. Brucalassi, L. Pasquini, R. Saglia, M.T. Ruiz, P. Bonifacio, I. Leão, B.L. Canto Martins, J.R. de Medeiros, L. R. Bedin, K. Biazzo, C. Melo, C. Lovis, and S. Randich

First Author’s Institution: Max-Planck-Institut für extraterrestrische Physik, Garching bei München, Germany

Status: Accepted for publication in A&A Journal Letters

 

If you wanted to discover a new giant exoplanet, where would you look? New research shows that star clusters are a good place to start, at least if you want to look for giant exoplanets close to their host star.

Hot Jupiters are a breed of exoplanets that have masses about equal to or larger than Jupiter's and orbit their star in 10 days or less (for comparison, Mercury takes 88 days to go around the Sun).  When they were first discovered, they posed a problem to planet formation models, as it was thought gas giants could only form far from their host star, where it was cool enough for ices to form, which allows for larger planets to be made.  Since then, studies have shown these planets could form far out and migrate inwards over their lifetime.  This can happen through interactions with the disk in which the planet forms (known as Type II migration), or through gravitational scattering with other planets or nearby stars.

Brucalassi and her team decided to investigate an open cluster in the Milky Way (Messier 67) to look for hot Jupiters.  Over several years they used three different telescopes (the ESO 3.6m telescope, the Hobby-Eberly Telescope and the TNG on La Palma in the Canary Islands) to take high-precision spectra of 88 stars, 12 of which are binary stars.  These spectra could then be analyzed for small blue- and redshifts which indicate the star is moving slightly.  In this case, that movement is caused by the presence of another body, the exoplanet.  This method is known as the radial velocity method and is the method that was used in the first exoplanet discoveries.  To make sure that each star’s own activity wasn’t affecting its spectra, the group measured the Hα line, which shows how active the star’s chromosphere is.  Figure 1 shows an example of the radial velocity measurements.


Figure 1: Radial velocity measurements for YBP401. The coloured dots represent the different telescopes the measurements were made at. The measurements show an exoplanet with a period of just 4.08 days.
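As a rough aside (my own back-of-the-envelope sketch, not from the paper): the standard radial-velocity semi-amplitude formula shows why hot Jupiters are the easiest planets to find with this method. A Jupiter-mass planet on a 4-day orbit tugs its star by roughly 130 m/s, an order of magnitude more than the real Jupiter does to the Sun. The constants and helper function below are my own illustration.

    # Semi-amplitude K = (2*pi*G/P)**(1/3) * m_p*sin(i) / (M_star + m_p)**(2/3) / sqrt(1 - e**2)
    import math

    G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
    M_SUN = 1.989e30       # kg
    M_JUP = 1.898e27       # kg

    def rv_semi_amplitude(period_days, m_planet=M_JUP, m_star=M_SUN, e=0.0, sin_i=1.0):
        """Stellar reflex velocity amplitude in m/s."""
        P = period_days * 86400.0
        return ((2 * math.pi * G / P) ** (1 / 3) * m_planet * sin_i
                / (m_star + m_planet) ** (2 / 3) / math.sqrt(1 - e ** 2))

    print(rv_semi_amplitude(4.08))     # ~130 m/s for a Jupiter-mass planet on a 4-day orbit
    print(rv_semi_amplitude(4332.6))   # ~12.5 m/s for Jupiter's actual ~12-year orbit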

The group’s measurements revealed a new exoplanet around the main sequence star YBP401.  They were also able to get better measurements of two stars (YBP1194 and YBP1514) with known hot Jupiters.  This brought the total number of hot Jupiters to 3 out of 88 stars.  Although 3 might not seem like a very big number, it is larger than the fraction of hot Jupiters found around field stars (stars not in clusters).  For the statistical analysis, Brucalassi compares the number of exoplanets with the number of main sequence and subgiant stars, i.e. stars that are not yet at the ends of their lives.  Of the 88 stars, 66 are main sequence or subgiant, and of those only 53 are not binary stars.  Most radial velocity studies choose not to observe binary stars, so it is important to compare numbers with that in mind.  A previous study from 2012 found a hot Jupiter frequency of 1.2 ± 0.38% around field stars.  Brucalassi finds 4.5 (+4.5/−2.5)% when comparing with only single stars (not including binaries) in M67.  To compare with statistics from the Kepler mission, binaries are included, as Kepler also surveys binaries, and the percentage for hot Jupiters in a cluster is 5.6 (+5.4/−2.6)%.  The Kepler mission finds a frequency of hot Jupiters of just ~0.4%, which is considerably lower.  And this trend isn’t seen just in M67.  Combining radial velocity surveys for the clusters M67, Hyades, and Praesepe, there are 6 hot Jupiters in 240 surveyed stars, whereas the 2012 study found only 12 in a survey of 836 field stars.

It’s known that systems with more metals tend to produce more planets, and the star’s mass may also have an effect on planet production.  However, the cluster stars and field stars are on average the same mass, so this alone cannot account for the differences.  M67 is also at solar metallicity (i.e. its stars tend to have the same amount of metals as our Sun), so this cannot account for the excess of hot Jupiters either.  Brucalassi concludes that the high number of hot Jupiters is due to the environment.  Past simulations show that stars in a crowded cluster environment will experience at least one close encounter with another star, which is all that is needed to drive a Jupiter into a closer orbit.  This new research gives further evidence to this theory, putting us one step closer to understanding how exoplanets can form.


Figure 2: An artist’s rendition of the new hot Jupiter. Click on the image for a full animated video of the M67 cluster. Courtesy of the ESO press release (#eso1621).

by Mara Johnson-Groh at June 22, 2016 03:35 PM

Jester - Resonaances

Off we go
The LHC has been back in action since last weekend, again colliding protons with 13 TeV energy. The weasels' conspiracy was foiled, and the perpetrators were exemplarily electrocuted. PhD students have been deployed around the LHC perimeter to counter any further sabotage attempts (stoats are known to have been in league with weasels in the past). The period that begins now may prove to be the most exciting time for particle physics in this century.  Or the most disappointing.

The beam intensity is still a factor of 10 below the nominal one, so the harvest of last weekend is a meager 40 inverse picobarns. But the number of proton bunches in the beam is quickly increasing, and once it reaches O(2000), the data will stream at a rate of an inverse femtobarn per week or more. For the nearest future, the plan is to have a few inverse femtobarns on tape by mid-July, which would roughly double the current 13 TeV dataset. The first analyses of this chunk of data should be presented around the time of the ICHEP conference in early August. At that point we will know whether the 750 GeV particle is real. Celebrations will begin if the significance of the diphoton peak increases after adding the new data, even if the statistics is not enough to officially announce a discovery. In the best of all worlds, we may also get a hint of a matching 750 GeV peak in another decay channel (ZZ, Z-photon, dilepton, t-tbar,...) which would help focus our model building. On the other hand, if the significance of the diphoton peak drops in August, there will be a massive hangover...

By the end of October, when the 2016 proton collisions are scheduled to end, the LHC hopes to collect some 20 inverse femtobarns of data. This should already give us a rough feeling of new physics within the reach of the LHC. If a hint of another resonance is seen at that point, one will surely be able to confirm or refute it with the data collected in the following years.  If nothing is seen... then you should start telling yourself that condensed matter physics is also sort of fundamental,  or that systematic uncertainties in astrophysics are not so bad after all...  In any scenario, by December, when first analyses of the full  2016 dataset will be released,  we will know infinitely more than we do today.

So fasten your seat belts and get ready for a (hopefully) bumpy ride. Serious rumors should start showing up on blogs and twitter starting from July.

by Jester (noreply@blogger.com) at June 22, 2016 03:27 PM

Jester - Resonaances

Weekend Plot: The king is dead (long live the king)
The new diphoton king has been discussed at length in the blogosphere, but the late diboson king also deserves a word or two. Recall that last summer ATLAS announced a 3 sigma excess in the dijet invariant mass distribution where each jet resembles a fast moving W or Z boson decaying to a pair of quarks. This excess can be interpreted as a 2 TeV resonance decaying to a pair of W or Z bosons. For example, it could be a heavy cousin of the W boson, W' in short, decaying to a W and a Z boson. Merely a month ago this paper argued that the excess remains statistically significant after combining several different CMS and ATLAS diboson resonance run-1 analyses in hadronic and leptonic channels of W and Z decay. However, the hammer came down seconds before the diphoton excess was announced: diboson resonance searches based on the LHC 13 TeV collision data do not show anything interesting around 2 TeV. This is a serious problem for any new physics interpretation of the excess since, for this mass scale, the statistical power of the run-2 and run-1 data is comparable. The tension is summarized in this plot:
The green bars show the 1 and 2 sigma best fit cross section to the diboson excess. The one on the left takes into account only the hadronic channel in ATLAS, where the excess is most significant; the one on the right is based on the combined run-1 data. The red lines are the limits from run-2 searches in ATLAS and CMS, scaled to 8 TeV cross sections assuming W' is produced in quark-antiquark collisions. Clearly, the best fit region for the 8 TeV data is excluded by the new 13 TeV data. I display results for the W' hypothesis, however conclusions are similar (or more pessimistic) for other hypotheses leading to WW and/or ZZ final states. All in all, the ATLAS diboson excess is not formally buried yet, but at this point any reversal of fortune would be a miracle.

by Jester (noreply@blogger.com) at June 22, 2016 03:27 PM

Jon Butterworth - Life and Physics

Don’t let’s quit

This doesn’t belong on the Guardian Science pages, because even though universities and science will suffer if Britain leaves the EU, that’s not my main reason for voting ‘remain’. But lots of friends have been writing or talking about their choice, and the difficulties of making it, and I feel the need to write my own reasons down even if everyone is saturated by now. It’s nearly over, after all.

Even though the EU is obviously imperfect, a pragmatic compromise, I will vote to stay in with hope and enthusiasm. In fact, I’ll do so partly because it’s an imperfect, pragmatic compromise.

I realise there are a number of possible reasons for voting to leave the EU, some better than others, but please don’t.

Democracy

Maybe you’re bothered because EU democracy isn’t perfect. Also we can get outvoted on some things (these are two different points. Being outvoted sometimes is actually democratic. Some limitations on EU democracy are there to stop countries being outvoted by other countries too often). But it sort of works and it can be improved, especially if we took EU elections more seriously after all this. And we’re still ‘sovereign’, simply because we can vote to leave if we get outvoted on something important enough.

Misplaced nostalgia and worse

Maybe you don’t like foreigners, or you want to ‘Take Britain back’  (presumably to some fantasy dreamworld circa 1958). Unlucky; the world has moved on and will continue to do so whatever the result this week. I don’t have a lot of sympathy, frankly, and I don’t think this applies to (m)any of my ‘leave’ friends.

Lies

Maybe you believed the lies about the £350m we don’t send, which wouldn’t save the NHS anyway even if we did, or the idea that new countries are lining up to join and we couldn’t stop them if we wanted. If so please look at e.g. https://fullfact.org/europe/ for help. Some people I love and respect have believed some of these lies, and that has made me cross. These aren’t matters of opinion, and the fact that the ‘leave’ campaign repeats them over and over shows both their contempt for the intelligence of voters and the weakness of their case. If you still want to leave, knowing the facts, then fair enough. But don’t do it on a lie.

We need change

Maybe you have a strong desire for change, because bits of British life are rubbish and unfair. In this case, the chances are your desire for change is directed at entirely the wrong target. The EU is not that powerful in terms of its direct effects on everyday life. The main thing it does is provide a mechanism for resolving common issues between EU member states. It is  a vast improvement on the violent means used in previous centuries. It spreads rights and standards to the citizens and industries of members states, making trade and travel safer and easier. And it amplifies our collective voice in global politics.

People who blame the EU for the injustices of British life are being made fools of by unscrupulous politicians, media moguls and others who have for years been content to see the destruction of British industry, the undermining of workers’ rights, the underfunding of the NHS and education, relentless attacks on national institutions such as the BBC, neglect of whole regions of the country and more.

These are the people now telling us to cut off our nose to spite our face, and they are exploiting the discontent they have fostered to persuade us this would be a good idea, by blaming the EU for choices made by UK governments.

They are quite happy for industry to move to lower-wage economies in the developing world when it suits them, but they don’t want us agreeing common standards, protections and practices with our EU partners. They don’t like nation states clubbing together, because that can make trouble for multinationals, and (in principle at least) threatens their ability to cash in on exploitative labour practices and tax havens. They would much rather play nation off against nation.

If…

If we vote to leave, the next few years will be dominated by attempts to negotiate something from the wreckage, against the background of a shrinking economy and a dysfunctional political class.  This will do nothing to fix inequality and the social problems we face (and I find it utterly implausible that people like Bojo, IDS or Farage would even want that). Those issues will be neglected or worse. Possibly this distraction, which is already present, is one reason some in the Conservative Party have involved us all in their internal power struggles.

If we vote remain, I hope the desire for change is preserved beyond Thursday, and is focussed not on irresponsible ‘blame the foreigner’ games, but on real politics, of hope and pragmatism, where it can make a positive difference.

I know there’s no physics here. This is the ‘life’ bit, and apart from the facts, it’s just my opinion. Before writing it I said this on twitter:

and it’s probably still true that it’s better than the above. Certainly it’s shorter. But I had to try my own words.

I’m not going to enable comments here since they can be added on twitter and facebook if you feel the urge, and I can’t keep up with too many threads.


Filed under: Politics

by Jon Butterworth at June 22, 2016 06:19 AM

June 21, 2016

astrobites - astro-ph reader's digest

Chaos before Shining Stars

Title: GMC Collisions as Triggers of Star Formation. II. 3D Turbulent, Magnetized Simulations.

Authors: Benjamin Wu, Jonathan C. Tan, Fumitaka Nakamura, Sven Van Loo, Duncan Christie, and David Collins.

First Author’s Institution: Department of Physics, University of Florida, Gainesville, Florida, USA.

Paper Status: Submitted to ApJ.


Featured image (left panel of Figure 5 in the original paper): visualization of the gas surface density and the magnetic field orientations for the cloud collision simulation. The beautiful and accurate depiction of the magnetic fields is the marriage between science and art, created by the Line Integral Convolution (LIC) method.

It is human nature to want to smash things, and warmhearted astronomers are no exception. We have smashed galaxies together to make stars and black holes. We do it for science! When it comes to modeling astrophysical phenomena, we have to go beyond idealized setups; smashing things together is a good way to create realistic environments. The main purpose of theoretical modeling is to help make sense of the complexity we see in nature after all.

Star formation occurs in gas clouds known as giant molecular clouds consisting mainly of molecular hydrogen. Like clouds in the sky, molecular clouds take all kinds of shapes and move at different speeds at different times. The same is true for star-forming molecular clouds. We can now actually observe the dynamics of such molecular clouds in high resolution.

Star-forming molecular clouds are shaped by turbulence and magnetic fields. However, modeling their effects on star formation is a nontrivial business because turbulence and magnetic fields affect each other, and one has to resort to numerical simulations. The authors of today’s paper simulated the collisions of pairs of turbulent spherical clouds with different magnetic fields, collision velocities, and collision trajectories. Their work includes an extensive series of simulations to probe how different collision parameters determine the resultant star-forming environments. Using these simulations, the authors show how observers can test whether real molecular clouds have experienced collisions in the past.

Wu+16_ICs

Figure 1. Initial states of the cloud collision simulation. The upper panel shows the gas surface density together with the initial magnetic field lines (grey lines), the bottom panel shows the gas temperature with the gas velocity vectors (black arrows). When the collision simulation begins, the two clouds move towards each other at 10 km/s.

Figure 1 shows the initial conditions of the simulations. Two turbulent spherical clouds with radii of 20 pc are placed close together with a uniform magnetic field angled 60° relative to the x axis. In the colliding run, a relative velocity of 10 km/s is given to the clouds. A non-colliding control run is given no initial velocity besides the initial turbulence. By comparing the results of the two runs, the authors examine the physical properties of the clouds that result from such a collision.

Wu+16_CombinedPanels

Figure 2. Surface gas density snapshots of the colliding (upper) and the non-colliding (lower) run at 1 and 4 million years. Grey streamlines denote the orientations of the magnetic field lines.

Figure 2 compares the snapshots of gas surface density at two different times for the colliding (upper panel) and the non-colliding (lower panel) run. The filamentary structures observed early on arise from initial turbulence. High-density structures develop faster and are more compact in the colliding case.

The main finding of the paper is that gas filaments and magnetic fields are more parallel to each other after cloud collisions and more perpendicular without collisions. The orientations of filaments and magnetic fields may therefore be used to test whether a cloud has experienced collisions.

Wu+16_HROs

Figure 3. The Histogram of Relative Orientations (HROs) for the colliding (upper) and the non-colliding (lower) run. The colors black, blue, and red correspond to the low, intermediate, and high density gas. The angle Φ represents the angle between the magnetic field lines and the gas filaments.

To quantify their finding, the authors make use of the Histogram of Relative Orientations (HROs). Figure 3 shows the HROs for the colliding (upper) and the non-colliding (lower) run. A value of Φ = 0° means aligned gas filaments and magnetic fields, while Φ = ±90° means they are perpendicular to each other.  The black, blue, and red curves show the relative amount of gas with different Φ at low, medium, and high density. In both the colliding and non-colliding cases, the low-density gas mostly has parallel alignments (peaking at Φ = 0°). For the high-density gas, however, the non-colliding run has relatively more contributions near Φ = ±90°. This distinction may serve as an observational test for gas cloud collisions. In their upcoming paper, the authors will simulate the star formation resulting from such collisions. Stay tuned!
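For readers curious how such a histogram is built in practice, here is a minimal sketch in the spirit of HRO analyses (my own construction, not code from the paper; the function name, array layout, and binning are assumptions): it measures, pixel by pixel, the angle between the plane-of-sky field and the local iso-density contour, then histograms those angles.

    # Minimal HRO-style sketch: angle between the magnetic field and iso-density
    # contours on a 2D map (arrays indexed as [y, x]); 0 deg = parallel, +/-90 deg = perpendicular.
    import numpy as np

    def hro_angles(density, bx, by):
        gy, gx = np.gradient(density)        # density gradient (normal to contours)
        tx, ty = -gy, gx                     # tangent to the iso-density contours
        dot = bx * tx + by * ty
        cross = bx * ty - by * tx
        phi = np.degrees(np.arctan2(cross, dot))
        return (phi + 90.0) % 180.0 - 90.0   # fold directed angles into [-90, 90)

    # Toy usage with random maps (real inputs would be simulation or observed maps):
    rng = np.random.default_rng(0)
    density = rng.random((64, 64))
    bx, by = rng.normal(size=(2, 64, 64))
    counts, edges = np.histogram(hro_angles(density, bx, by), bins=18, range=(-90, 90))
    print(counts)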

by Benny Tsang at June 21, 2016 06:14 PM

Symmetrybreaking - Fermilab/SLAC

All four one and one for all

A theory of everything would unite the four forces of nature, but is such a thing possible?

Over the centuries, physicists have made giant strides in understanding and predicting the physical world by connecting phenomena that look very different on the surface. 

One of the great success stories in physics is the unification of electricity and magnetism into the electromagnetic force in the 19th century. Experiments showed that electrical currents could deflect magnetic compass needles and that moving magnets could produce currents.

Then physicists linked another force, the weak force, with that electromagnetic force, forming a theory of electroweak interactions. Some physicists think the logical next step is merging all four fundamental forces—gravity, electromagnetism, the weak force and the strong force—into a single mathematical framework: a theory of everything.

Those four fundamental forces of nature are radically different in strength and behavior. And while reality has cooperated with the human habit of finding patterns so far, creating a theory of everything is perhaps the most difficult endeavor in physics.

“On some level we don't necessarily have to expect that [a theory of everything] exists,” says Cynthia Keeler, a string theorist at the Niels Bohr Institute in Denmark. “I have a little optimism about it because historically, we have been able to make various unifications. None of those had to be true.”

Despite the difficulty, the potential rewards of unification are great enough to keep physicists searching. Along the way, they’ve discovered new things they wouldn’t have learned had it not been for the quest to find a theory of everything.

Illustration by Sandbox Studio, Chicago with Corinne Mucha

United we hope to stand

No one has yet crafted a complete theory of everything.

It’s hard to unify all of the forces when you can’t even get all of them to work at the same scale. Gravity in particular tends to be a tricky force, and no one has come up with a way of describing the force at the smallest (quantum) level.

Physicists such as Albert Einstein thought seriously about whether gravity could be unified with the electromagnetic force. After all, general relativity had shown that electric and magnetic fields produce gravity and that gravity should also make electromagnetic waves, or light. But combining gravity and electromagnetism, a mission called unified field theory, turned out to be far more complicated than making the electromagnetic theory work. This was partly because there was (and is) no good theory of quantum gravity, but also because physicists needed to incorporate the strong and weak forces.

A different idea, quantum field theory, combines Einstein’s special theory of relativity with quantum mechanics to explain the behavior of particles, but it fails horribly for gravity. That’s largely because anything with energy (or mass, thanks to relativity) creates a gravitational attraction—including gravity itself. To oversimplify somewhat, the gravitational interaction between two particles has a certain amount of energy, which produces an additional gravitational interaction with its own energy, and so on, spiraling to higher energies with each extra piece.

“One of the first things you learn about quantum gravity is that quantum field theory probably isn’t the answer,” says Robert McNees, a physicist at Loyola University Chicago. “Quantum gravity is hard because we have to come up with something new.”

Illustration by Sandbox Studio, Chicago with Corinne Mucha

An evolution of theories

The best-known candidate for a theory of everything is string theory, in which the fundamental objects are not particles but strings that stretch out in one dimension.  

Strings were proposed in the 1970s to try to explain the strong force. This first string theory proved to be unnecessary, but physicists realized it could be joined to another theory, called Kaluza-Klein theory, as a possible explanation of quantum gravity.

String theory expresses quantum gravity in two dimensions rather than the four of space-time, bypassing all the problems of the quantum field theory approach but introducing other complications, namely six extra dimensions that must be curled up on a scale too small to detect.

Unfortunately, string theory has yet to reproduce the well-tested predictions of the Standard Model.

Another well-known idea is the sci-fi-sounding “loop quantum gravity,” in which space-time on the smallest scales is made of tiny loops in a flexible mesh that produces gravity as we know it.

The idea that space-time is made up of smaller objects, just as matter is made of particles, is not unique to loop quantum gravity. There are many others with equally Jabberwockian names: twistors, causal set theory, quantum graphity and so on. Granular space-time might even explain why our universe has four dimensions rather than some other number.

Loop quantum gravity’s trouble is that it can’t replicate gravity at large scales, such as the size of the solar system, as described by general relativity.

None of these theories has yet succeeded in producing a theory of everything, in part because it's so hard to test them.

“Quantum gravity is expected to kick in only at energies higher than anything that we can currently produce in a lab,” says Lisa Glaser, who works on causal set quantum gravity at the University of Nottingham. “The hope in many theories is now to predict cumulative effects,” such as unexpected black hole behavior during collisions like the ones detected recently by LIGO.

Today, many of the theories first proposed as theories of everything have moved beyond unifying the forces. For example, much of the current research in string theory is potentially important for understanding the hot soup of particles known as the quark-gluon plasma, along with the complex behavior of electrons in very cold materials like superconductors—something seemingly as far removed from quantum gravity as could be. 

“On a day-to-day basis, I may not be doing a calculation that has anything directly to do with string theory,” Keeler says. “But it’s all about these ideas that came from string theory.”

Finding a theory of everything is unlikely to change the way most of us go about our business, even if our business is science. That’s the normal way of things: Chemists and electricians don't need to use quantum electrodynamics, even though that theory underlies their work. But finding such a theory could change the way we think of the universe on a fundamental level.

Even a successful theory of everything is unlikely to be a final theory. If we’ve learned anything from 150 years of unification, it’s that each step toward bringing theories together uncovers something new to learn.

by Matthew R. Francis at June 21, 2016 01:00 PM

Axel Maas - Looking Inside the Standard Model

How to search for dark, unknown things: A bachelor thesis
Today, I would like to write about a recently finished bachelor thesis on the topic of dark matter and the Higgs. Though I will also present the results, the main aim of this entry is to describe an example of such a bachelor thesis in my group. I will try to follow up with similar entries in the future, to give those interested in working in particle physics an idea of what one can do already at a very early stage in one's studies.

The framework of the thesis is the idea that dark matter could interact with the Higgs particle. This is a serious possibility, as both objects are somehow related to mass. There is also not yet any substantial reason why this should not be the case. The unfortunate problem is only: how strong is this effect? Can we measure it, e.g. in the experiments at CERN?

In a master thesis, we are looking into the dynamical features of this idea. This is ongoing, and something I will certainly write about later. Knowing the dynamics, however, is only the first step towards connecting the theory to experiment. To do so, we need the basic properties of the theory. This input will then be put through a simulation of what happens in the experiment. Only this result is the one really interesting for experimental physicists. They then look at what any kind of imperfections of the experiment change, and then they can conclude whether they will be able to detect something. Or not.

In the thesis, we did not yet have the results from the master student's work, so we parametrized the possible outcomes. This mainly meant having the mass and the strength of the interaction between the Higgs and the dark matter particle to play around with. This gave us what we call an effective theory. Such a theory does not describe every detail, but it is sufficiently close to study a particular aspect of a theory. In this case, how dark matter should interact with the Higgs at the CERN experiments.

With this effective theory, it was then possible to use simulations of what happens in the experiment. Since dark matter cannot, as the name says, be directly seen, we somehow needed a marker to say that it has been there. For that purpose we chose the so-called associated production mode.

We knew that the dark matter would escape the experiment undetected. In jargon, this is called missing energy, since we miss the energy of the dark matter particles when we account for all we see. Since we knew what went in, and know that what goes in must come out, anything not accounted for must have been carried away by something we could not directly see. To make sure that this came from an interaction with the Higgs, we needed a tracer that a Higgs had been involved. The simplest solution was to require that there is still a Higgs. Also, there are deeper reasons which require that dark matter in this theory should not only arrive together with a Higgs particle, but should also be produced from a Higgs particle before the emission of the dark matter particles. The simplest way to check for this is to require, for technical reasons, that besides the Higgs there is also a so-called Z-boson in the end. Thus, we had what we called a signature: look for a Higgs, a Z-boson, and missing energy.

There is, however, one unfortunate thing in known particle physics which makes this more complicated: neutrinos. These particles are also essentially undetectable for an experiment at the LHC. Thus, when produced, they will also escape undetected as missing energy. Since we do not detect either dark matter or neutrinos, we cannot decide what actually escaped. Unfortunately, tagging with the Higgs and the Z does not help, as neutrinos can also be produced together with them. This is what we call a background to our signal. Thus, it was necessary to account for this background.

Fortunately, there are experiments which can detect, with a lot of patience, neutrinos. They are very different from the ones we have at the LHC. But they gave us a lot of information on neutrinos. Hence, we knew how often neutrinos would be produced in the experiment. So, we would only need to remove this known background from what the simulation gives. Whatever is left would then be the signal of dark matter. If the remainder were large enough, we would be able to see the dark matter in the experiment. Of course, there are many subtleties involved in this process, which I will skip.
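
To make this last step concrete, here is a toy sketch (with entirely made-up numbers, not the student's actual simulation) of how one can judge whether the leftover signal sticks out above the neutrino background, using the crude estimate signal/sqrt(background):

```python
import math

# Toy illustration only: hypothetical event counts for the
# "Higgs + Z + missing energy" signature after all selections.
def significance(n_signal, n_background):
    """Crude estimate of how far the signal sticks out above the background.
    Real analyses use full statistical likelihoods instead."""
    return n_signal / math.sqrt(n_background)

n_background = 400.0            # hypothetical events from Z decaying to neutrinos etc.
for coupling in (0.1, 0.5, 1.0):
    # assume, purely for this toy, that the signal grows with the square
    # of the Higgs-dark-matter interaction strength
    n_signal = 50.0 * coupling**2
    print(coupling, round(significance(n_signal, n_background), 2))
```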

So the student simulated both cases and determined the signal strength. From that she could deduce that the signal grows quickly with the strength of the interaction. She also found that the signal becomes stronger if the dark matter particles become lighter. That is so because there is only a finite amount of energy available to produce them, and the more energy is left over to make the dark matter particles move, the easier it gets to produce them, an effect known in physics as phase space. In addition, she found that if the dark matter particles have half the mass of the Higgs, their production also becomes very efficient. The reason is a resonance. Just like two sounds amplify each other if they are at the same frequency, such amplifications can happen in particle physics.

The final outcome of the bachelor thesis thus tells us, for given values of the two parameters of the effective theory, how strong our signal would be. Once we know these values from our microscopic theory in the master project, we will know whether we have a chance to see these particles in this type of experiment.

by Axel Maas (noreply@blogger.com) at June 21, 2016 07:35 AM

June 20, 2016

Sean Carroll - Preposterous Universe

Father of the Big Bang

Georges Lemaître died fifty years ago today, on 20 June 1966. If anyone deserves the title “Father of the Big Bang,” it would be him. Both because he investigated and popularized the Big Bang model, and because he was an actual Father, in the sense of being a Roman Catholic priest. (Which presumably excludes him from being an actual small-f father, but okay.)

John Farrell, author of a biography of Lemaître, has put together a nice video commemoration: “The Greatest Scientist You’ve Never Heard Of.” I of course have heard of him, but I agree that Lemaître isn’t as famous as he deserves.

The Greatest Scientist You've Never Heard Of from Farrellmedia on Vimeo.

by Sean Carroll at June 20, 2016 09:10 PM

Robert Helling - atdotde

Restoring deleted /etc from TimeMachine
Yesterday, I managed to empty the /etc directory on my macbook (don't ask how I did it. I was working on subsurface and had written a perl script to move system files around that had to be run with sudo. And I was still debugging...).

Anyway, once I realized what the problem was I did some googling but did not find the answer. So here, as a service to fellow humans googling for help is how to fix this.

The problem is that in /etc all kinds of system configuration files are stored and without it the system does not know anymore how to do a lot of things. For example it contains /etc/passwd which contains a list of all users, their home directories and similar things. Or /etc/shadow which contains (hashed) passwords or, and this was most relevant in my case, /etc/sudoers which contains a list of users who are allowed to run commands with sudo, i.e. execute commands with administrator privileges (in the GUI this shows as a modal dialog asking you to type in your password to proceed).

In my case, all was gone. But, luckily enough, I had a time machine backup. So I could go 30 minutes back in time and restore the directory contents.

The problem was that after restoring it, it ended up as a symlink to /private/etc and user helling wasn't allowed to access its contents. And I could not sudo the access since the system could not determine that I was allowed to sudo, because it could not read /etc/sudoers.

I tried a couple of things including a reboot (as a last resort I figured I could always boot in target disk mode and somehow fix the directory) but it remained in /private/etc and I could not access it.

Finally I found the solution (so here it is): I could look at the folder in Finder (it had a red no entry sign on it, meaning that I could not open it). But I could right click and select Information, and there I could open the lock by typing in my password (no idea why that worked) and give myself read (and for that matter write) permissions, and then everything was fine again.

by Robert Helling (noreply@blogger.com) at June 20, 2016 08:12 PM

Tommaso Dorigo - Scientificblogging

Program Of The Statistics Session At QCHS 2016
I am happy to announce here that a session on "Statistical Methods for Physics Analysis in the XXI Century" will take place at the "Quark Confinement and the Hadron Spectrum" conference, which will be held in Thessaloniki on August 28th to September 3rd this year. I have already mentioned this a few weeks ago, but now I can release a tentative schedule of the two afternoons devoted to the topic.

read more

by Tommaso Dorigo at June 20, 2016 10:07 AM

Lubos Motl - string vacua and pheno

Formal string theory is physics, not mathematics
I was sent a book on string theory by Joseph Conlon and I pretty much hate it. However, it's the concise, frank comments such as his remark at 4gravitons that make it really transparent why I couldn't endorse almost anything that this Gentleman says.
I can’t agree on the sociology. Most of what goes under the name of ‘formal string theory’ (including the majority of what goes under the name of QFT) is far closer in spirit and motivation to what goes on in mathematics departments than in physics departments. While people working here like to call themselves ‘physicists’, in reality what is done has very little in common with what goes on with the rest of the physics department.
What? If you know the amusing quiz "Did Al Gore or Unabomber say it?", these sentences could be similarly used in the quiz "Did Conlon or Sm*lin say it?".




The motivation of formal string theory is to understand the truly fundamental ideas in string theory which is assumed by the practitioners to be the theory explaining or predicting everything in the Universe that may be explained or predicted. How may someone say that the motivation is similar to that of mathematicians? By definition, mathematicians study the truth values of propositions within axiomatic systems they invented, whether or not they have something to do with any real-world phenomena.

In the past, physicists and mathematicians co-existed and almost everyone was both (and my Alma Mater was the Department of Mathematics and Physics – in Prague, people acknowledge the proximity of the subjects). But for more than 100 years, the precise definition of mathematics as something independent of the "facts about Nature" has been carefully obeyed. Although formal string theorists would be mostly OK if they worked in mathematics departments, and some of them do, it would be a wrong classification of the subject.




Formal string theory uses mathematics more intensely, more carefully, and it often uses more advanced mathematics than other parts of physics. But all these differences are purely quantitative – to some extent, all of physics depends on mathematics that has to be done carefully enough and that isn't quite trivial – while the difference between mathematics and physics is qualitative.

Conlon also says that what formal string theorists do has very little in common with the work in the rest of the physics department. One problem with the assertion is that all work in a physics department studies phenomena that in principle follow from the most fundamental laws of Nature – which most of the top formal theorists believe to be the laws of string theory. For this reason, to say that these subdisciplines have nothing in common is laughable.

But they surely focus on very different aspects of the physical objects or reality. However, that's true for basically every other project investigated by people in the physics departments. Lene Hau is playing with some exotic states of materials that allow her to slow down light basically to zero speed. Now, what does it have to do with the work in the rest of her physics department? No classic condensed matter physicists are talking about slow light. For particle physicists, the speed of light is basically always 299,792,458 m/s. Someone else measures the magnetic moment of the electron with the accuracy of one part per quadrillion. It's all about the last digits. What do those have to do with the work in the rest of the physics department?

People are simply doing different things. For Mr Conlon to try to single out formal string theory is absolutely dishonest and totally idiotic.

He seems to be unaware of lots of totally basic facts – such as the fact that his very subfield of string phenomenology is also just a ramification of research in formal string theory. The physicists who first found the heterotic string were doing formal string theory – very analogous activity to what formal string theorists are doing today. People who found its Calabi-Yau compactifications were really doing formal string theory, too. And so on. Conlon's own work is just a minor derivative enterprise extending some previous work that may be mostly classified as formal string theory. How could his detailed work belong to physics if the major insights in his subdiscipline wouldn't belong to physics? It makes absolutely no sense.

Also, formal string theorists are in no way the "first generations of formal theorists". Formal theory has been around for a long time and it's been important at all times. The categorization is sometimes ambiguous. But I think it's right to say that e.g. Sidney Coleman was mostly a formal theorist (in quantum field theory).

The attempted demonization of formal string theory by Mr Conlon makes absolutely no sense. It's at least as irrational and as unjustifiable as the demonization of Jewish physicists in Germany of the 1930s. In both cases, the demonized entities are really responsible for something like 50% of the progress in cutting-edge physics.
From my perspective in string pheno/cosmo/astro, people go into formal topics because they are afraid of real physics – they want to be in areas that are permanently safe, and where their ideas can never be killed by a rude injection of data.
He's so hostile that the quote above could have been said by Peter W*it or the Unabomber, after all. Are you serious, Conlon? And how many papers of yours have interacted with some "rude injection of data"? There have been virtually no data about this kind of questions in your lifetime so what the hell are you talking about?

By definition, formal theorists are indeed people who generally don't want to deal with the daily dirty doses of experimental data. But there's absolutely nothing wrong about it. After all, Albert Einstein could have been classified in the same way, and so could Paul Dirac and others. Formal theorists are focusing on a careful mathematical thinking about known facts which seems a more reliable way for them to find the truth about Nature. And this opinion has been shown precious in so many examples. Perhaps a majority of the important developments in modern physics may be attributed to deeply thinking theorists who didn't want to deal with "rude injections of data".

Another aspect of the quote that is completely wrong is the identification of the "permanent safety" with the "avoidance of rude injections of data". These two things aren't the same. They're not even close. None of them is a subset of the other.

First, even if formal string theory were classified as mathematics, it still wouldn't mean that its papers are permanently safe. If someone writes a wrong paper or makes an invalid conjecture, the error may often be found and a counterexample may be invented. Formal theoretical papers and even mathematical papers run pretty much the same risk of being discredited as papers about raw experimental data. And string theorists often realize that the mathematical investigation of a particular configuration in string theory may be interpreted as a complete analogy of an experiment. Such a calculation may test a "stringy principle" just like regular experiments are testing particular theories.

If you write a mathematical paper really carefully and you prove something, it's probably going to be safe. To a lesser extent, that's true in formal theoretical physics, too (the extent is lesser because physics can't ever be quite rigorous because we don't know all the right axioms of Nature). But there's nothing wrong about doing careful work that is likely to withstand the test of time. On the contrary, it's better when the theory papers are of this kind – theorists should try to achieve these adjectives. So Conlon's logic is perverse if he presents this kind of "permanent safety" as a disadvantage. Permanently safe theoretical papers would be those that are done really well. They are an ideal that may and should be approached by the theorists but it can't ever be quite reached.

Second, we must carefully ask what is or isn't "permanently safe". In the previous paragraphs, I wrote that even in the absence of raw experimental data, papers or propositions in theoretical physics (and even mathematics) aren't "permanently safe". They can still be shown wrong. On the other hand, Mr Conlon talks about "areas" that are permanently safe. What is exactly this "area"? If he means the whole subdiscipline of formal theory in high-energy physics, that subdiscipline is indeed permanently safe (assuming that the human civilization won't be exterminated or completely intellectually crippled in some way), and it should be. It is just as permanently safe as condensed matter physics. As physics is making progress, people move to the research of new questions. But they still study solid materials – and similarly, they study the most fundamental and theoretical aspects of the laws of physics.

So what the hell is your problem? People just pick fields. Some people pick formal string theory, other people pick other subdisciplines. No particular paper or project or research direction is permanently safe in any of these subdisciplines. But all sufficiently widely defined subdisciplines are permanently safe and that's a good thing, too. For Mr Conlon to single out formal theory for this assault proves that he lacks the integrity needed to do science. There is absolutely no justification for such singling out.
For those of a certain generation – who did PhDs before or within the 10-15 years following the construction of the Standard Model – this is less true, and they generally have a good knowledge of particle physics. But I would say probably >90% of formal people under the age of 40 have basically zero ability to contribute anything in the pheno/cosmo areas; I have talked to enough to know that most have little real knowledge of how the Standard Model (of either particle physics or cosmology) works, how experiments work, or how ideas to go beyond the SM work.
This is of course a massive attack against a large group of (young) theorists. Tetragraviton objects with a counterexample, Edward Hughes (whom Mr Conlon knows), and I could bring even better examples. The quality and versatility of various people differs, too. I don't want to go into names because a credible grading of all formal theorists below 40 years of age would need a far more careful research.

Instead, let me assume that what Mr Conlon writes is true. He complains that the formal theorists couldn't usefully do phenomenology. Great and what? In the same way, Mr Conlon or other phenomenologists would be unable to do formal theory or most of its subdivisions. What's the difference? Why would he be demanding that people in a different subdiscipline of high-energy physics should be able to do what he does?

Phenomenology and formal theory are overlapping but they have also been "partly segregated" for quite some time. When Paul Ginsparg founded the arXiv.org (originally xxx.lanl.gov) server around 1991, he already established two different archives, hep-ph and hep-th (phenomenology and theory), and invented the clever name "phenomenology" for the first group. This classification reflected a genuine soft split of the community that existed in the early 1990s. In fact, such a split actually existed before string theory was born. It was just a "finer split" that continued the separation into "theory and experiment", something that could have been observed in physics for more than a century.
I have heard the ‘when something exciting happens we will move in and sort it out’ attitude of formal theorists the entire time I have been in the subject – and it’s deluded BS.
Tetragraviton replies that people are far more flexible than Mr Conlon thinks. But I think that the main problem is that Mr Conlon has wrong expectations about what people should be doing. When someone is in love with things like monstrous moonshine, it's rather unlikely that he will immediately join some "dirty" experiment-driven activity. Instead, such a person – just like every other person – may be waiting for interesting events that happen close enough to his specialization. He has a higher probability to join some "uprising" that is closer to his previous work; and a lower probability to join something very different.

But what's obvious, important, and what seems to be completely misunderstood by Mr Conlon is that there must exist (sufficiently many) formal theorists because the evolution of physics without the most theoretical branch would be unavoidably unhealthy. The people who do primarily formal theory may be good at other things – phenomenology, experiments, tennis, or something else. It's good when they are but that's not their primary task. Even if you can't get people who can be like Enrico Fermi and be good at these different enough methodologies or subdisciplines, it's still true that physics – and string theory – simply needs formal theory.

The most effective way to increase the number of "really versatile and smart" formal theorists is to stop the bullying that surely repels a fraction of the greatest young brains from theoretical physics. But whatever is the fraction of the young big shots who choose one subject or another, it's obvious that formal string theory has to be studied.

On the 4gravitons blog, Haelfix posted a sensible comment about the reasons why many people prefer formal string theory over string phenomenology: the search for the right vacuum seems too hard to them because of the large number of solutions, and because of the inability to compute all things "quite exactly" even in a well-defined string compactification (generic quantities are only calculable in various perturbative schemes etc.). For this reason, many people believe that before physicists identify the "precisely correct stringy model" to describe the world around us, some qualitative progress must take place in the foundations of string theory first – and that's why they think that it's a faster route to progress to study formal string theory at this point. Recent decades seem to vindicate them – formal string theory has produced significantly more profound changes than string phenomenology after the mid 1990s. Those victories of formal string theory include D-branes and all the things they helped to spark - dualities, M-theory, AdS/CFT correspondence, advances in black hole information puzzle, the landscape as a sketch of the map of string theory's solutions, and other things. String phenomenology has worked pretty nicely but the changes since the 1980s were relatively incremental in comparison.

Mr Conlon seems to miss all these things – and instead seems to be full of superficial and fundamentally misguided Šmoit-like vitriol that makes him attack whole essential subdisciplines of physics.

by Luboš Motl (noreply@blogger.com) at June 20, 2016 07:51 AM

astrobites - astro-ph reader's digest

Highlighting AAS Chambliss Contestants

The Chambliss Astronomy Achievement Student Awards are given out at each AAS meeting to both undergraduate and graduate students who present a poster. I spoke with a few of the Chambliss competition participants at the 228th AAS meeting in San Diego about their research:

The Emission Signatures of Tidal Disruption Events in the UV and Optical (Valentina Hallefors et al.)

Valentina Hallefors, from Santa Barbara City College, studies tidal disruption events (or TDEs). TDEs occur when a star passes too close to a nearby supermassive black hole and is partially torn apart by the black hole’s immense tidal forces. Hallefors studies these mysterious events in the UV, hoping to constrain basic properties such as the luminosities and temperatures of the emission. Very little is known about TDEs, and Hallefors’ work contributes to the overwhelming need for multiwavelength observations of these fascinating events.

 IFU Spectroscopy of 32 SweetSpot Supernova Host Galaxies (Kara Ann Ponder et al.)

Kara Ann Ponder from the University of Pittsburgh is using the near infrared to study Type Ia supernovae (famous for being cosmological standard candles). Using a specialized spectrograph, she is able to take spectra at different locations of a supernova’s host galaxy to better understand the ages and properties of stars surrounding the supernova progenitor.

Applying Gaussian mixture models to the Na-O plane to separate multiple populations in globular clusters (Owen M. Boberg et al.)

Owen M. Boberg from Indiana University is applying new statistical tricks to classify old stars. He uses Gaussian mixture models to determine which stars in a given globular cluster are first or second generation stars based on their metal content. Having better constraints on these classifications can help astronomers to untangle the history of the cluster and how this history is tied to a cluster’s physical properties.

Determining the Location of the Water Snowline in an Externally-Photoevaporated Solar Nebula (Anusha Kalyaan et al.)

Inspired by recent observations from the Atacama Large Millimeter Array, Anusha Kalyaan from Arizona State University studies the effects of a nearby massive star on a protoplanetary disk (the birthplace of planets). Specifically, she argues that a second star can shift the snowline farther away from the primary star and dehydrate the disk by removing water through evaporation. Her work highlights the complexity of planetary formation from even the earliest stages.

by Ashley Villar at June 20, 2016 07:19 AM

June 19, 2016

Lubos Motl - string vacua and pheno

Ambulance chasing is a justifiable strategy to search for the truth
As Ben Allanach, a self-described occasional ambulance chaser, described in his 2014 TRF guest blog, ambulance chasers were originally lawyers with fast cars who were (or are) trying to catch an ambulance (or visit a disaster site) because the sick and injured people in them are potential clients who may have a pretty good reason to sue someone and win the lawsuit. For certain reasons, this practice is illegal in the U.S. and Australia.

Analogously, in particle physics, ambulance chasers are people who write many papers about a topic that is hot, especially one ignited by an excess in the experimental data. This activity is thankfully legal.

The phrase "ambulance chasing" is often used pejoratively. It's partly because the "ambulance chasers" may justifiably look a bit immoral and egotistically ambitious. However, most of the time, it is because the accusers are jealous and lazy losers. Needless to say, it often turns out that there are no patients capable of suing in the ambulance which the critics of ambulance chasing view as a vindication. However, this vindication is not a rule.

The probability to find clients is higher in the ambulances. It's similar to the reason why it's a better investment of money to make Arabs strip at the airport than to ask old white grandmothers to do the same – whether or not some politically correct ideologues want to deny this obvious point.

Is it sensible that we see examples of ambulance chasing such as the 400 or so papers about the \(750\GeV\) cernette diphoton resonance?

It just happens that in recent 2 days, there were two places in the physics blogosphere that discussed a similar topic:
An exchange between 4gravitons and Giotis

Game of Thrones: 750 GeV edition (Resonaances)
It seems rather clear that much like your humble correspondent, the first page is much more sympathetic to the ambulance chasing episodes than the latter one.




My conservative background and that of a formal theorist make me a natural opponent of ambulance chasers. If I oversimplify this viewpoint, we're solving primarily long-term tasks and it's a symptom of the absence of anchoring when people chase the ambulances. For example, quantum field theory and string theory will almost certainly be with us regardless of some minor episodes and discoveries, and that's why we need to understand them better and shouldn't pay too much attention to some "probably" short-lived fads.




However, the more experiment-oriented parts of particle physics – and even the theoretical ones – are sometimes much more dependent on an exciting breakthrough. Some developments may look like short-lived fads at the beginning but they actually turn out to be important in the long run. A subfield suddenly grows, people write lots of papers about it and they may even switch their fields a little bit. Is that wrong? Is it a pathology?

I don't think so.

As Giotis and Tetragraviton agreed in their discussion, the size of the subfields of string theory (or any other portion of physics research) changes with time. The subfields that recently experienced a perceived "breakthrough" grow bigger. Is that immoral?

Not at all.

It simply means that people appreciate that some methods or ideas or questions have been successful or they have demonstrated that they could be a route to learn lots of new truth quickly. And because scientists want to learn as much of the truth as possible, they naturally tend to pick the methods that make this process more efficient. It's common sense. There seem to low-hanging fruits around the breakthroughs and people try to pick them. As they're being picked, the perceived density of the fruits may go down and the "fad" may fade away.

So string phenomenology was producing a huge amount of papers for a few years after some amazing progress took place in that field. Similarly, the AdS/CFT correspondence opened a whole new industry of papers – the papers that study quantum field theories using "holographic" methods. AdS/CFT came from Maldacena's research of formal string theory and the black hole information puzzle. These two interrelated subfields of string theory have actually produced several other subindustries, although smaller than the holographic ones. The word "subindustry" is a more appropriate term than a "minirevolution" for something that the cynics call a "fad", especially when this "fad" continues to grow for 20 years or so.

The ability of formal string theory to produce similar breakthroughs is arguably still being underestimated.

At any rate, a part of the dynamics of the changing relative importance of subfields of fundamental physics – or string theory – is certainly justifiable. I don't want to claim that people study things exactly as much as they should – I have often made it clear that I believe that such deviations from the optimum exist, most recently in the previous short paragraph ;-) – but most critics generally misunderstand how sensible it is for people to be excited about different things at different times. Many critics seemingly believe that they are able to determine how much effort should be spent with every thing (such as A, B, C) even though they don't understand anything about A, B, C. But that's nonsense. You just can't understand how big percentages of time physicists spend with various ideas if you don't know what all of these ideas are.

At Resonaances, Adam Falkowski is basically making fun out of the people who have written many papers about the \(750\GeV\) diphoton resonance. With this seemingly hostile attitude to resonances, you may think it's ironic for the blog to be called Resonaances. But maybe the word means Resona-hahaha-ances, i.e. a blog trying to mock resonances.

The top contestants have a similar output as Alessandro Strumia – some 7 papers on the topic and 500 citations. A paper with 10 authors from December 2015 has over 300 citations now. If they had to divide the citations among the co-authors, it wouldn't be too many per co-author.

Clearly, physicists like Strumia are partially engaging in an activity that could be classified as a sport. But is that wrong? I don't think so. Physics is the process of learning the truth. We often imagine theoretical physicists as monks who despise things like sports and by my fundamental instincts, I am close enough to it. But the "learning of the truth" isn't in any strict way incompatible with "sports". They're just independent entities. And sports sometimes do help to find the truth in science.

I would agree with some "much softer and more careful" criticisms of ambulance chasing. For example, in most such situations, the law of diminishing returns applies. The value of many papers usually grows sublinearly with their number which is why most "fads" usually fade away. The value of 7 papers about the resonance is probably smaller than 7 times the value of a single paper about the resonance. First, there is probably some overlap among the papers. Second, even if there is no overlap, many of the "added" papers are probably less interesting because the author knows which of the ideas are the most promising ones, and by writing many papers, he or she probably has to publish the less promising ones, too.

So this activity may inflate the "total number of citations" more quickly than the underlying value of the physical insights. At the end, the number of citations that people like Alessandro Strumia have accumulated may overstate their overall contributions. But they don't really hysterically fight for the opinion that it doesn't. They just do science – and a lot of it. It's up to others who need the number to quantify the contributions.

At the end, I still find it obvious that on average, it's better when a physicist writes 7 papers about a topic than if he writes 2. The results vary and 1 paper is often more valuable than 10,000 other papers. But if one sort of imagines that they're papers at the same level, 7 is better than 2. So it's not reasonable for Falkowski to mock Strumia et al. Needless to say, if the resonance goes away, Falkowski will feel vindicated. If it is confirmed, Falkowski will look sort of stupid.

But even if the resonance (or whatever it is) goes away, we can't be sure about it now. The only way to feel "sure" about it is to assume that no important new physics will ever be found. But if you really believe such a thing, you just shouldn't work in this portion of physics, Dr Falkowski, because by your assumption, your work is worthless.

It is absolutely healthy when the experimental deviations energize the research into some models. You know, a similar research into models takes place even in the absence of excesses. But by Bayes' theorem, the excesses increase the odds that certain models of new physics are right. You can calculate how big this increase is: it is a simple function of the formal \(p\)-value. If there is just a 0.01% formal probability that the cernette excess is a false positive, it is even justifiable to increase the activity on the models compatible with it by four orders of magnitude. In practice, a much smaller increase takes place. But some increase is undoubtedly justified.
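
To spell out this bookkeeping (a standard Bayesian estimate, not a calculation quoted from anyone): by Bayes' theorem, the posterior odds in favour of a model that predicts the excess are

\[
\frac{P(\text{model}\mid\text{excess})}{P(\text{no new physics}\mid\text{excess})}
= \frac{P(\text{excess}\mid\text{model})}{P(\text{excess}\mid\text{no new physics})}
\times \frac{P(\text{model})}{P(\text{no new physics})} .
\]

If the excess only has a formal probability of \(10^{-4}\) to arise as a fluctuation, i.e. \(P(\text{excess}\mid\text{no new physics})\approx 10^{-4}\), while a model predicting the resonance gives \(P(\text{excess}\mid\text{model})\) of order one, the likelihood ratio boosts the prior odds by roughly four orders of magnitude, which is the scaling referred to above.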

Physicists often say that they're doing the research purely out of their curiosity and passion for the truth. In practice, it's almost never the case. People have personal ambitions and they also want jobs, salaries, grants, and maybe also fame. But that's true in other occupations, too. I don't think it's right to demonize the athletes among particle physicists. They're pretty amazing.

In particle physics, you can have counterparts of ATP and WTA rankings in tennis. Strumia is probably below veterans such as John Ellis. At the end, physics isn't just like tennis so the most important and famous physicists may be – and often are – non-athletes such as physics incarnations of monks. They're rather essential. But the physics athletes do a nontrivial part of the progress, anyway. Attempts to erase one of these whole groups amount to cultural revolutions of Mao's type. You just shouldn't think about such plans. Everyone who is dreaming about a similar cultural revolution surely misunderstands something important about the processes that make science – and the human society or the economy – work.

Whether or not the diphoton resonance is going to be confirmed, it was utterly reasonable and should be appreciated that some people did a lot of work on models that could explain such a deviation. At least, they worked on an exercise that had great chances to be relevant in Nature. But even if it weren't relevant, some of the lessons of this research may be applied in a world without the cernette, too. The experimental deviation has also been a natural source of the excitement and energy for the phenomenologists because many of them clearly do believe in their hearts that this excess could be real.

Incidentally, we may learn whether it's real very soon. In 2016, more than 5 inverse femtobarns have already been recorded by each major LHC detector. So before the end of June, 2016 has already beaten all of 2015. The data needed to decide about the fate of the theory that the cernette exists with the cross section indicated in 2015 have almost certainly been collected. Someone may already know the answer but it hasn't gotten out yet. It may change within days.



Bonus
An example of an activity much less justified than ambulance chasing, from Resonaances
Jason Stanidge said...

It doesn't look good IMO that none of the physicists listed come from leading institutions such as Harvard, Cambridge UK etc. Ah well: when the bump fades as will be soon rumoured by Jester, at least there is the possibility of other bumps in the data to look forward to.

Anonymous said...

Hi Jason Stanidge
are you runouring that the bump is fading away?
Now, this is an example of "trabant chasing with the hope that the trabant will turn into an ambulance". Anonymous has no rational reason to think that Jason Stanidge knows anything about the most recent LHC data. He is trying to push Jason to admit that he is an insider who knows something. And Jason has some probability to reply in a way that suggests that he may know something. But everyone has a nonzero probability to reply in this way. This "signal" would be artificially constructed by the "experimenter", Anonymous, which makes it much less tangible than the actual diphoton excess that was produced by the unique 2015 LHC dataset.

By the way, the comment about the leading institutions is illogical, too. The diphoton resonance has surely been worked on by numerous researchers from Harvard and all other top places, too. Moreover, I find it somewhat bizarre not to count Strumia's affiliation, CERN theory group, among the top places in particle physics. The winners of the ambulance chasing contests may be from other places than the Ivy League but that doesn't show that there's something "not good" about the intense work on currently intriguing experimental signs. You simply can't define the best character of work as whatever is being done at Harvard.

by Luboš Motl (noreply@blogger.com) at June 19, 2016 06:37 AM

June 18, 2016

Lubos Motl - string vacua and pheno

Subdivisions of string theory are completely misunderstood by critics
Tetragraviton wrote an insightful blog post
Most of String Theory Is Not String Pheno
where he tries to clarify some brutal misconceptions believed by Backreaction – as well as most laymen who read similar "sources" – about "what various string theorists actually do". He points out that Hossenfelder pretends to be a hero fighting against a powerful community, the string theory community, but in reality, she is only waging jihad against a small minority of the string community that is actually less numerous than groups such as the "loop quantum gravity fans" and others.

I completely agree with his main point.



This is the key pie chart that Tetragraviton has created. String theory research has some vaguely defined parts. And because Hossenfelder considered "the research of less understood aspects of quantum field theory using string theory ideas" to be OK or a success, and because this branch is actually a majority of the research (e.g. according to the pie chart that categorizes talks at the Strings 2015 annual conference), Hossenfelder is actually trying to fight against string cosmology and string phenomenology only, something that is researched by just over a hundred people.

Tetragraviton believes and I do believe that these subfields of string theory are actually understudied Cinderellas. They should be much larger than they are!




The pie chart has some problems. Many research projects can't be easily categorized. The percentages are inaccurate. Also, I am not sure where I would count all research on things like topological string theory, monstrous moonshine, and black hole information puzzle in string theory. Most likely, all these things belong to the "formal strings" slice (well, a part of the information puzzle would belong to "holography") which makes it even more insane how small this slice is.

(Update: I overlooked Tetragraviton's special group "Quantum gravity" where the black hole information loss issues clearly belong.)




Because of Maldacena's revolution and other things, the slices "holography" and "QFT" have grown a lot in the recent 20 years or so. Maybe the relative size of all the other subfields was basically constant. People keep on doing all these things. But because of the implications of string theory for quantum field theory, many string theorists began to do things that could be classified as "QFT", after all. The border between string theory and QFT has gotten blurred.

Much of this "QFT" work has only been reported at the string annual conference because either
  1. the speakers used to be clear string theorists and still count themselves in this way, so their work was considered string theory because of the ad hominem arguments
  2. some elementary ideas for studying quantum field theories in these ways came from string theory at the very beginning or they can be seen to be morally stringy a posteriori
These are very problematic reasons to count some research as "string theory". It's clearly just a matter of classification, terminology, convention, and sociology of conferences. But I think that most of this research – in the largest slice of the pie – shouldn't be considered string theory. Also, I think that an overwhelming majority of the "amplitudes" slice where Tetragraviton belongs (the light orange thin slice at the top) shouldn't be counted as a part of string theory. If something has no one-dimensional objects (strings) or an exact relationship (e.g. duality) with a theory that has these objects, it's simply not string theory. And if someone hasn't mastered the introductory courses or textbooks of the subject and has never written a string theory paper as defined in the previous sentence, he is not a string theorist. In particular, I think it's right to say that Tetragraviton is not a string theorist.

Maybe we should correct Tetragraviton: the likes of Hossenfelder try to assault not only string cosmology and string phenomenology but also formal string theory (where I would have counted most of the research from the times when I was publishing and surely most of the research that I am doing now and not publishing). But even these groups combined incorporate less than 500 active and paid researchers in the world.

Tetragraviton points out that the last String Pheno had 130 participants while the Loop Quantum Gravity conference had 190 participants. He makes a point I have made many times in the past: that it's actually much more sensible to compare loop quantum gravity to a subfield of string theory and not string theory as a whole and they're indeed comparable in size.

It's like the comparisons of Cuba or North Korea to the U.S. These small communist countries (Cuba is struggling to become a softcore, Obama-style communist country recently but let's ignore these "details") may claim to be the flagships of something really big. So North Korea or Cuba may defeat and mock the American imperialistic dwarfs, they assure their citizens in their propaganda outlets. However, a much better "counterpart" of Cuba is something like Florida and, you know, even Florida is much more important and much wealthier than Cuba these days (Florida's GDP is 11 times greater than Cuba's $80 billion GDP). My point is that the boasting by folks in loop quantum gravity that they're "on par" with all of string theory is as silly as similar boasting by the leftover communist countries.

At the end, the crackpot community doing loop quantum gravity is actually more numerous than the set of string phenomenologists – the actual state-of-the-art experts who construct the most accurate and most complete theories or models of Nature. It's sort of scary – because what they have produced is clearly vastly less valuable and even vastly less extensive than the work by string phenomenologists (the gap is at least similar to the Florida/Cuba gap) – but it can't be surprising. The reason is simple. A majority of the people who can complete a college with some exact sciences in it has enough brainpower to do something like loop quantum gravity. But only hundreds or at most thousands of people in the world have enough brainpower to become string phenomenologists. The pool for loop quantum gravity folks is some 1,000 times larger than the pool for string phenomenology.

What's important is that the string phenomenologists are vastly more intelligent, vastly more caring about the evidence and all the details, results, calculations, and vastly more likely to learn whole new subfields of mathematics if they need them. They have also found many more real results. They are also surely more well-paid in average, by dozens of percent. But the income gap is dramatically lower than what it should be according to the meritocratic criteria.

I think it's unfortunate that there are only at most a "few hundred" people doing string phenomenology for a living in the world. It's equally unfortunate that the number of formal string theorists is just a little bit higher. If you estimate the size of these communities (including the stringy "quantum gravity") as 500 people and each of them gets some $200,000 a year on average including all the overheads and taxes, you will see that mankind is really paying approximately just $100 million a year to extend its knowledge of physics at the true cutting edge. That's just a little bit more than one millionth of the world's GDP (which is approaching $100 trillion per year).

I just find it crazy for the mankind to pay just one millionth of its GDP to improve its understanding of the most fundamental laws of physics according to the most promising project to do so. And it's even more crazy that there are bullies and šitheads who would love to reduce this percentage further.

by Luboš Motl (noreply@blogger.com) at June 18, 2016 12:08 PM

Jester - Resonaances

Black hole dark matter
The idea that dark matter is made of primordial black holes is very old but has always been in the backwater of particle physics. The WIMP or asymmetric dark matter paradigms are preferred for several reasons such as calculability, observational opportunities, and a more direct connection to cherished theories beyond the Standard Model. But in the recent months there has been more interest, triggered in part by the LIGO observations of black hole binary mergers. In the first observed event, the mass of each of the black holes was estimated at around 30 solar masses. While such a system may well be of boring astrophysical origin, it is somewhat unexpected because typical black holes we come across in everyday life are either a bit smaller (around one solar mass) or much larger (supermassive black hole in the galactic center). On the other hand, if the dark matter halo were made of black holes, scattering processes would sometimes create short-lived binary systems. Assuming a significant fraction of dark matter in the universe is made of primordial black holes, this paper estimated that the rate of merger processes is in the right ballpark to explain the LIGO events.

Primordial black holes can form from large density fluctuations in the early universe. On the largest observable scales the universe is incredibly homogenous, as witnessed by the uniform temperature of the Cosmic Microwave Background over the entire sky. However on smaller scales the primordial inhomogeneities could be much larger without contradicting observations.  From the fundamental point of view, large density fluctuations may be generated by several distinct mechanism, for example during the final stages of inflation in the waterfall phase in the hybrid inflation scenario. While it is rather generic that this or similar process may seed black hole formation in the radiation-dominated era, severe fine-tuning is required to produce the right amount of black holes and ensure that the resulting universe resembles the one we know.

All in all, it's fair to say that the scenario where all or a significant fraction of dark matter is made of primordial black holes is not completely absurd. Moreover, one typically expects the masses to span a fairly narrow range. Could it be that the LIGO events are the first indirect detection of dark matter made of O(10)-solar-mass black holes? One problem with this scenario is that it is excluded, as can be seen in the plot. Black holes sloshing through the early dense universe accrete the surrounding matter and produce X-rays which could ionize atoms and disrupt the Cosmic Microwave Background. In the 10-100 solar mass range relevant for LIGO this effect currently gives the strongest constraint on primordial black holes: according to this paper they are allowed to constitute not more than 0.01% of the total dark matter abundance. In astrophysics, however, not only signals but also constraints should be taken with a grain of salt. In this particular case, the word in town is that the derivation contains a numerical error and that the corrected limit is 2 orders of magnitude less severe than what's shown in the plot. Moreover, this limit strongly depends on the model of accretion, and more favorable assumptions may buy another order of magnitude or two. All in all, the possibility of dark matter made of primordial black holes in the 10-100 solar mass range should not be completely discarded yet. Another possibility is that black holes make only a small fraction of dark matter, but the merger rate is faster, closer to the estimate of this paper.

Assuming this is the true scenario, how will we know? Direct detection of black holes is discouraged, while the usual cosmic ray signals are absent. Instead, in most of the mass range, the best probes of primordial black holes are various lensing observations. For LIGO black holes, progress may be made via observations of fast radio bursts. These are strong radio signals of (probably) extragalactic origin and millisecond duration. The radio signal passing near a O(10)-solar-mass black hole could be strongly lensed, leading to repeated signals detected on Earth with an observable time delay. In the near future we should observe hundreds of such repeated bursts, or obtain new strong constraints on primordial black holes in the interesting mass ballpark. Gravitational wave astronomy may offer another way.  When more statistics is accumulated, we will be able to say something about the spatial distributions of the merger events. Primordial black holes should be distributed like dark matter halos, whereas astrophysical black holes should be correlated with luminous galaxies. Also, the typical eccentricity of the astrophysical black hole binaries should be different.  With some luck, the primordial black hole dark matter scenario may be vindicated or robustly excluded  in the near future.

See also these slides for more details. 

by Jester (noreply@blogger.com) at June 18, 2016 10:06 AM

June 17, 2016

Jaques Distler - Musings

Coriolis

I really like the science fiction TV series The Expanse. In addition to a good plot and a convincing vision of human society two centuries hence, it depicts, as Phil Plait observes, a lot of good science in a matter-of-fact, almost off-hand fashion. But one scene (really, just a few dialogue-free seconds in a longer scene) has been bothering me. In it, Miller, the hard-boiled detective living on Ceres, pours himself a drink. And we see — as the whiskey slowly pours from the bottle into the glass — that the artificial gravity at the lower levels (where the poor people live) is significantly weaker than near the surface (where the rich live) and that there’s a significant Coriolis effect. Unfortunately, the effect depicted is 3 orders-of-magnitude too big.

Pouring a drink on Ceres. Significant Coriolis deflection is apparent.

To explain, six million residents inhabit the interior of the asteroid, which has been spun up to provide an artificial gravity. Ceres has a radius $R_C = 4.73\times 10^5$ m and a surface gravity $g_C = 0.27\,\text{m}/\text{s}^2$. The rotational period is supposed to be 40 minutes ($\omega\sim 2.6\times 10^{-3}\,/\text{s}$). Near the surface, this yields $\omega^2 R_C(1-\epsilon^2)\equiv \omega^2 R_C - g_C \sim 0.3$ g. On the innermost level, $R=\tfrac{1}{3} R_C$, and the effective artificial gravity is only 0.1 g.

Ceres Station, dug into the interior of the asteroid.

So how big is the Coriolis effect in this scenario?

The equations1 to be solved are

(1)
$$
\begin{split}
\frac{d^2 x}{d t^2} &= \omega^2(1-\epsilon^2)\, x - 2 \omega \frac{d y}{d t}\\
\frac{d^2 y}{d t^2} &= \omega^2(1-\epsilon^2)\,(y - R) + 2 \omega \frac{d x}{d t}
\end{split}
$$

with initial conditions $x(0)=\dot{x}(0)=y(0)=\dot{y}(0)=0$. The exact solution is elementary but, for $\omega t\ll 1$, i.e. for times much shorter than the rotational period, we can approximate

(2)
$$
\begin{split}
x(t) &= \frac{1}{3}(1-\epsilon^2)\, R\, (\omega t)^3 + O\bigl((\omega t)^5\bigr),\\
y(t) &= -\frac{1}{2}(1-\epsilon^2)\, R\, (\omega t)^2 + O\bigl((\omega t)^4\bigr)
\end{split}
$$

From (2), if the whiskey falls a distance $h\ll R$, it undergoes a lateral displacement

(3)
$$
\Delta x = \tfrac{2}{3}\, h\, {\left(\frac{2h}{(1-\epsilon^2)R}\right)}^{1/2}
$$

For $h=16$ cm and $R=\tfrac{1}{3}R_C$, this is $\frac{\Delta x}{h}= 10^{-3}$, which is 3 orders of magnitude smaller than depicted in the screenshot above2.
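For readers who want to check the numbers, here is a minimal Python sketch that simply re-evaluates the quantities above (the effective-gravity factor $1-\epsilon^2$ and the deflection formula (3)); the 40-minute period, the Ceres parameters and the 16 cm fall height are the ones quoted in the post.

import math

# Parameters quoted in the post
R_C = 4.73e5                      # Ceres radius in metres
g_C = 0.27                        # Ceres surface gravity in m/s^2
omega = 2 * math.pi / (40 * 60)   # spin rate for a 40-minute period, ~2.6e-3 /s

R = R_C / 3                       # innermost level, one third of the radius
one_minus_eps2 = 1 - g_C / (omega**2 * R_C)   # from omega^2 R_C (1-eps^2) = omega^2 R_C - g_C

g_earth = 9.81
print("effective gravity at the surface:", omega**2 * R_C * one_minus_eps2 / g_earth, "g")   # ~0.3 g
print("effective gravity, innermost level:", omega**2 * R * one_minus_eps2 / g_earth, "g")   # ~0.1 g

# Lateral (Coriolis) displacement of the falling whiskey, formula (3)
h = 0.16                          # fall height of 16 cm
delta_x = (2.0 / 3.0) * h * math.sqrt(2 * h / (one_minus_eps2 * R))
print("delta_x / h =", delta_x / h)   # ~1e-3, i.e. a fraction of a millimetre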

So, while I love the idea of the Coriolis effect appearing — however tangentially — in a TV drama, this really wasn’t the place for it.


1 Here, I’m approximating Ceres to be a sphere of uniform density. That’s not really correct, but since the contribution of Ceres’ intrinsic gravity to (3) is only a 5% effect, the corrections from non-uniform density are negligible.

2 We could complain about other things: like that the slope should be monotonic (very much unlike what’s depicted). But that seems a minor quibble, compared to the effect being a thousand times too large.

by distler (distler@golem.ph.utexas.edu) at June 17, 2016 11:10 PM

Quantum Diaries

Enough data to explore the unknown

The Large Hadron Collider (LHC) at CERN has already delivered more high energy data than it had in 2015. To put this in numbers, the LHC has produced 4.8 fb-1, compared to 4.2 fb-1 last year, where fb-1 represents one inverse femtobarn, the unit used to evaluate the data sample size. This was achieved in just one and a half months, compared to five months of operation last year.
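As a quick illustration of what an inverse femtobarn buys you, the sketch below converts an integrated luminosity into an expected number of produced events for a process of a given cross section; the 50 pb cross section is only a round, illustrative number (roughly the Higgs production cross section at 13 TeV), not a value quoted in this article.

# Expected number of events: N = cross section (sigma) x integrated luminosity (L).
# Note the units: 1 fb^-1 = 1000 pb^-1, so a cross section in pb times a
# luminosity in fb^-1 needs a factor of 1000.

sigma_pb = 50.0   # illustrative cross section in picobarns
lumi_fb = 4.8     # integrated luminosity delivered so far in 2016, in fb^-1

n_events = sigma_pb * 1000.0 * lumi_fb   # pb x pb^-1 -> a dimensionless count
print(f"Expected events produced: about {n_events:,.0f}")
# With the projected 20-30 fb^-1 by November, this number grows by another factor of 4-6.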

With these data in hand, and the 20-30 fb-1 projected by November, both the ATLAS and CMS experiments can now explore new territories and, among other things, cross-check the intriguing events they reported at the end of 2015. If this particular effect is confirmed, it would reveal the presence of a new particle with a mass of 750 GeV, six times the mass of the Higgs boson. Unfortunately, there was not enough data in 2015 to get a clear answer. The LHC had a slow restart last year following two years of major improvements to raise its energy reach. But if the current performance continues, the discovery potential will increase tremendously. All this to say that everyone is keeping their fingers crossed.

If any new particle were found, it would open the doors to bright new horizons in particle physics. Unlike the discovery of the Higgs boson in 2012, if the LHC experiments discover an anomaly or a new particle, it would bring a new understanding of the basic constituents of matter and how they interact. The Higgs boson was the last missing piece of the current theoretical model, called the Standard Model, and this model can no longer accommodate new particles. It has been known for decades that this model is flawed, but so far theorists have been unable to predict which theory should replace it, and experimentalists have failed to find the slightest concrete sign of a broader theory. We need new experimental evidence to move forward.

Although the new data is already being reconstructed and calibrated, it will remain “blinded” until a few days prior to August 3, the opening date of the International Conference on High Energy Physics. This means that until then, the region where this new particle could be remains masked to prevent biasing the data reconstruction process. The same selection criteria that were used for last year’s data will then be applied to the new data. If a similar excess is still observed at 750 GeV in the 2016 data, the presence of a new particle will be beyond doubt.

Even if this particular excess turns out to be just a statistical fluctuation, the bane of physicists’ existence, there will still be enough data to explore a wealth of possibilities. Meanwhile, you can follow the LHC activities live or watch CMS and ATLAS data samples grow. I will not be available to report on the news from the conference in August due to hiking duties, but if anything new is announced, even I expect to hear its echo reverberating in the Alps.

Pauline Gagnon

To find out more about particle physics, check out my book « Who Cares about Particle Physics: making sense of the Higgs boson, the Large Hadron Collider and CERN », which can already be ordered from Oxford University Press. In bookstores after 21 July. Easy to read: I understood everything!


The total amount of data delivered in 2016 at an energy of 13 TeV to the experiments by the LHC (blue graph) and recorded by CMS (yellow graph) as of 17 June. One fb-1 of data is equivalent to 1000 pb-1.

by Pauline Gagnon at June 17, 2016 01:13 PM

Quantum Diaries

Enough data to explore the unknown

Since April, CERN's Large Hadron Collider (LHC) has already produced more high-energy data than in all of 2015. To put numbers on it, the LHC has delivered 4.8 fb-1 in 2016, compared with 4.2 fb-1 last year. The symbol fb-1 stands for one inverse femtobarn, the unit used to evaluate the size of data samples. All this in barely a month and a half, instead of the five months needed in 2015.

With these data in hand and the 20-30 fb-1 projected by November, the ATLAS and CMS experiments can already push back the limits of the known and, among other things, check whether the intriguing events reported at the end of 2015 are still observed. If this effect were confirmed, it would reveal the presence of a new particle with a mass of 750 GeV, six times heavier than the Higgs boson. Unfortunately, in 2015 there were not enough data to get a clear answer. After two years of major work to increase its energy reach, the LHC resumed operations last year, but at a slow pace. If its current performance holds, the chances of making new discoveries will be multiplied. So everyone is keeping their fingers crossed.

Any new particle would open the door to new horizons in particle physics. Unlike the discovery of the Higgs boson in 2012, if the LHC experiments reveal an anomaly or the existence of a new particle, it would change our understanding of the basic constituents of matter and the forces that govern them. The Higgs boson was the missing piece of the Standard Model, the current theoretical model, and this model cannot accommodate any new particles. Yet it has been known for decades that it is limited, although to this day theorists have been unable to predict which theory should replace it, and experimentalists have failed to find the slightest sign of such a broader theory. New experimental evidence is therefore absolutely necessary to move forward.

Although the new data are already being reconstructed and calibrated, they will remain “blinded” until a few days before August 3, the opening date of the main physics conference of the summer. Until then, the region where the new particle could be is masked so as not to bias the data reconstruction process. At the last minute, the same selection criteria used last year will be applied to the new data. If these events are still observed at 750 GeV in the 2016 data, the presence of a new particle will no longer be in any doubt.

But even if this turns out to be just a statistical fluctuation, which by its nature happens often in physics, the amount of data accumulated will make it possible to explore a wealth of other possibilities. In the meantime, you can follow the LHC activities live or watch the CMS and ATLAS data samples grow. I will unfortunately not be able to report on what is presented at the conference in August, mountain hiking obliges, but if any discovery is announced, even I expect to hear its echo reverberating in the Alps.

Pauline Gagnon

To find out more about particle physics, don't miss my book « Qu'est-ce que le boson de Higgs mange en hiver et autres détails essentiels », available in bookstores in Quebec and in Europe, as well as from Éditions MultiMondes. Easy to read: I understood everything!


Cumulative graph showing the amount of data produced at 13 TeV in 2016 by the LHC (in blue) and collected by the CMS experiment (in yellow) as of 17 June.

by Pauline Gagnon at June 17, 2016 01:05 PM

June 16, 2016

CERN Bulletin

CERN Bulletin Issue No. 24-25/2016
Link to e-Bulletin Issue No. 24-25/2016 | Link to all articles in this issue.

June 16, 2016 03:03 PM

Marco Frasca - The Gauge Connection

Higgs or not Higgs, that is the question


LHCP2016 is still under way, with further analyses of the 2015 data by people at CERN. We have all seen the history unfolding since the epochal event of 4 July 2012, when the great discovery was announced. Since then, Tom Kibble has also passed away. What remains is our need for a deep understanding of the Higgs sector of the Standard Model. Quite recently, the LHC restarted operations at the highest achievable energy, and data are being gathered and analysed in view of the summer conferences.

The scalar particle observed at CERN has a mass of about 125 GeV. Data gathered in 2015 seem to indicate a further state at 750 GeV, but this is yet to be confirmed. In any case, both ATLAS and CMS see this bump in the \gamma\gamma data, and this seems to echo the story of the discovery of the Higgs particle. But we do not yet have a full understanding of the Higgs sector. The reason is that the data gathered in Run I were not enough to shrink the error bars to values small enough to decide whether the Standard Model wins or not. Besides, as hinted by Run II, further excitations seem to pop up. So several theoretical proposals for the Higgs sector still stand and could even be confirmed as early as August this year.

Indeed, there is already great news in the data presented at LHCP2016. As I pointed out here, there is a curious behaviour in the strengths of the signals of the Higgs decays to WW and ZZ, and some tension, even if small, appeared between the ATLAS and CMS results. ATLAS seemed to see more events than CMS, moving these contributions well beyond the unit value but, as CMS had them somewhat below, the average was the expected unity, in agreement with the Standard Model. The strength of the signals is essential to understand whether the propagator of the Higgs field is the usual free-particle one, or whether it carries some factor reducing it significantly, with the contributions from higher states summing up to unity. In the latter case, the observed state at 125 GeV would just be the ground state of a tower of particles, the heavier ones being its excited states. As I showed recently, this is not physics beyond the Standard Model; rather, it is obtained by solving exactly the quantum equations of motion of the Higgs sector (see here), treating the other fields interacting with the Higgs field as just a perturbation.

So, let us recap the situation for the strengths of the signals of the WW and ZZ decays of the Higgs particle. At LHCP2015 the data were given in the following slide

Signal strengths at LHCP2015

From the table one can see that the signal strengths for the WW and ZZ decays in ATLAS are somewhat above unity, while in CMS they are practically unity for ZZ but, more interestingly, 0.85 for WW. We also know that far more data have been gathered for the WW decay than for the ZZ decay, and the error bars are large enough that this is not a concern here. The value 0.85 is really in agreement with the already cited exact computations for the Higgs sector but, within the errors, also in overall agreement with the Standard Model. This seems to point toward an overestimated number of events in ATLAS and a somewhat reduced number of events in CMS, at least for the WW decay.

At LHCP2016 new data have been presented by the two collaborations, at least for the ZZ decay, and the results are striking. For the scenario provided by the exact solution of the Higgs sector to agree with the data, the lower values should be confirmed in Run II and those from ATLAS should come down significantly. This is indeed what is going on! Here is the corresponding slide

CMS data LHCP2016

This result is striking in itself, as it shows a tendency toward a decreasing value where previously it was around unity. Now it is aligned with the value seen at CMS for the WW decay! The value seen is again in agreement with the one given by the exact solution of the Higgs sector. And ATLAS? This is the most shocking result: they see a significantly reduced number of events, and the signal strength they obtain is now aligned with that of CMS (see Strandberg’s talk at page 11).

What should one conclude from this? If the state at 750 GeV is confirmed, then, since the spectrum given by the exact solution of the Higgs sector is an integer multiplied by a mass, this state would sit at n=6. Together with the production strengths, if further data confirm them, the proper scenario for the breaking of the electroweak symmetry would be exactly the one described by the exact solution. Of course, this might seem obviously true, but an experimental confirmation is essential for a lot of reasons, not least the form of the Higgs potential: if the numbers are these, the potential postulated in the sixties would be the correct one. Another important reason is that the coupling to other matter does not change the spectrum of the theory in a significant way.
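To spell out the arithmetic behind the n=6 identification (under the assumption, stated above, that the observed 125 GeV state is the lowest rung of a tower whose masses are integer multiples of a base mass m_1):

m_n = n\, m_1, \qquad m_1 \simeq 125\ \text{GeV} \quad\Rightarrow\quad m_6 \simeq 6 \times 125\ \text{GeV} = 750\ \text{GeV}.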

So, to answer the question in the title, we only need to wait a few weeks. Then the summer conferences will start and, paraphrasing Coleman: God knows, I know, and by the end of the summer we all know.

Marco Frasca (2015). A theorem on the Higgs sector of the Standard Model. Eur. Phys. J. Plus (2016) 131: 199. arXiv: 1504.02299v3


Filed under: Particle Physics, Physics Tagged: ATLAS, CERN, CMS, Higgs decay, Higgs particle, LHC, Standard Model

by mfrasca at June 16, 2016 09:49 AM

June 15, 2016

Clifford V. Johnson - Asymptotia

The Red Shoes…

Well, this conversation (for the book) takes place in a (famous) railway station, so it would be neglectful of me to not have people scurrying around and so forth. I can't do too many of these... takes a long time to draw all that detail, then put in shadows, then paint, etc. Drawing directly on screen saves time (cutting out scanning, adjusting the scan, etc), but still...

This is a screen shot (literally, sort of - I just pointed a camera at it) of a detailed large panel in progress. I got bored doing the [...] Click to continue reading this post

The post The Red Shoes… appeared first on Asymptotia.

by Clifford at June 15, 2016 11:49 PM

Symmetrybreaking - Fermilab/SLAC

Second gravitational wave detection announced

For a second time, scientists from the LIGO and Virgo collaborations saw gravitational waves from the merger of two black holes.

Scientists from the LIGO and Virgo collaborations announced today the observation of gravitational waves from a set of merging black holes.

This follows their previous announcement, just four months ago, of the first ever detection of gravitational waves, also from a set of merging black holes.

The detection of gravitational waves confirmed a major prediction of Albert Einstein’s 1915 general theory of relativity. Einstein posited that every object with mass exerts a gravitational pull on everything around it. When a massive object moves, its pull changes, and that change is communicated in the form of gravitational waves.

Gravity is by far the weakest of the known forces, but if an object is massive enough and accelerates quickly enough, it creates gravitational waves powerful enough to be observed experimentally. LIGO, or Laser Interferometer Gravitational-wave Observatory, caught the two sets of gravitational waves using lasers and mirrors.

LIGO consists of two huge interferometers in Livingston, Louisiana, and Hanford, Washington. In an interferometer, a laser beam is split and sent down a pair of perpendicular arms. At the end of each arm, the split beams bounce off of mirrors and return to recombine in the center. If a gravitational wave passes through the laser beams as they travel, it stretches space-time in one direction and compresses it in another, creating a mismatch between the two.

Scientists on the Virgo collaboration have been working with LIGO scientists to analyze their data.

With this second observation, “we are now a real observatory,” said Gabriela Gonzalez, LIGO spokesperson and professor of physics and astronomy at Louisiana State University, in a press conference at the annual meeting of the American Astronomical Society.

The latest discovery was accepted for publication in the journal Physical Review Letters.

On Christmas evening in 2015, a signal that had traveled about 1.4 billion light years reached the twin LIGO detectors. The distant merging of two black holes caused a slight shift in the fabric of space-time, equivalent to changing the distance between the Earth and the sun by a fraction of an atomic diameter.

The black holes were 14 and eight times as massive as the sun, and they merged into a single black hole weighing 21 solar masses. That might sound like a lot, but these were relative flyweights compared to the black holes responsible for the original discovery, which weighed 36 and 29 solar masses.

“It is very significant that these black holes were much less massive than those observed in the first detection,” Gonzalez said in a press release. “Because of their lighter masses compared to the first detection, they spent more time—about one second—in the sensitive band of the detectors.”

The LIGO detectors saw almost 30 of the last orbits of the black holes before they coalesced, Gonzalez said during the press conference.

LIGO’s next data-taking run will begin in the fall. The Virgo detector, located near Pisa, Italy, is expected to come online in early 2017. Additional gravitational wave detectors are in the works in Japan and India.

Additional detectors will make it possible not only to find evidence of gravitational waves, but also to triangulate their origins.

On its own, LIGO is “more of a microphone,” capturing the “chirps” from these events, Gonzalez said.

The next event scientists are hoping to “hear” is the merger of a pair of neutron stars, said Caltech’s David Reitze, executive director of the LIGO laboratory, at the press conference.

Whereas two black holes merging are not expected to release light, a pair of neutron stars in the process of collapsing into one another could produce a plethora of observable gamma rays, X-rays, infrared light and even neutrinos.

In the future, gravitational wave hunters hope to be able to alert astronomers to an event with enough time and precision to allow them to train their instruments on the area and see those signals.

by Kathryn Jepsen at June 15, 2016 07:54 PM

Matt Strassler - Of Particular Significance

LIGO detects a second merger of black holes

There’s additional news from LIGO (the Laser Interferometry Gravitational Observatory) about gravitational waves today. What was a giant discovery just a few months ago will soon become almost routine… but for now it is still very exciting…

LIGO got a Christmas (US) present: Dec 25th/26th 2015, two more black holes were detected coalescing 1.4 billion light years away — changing the length of LIGO’s arms by 300 parts in a trillion trillion, even less than the first merger observed in September. The black holes had 14 solar masses and 8 solar masses, and merged into a black hole with 21 solar masses, emitting 1 solar mass of energy in gravitational waves. In contrast to the September event, which was short and showed just a few orbits before the merger, in this event nearly 30 orbits over a full second are observed, making more information available to scientists about the black holes, the merger, and general relativity.  (Apparently one of the incoming black holes was spinning with at least 20% of the maximum possible rotation rate for a black hole.)
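To put that “300 parts in a trillion trillion” in perspective, here is a small Python sketch converting the quoted strain into an absolute arm-length change; the 4 km arm length is the actual size of each LIGO interferometer, while the proton-radius comparison is just an illustrative yardstick.

# Convert the quoted fractional length change (strain) into metres.
strain = 300e-24          # "300 parts in a trillion trillion"
arm_length = 4.0e3        # each LIGO arm is about 4 km long

delta_L = strain * arm_length
print(f"Arm length change: {delta_L:.1e} m")   # ~1.2e-18 m

proton_radius = 0.88e-15  # metres, for comparison only
print(f"That is roughly {delta_L / proton_radius:.4f} of a proton radius")
# i.e. about a thousandth of the size of a proton.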

The signal is not as “bright” as the first one, so it cannot be seen by eye if you just look at the data; to find it, some clever mathematical techniques are needed. But after signal processing the signal is very clear. (The signal-to-noise ratio is 13; it was 24 for the September detection.) The probability of such a clear signal occurring due to random noise corresponds to 5 standard deviations — officially a detection. The corresponding “chirp” is nowhere near as obvious, but there is a faint trace.

This gives two detections of black hole mergers over about 48 days of 2015 quality data. There’s also a third “candidate”, not so clear — signal-to-noise of just under 10. If it is really due to gravitational waves, it would be merging black holes again… midway in size between the September and December events… but it is borderline, and might just be a statistical fluke.

It is interesting that we already have two, maybe three, mergers of large black holes… and no mergers of neutron stars with black holes or with each other, which are harder to observe. It seems there really are a lot of big black holes in binary pairs out there in the universe. Incidentally, the question of whether they might form the dark matter of the universe has been raised; it’s still a long-shot idea, since there are arguments against it for black holes of this size, but seeing these merger rates one has to reconsider those arguments carefully and keep an open mind about the evidence.

Let’s remember also that advanced-LIGO is still not running at full capacity. When LIGO starts its next run, six months long starting in September, the improvements over last year’s run will probably give a 50% to 100% increase in the rate of observed mergers. In the longer term, a rate of one merger per week is possible.

Meanwhile, VIRGO in Italy will come on line soon too, early in 2017. Japan and India are getting into the game too over the coming years. More detectors will allow scientists to know where on the sky the merger took place, which then can allow normal telescopes to look for flashes of light (or other forms of electromagnetic radiation) that might occur simultaneously with the merger… as is expected for neutron star mergers but not widely expected for black hole mergers.  The era of gravitational wave astronomy is underway.


Filed under: Astronomy, Dark Matter, Gravitational Waves Tagged: black holes, Gravitational Waves, LIGO

by Matt Strassler at June 15, 2016 06:11 PM

Clifford V. Johnson - Asymptotia

Oops, they did it again…!

(Well, not really oops... It's deliberate.) LIGO has announced another gravitational wave detection from a black hole merger! This time the black holes were 14.2 and 7.5 times the mass of the sun, and merged to form a new black hole of mass 20.8 times the mass of our sun, releasing a burst of energy [...] Click to continue reading this post

The post Oops, they did it again…! appeared first on Asymptotia.

by Clifford at June 15, 2016 05:43 PM

Jacques Distler - Musings

Moonshine Paleontology

Back in the Stone Age, Bergman, Varadarajan and I wrote a paper about compactifications of the heterotic string down to two dimensions. The original motivation was to rebut some claims that such compactifications would be incompatible with holography.

To that end, we cooked up some examples of compactifications with no massless bosonic degrees of freedom, but lots of supersymmetry. Not making any attempt to be exhaustive, we wrote down one example with $(8,8)$ spacetime supersymmetry, and another class of examples with $(24,0)$ spacetime supersymmetry.

I never thought much more about the subject (the original motivation having long-since been buried by the sands of history), but recently these sorts of compactification have come back into fashion from the point of view of Umbral Moonshine and (thanks to Yuji Tachikawa for pointing it out to me) one recent paper makes extensive use of our (24,0) examples.

Unfortunately (as sort of a digression in their analysis), they say some things about these models which are not quite correct.

The worldsheet theory of these heterotic string compactifications is an asymmetric orbifold of a toroidal compactification. The latter is specified by an even self-dual Lorentzian lattice, $\Lambda$, of signature $(24,8)$. We took $\Lambda=\Lambda_{24}\oplus \Lambda_{E_8}$, where $\Lambda_{24}$ is a Niemeier lattice. For our $(8,8)$ model, we took $\Lambda_{24}$ to be the Leech lattice, and the orbifold action to be a $\mathbb{Z}_2$ acting only on the left-movers (yielding the famous Monster Module as the left-moving CFT).

Our $(24,0)$ models were obtained by having the $\mathbb{Z}_2$ act only on the right-movers, with a large set of choices for the left-moving CFT.

The resulting theory in 2D spacetime (unlike the $(8,8)$ case) is chiral and hence has a potential gravitational anomaly. The anomaly is cancelled by the dimensional reduction of the Green-Schwarz term, which – in 2D – has the particularly simple form: $k\int B$.

Paquette et al make the nice observation that this Green-Schwarz term is a tadpole for the (constant mode of the) $B$-field which, if nonzero, needs to be canceled by some number of space-filling fundamental strings. Unfortunately, they don’t state quite correctly what the strength of the tadpole is, nor how many space-filling strings are required to cancel it.

We can understand the correct answer by looking at the gravitational anomaly. We found that the spacetime theory has no massless bosonic degrees of freedom and $24 n_1$ massless Majorana-Weyl fermions, all of the same chirality, where $n_1$ is the number of $h=1$ primary fields in the aforementioned $c=24$ left-moving CFT. In the conventional normalization, that’s a contribution of $-12 n_1$ to the gravitational anomaly.

The light-cone gauge worldsheet theory of our heterotic string has $c_L=24$, $c_R=8+4=12$, and so a space-filling string contributes $c_L - c_R = +12$ to the gravitational anomaly. We conclude that we need precisely $n_1$ space-filling strings.
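In other words (writing $N$ for the number of space-filling strings, and just restating the counting above as a single cancellation condition), the total gravitational anomaly vanishes when

$$-12\, n_1 + 12\, N = 0 \quad\Longrightarrow\quad N = n_1 .$$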

Now, Paquette et al are particularly interested in the case where the left-moving CFT is the Monster Module, which is the unique case1 with $n_1=0$. So, in their case, there is no tadpole.

Admittedly, for our other examples, with $n_1\neq 0$, our story was incomplete. The 1-loop tadpole for the $B$-field requires the introduction of $n_1$ space-filling strings. But, while that does make the physics more interesting, it doesn’t affect our conclusion about Goheer et al.


1 This model is a $\mathbb{Z}_2$-orbifold of our $(8,8)$ model. The orbifolding kills 8 of the original supersymmetries, but 16 new ones arise in the twisted sector.

by distler (distler@golem.ph.utexas.edu) at June 15, 2016 03:32 AM

June 14, 2016

Matt Strassler - Of Particular Significance

Giving two free lectures 6/20,27 about gravitational waves

For those of you who live in or around Berkshire County, Massachusetts, or know people who do…

Starting next week I’ll be giving two free lectures about the LIGO experiment’s discovery of gravitational waves.  The lectures will be at 1:30 pm on Mondays June 20 and 27, at Berkshire Community College in Pittsfield, MA.  The first lecture will focus on why gravitational waves were expected by scientists, and the second will be on how gravitational waves were discovered, indirectly and then directly.  No math or science background will be assumed.  (These lectures will be similar in style to the ones I gave a couple of years ago concerning the Higgs boson discovery.)

Here’s a flyer with the details:  http://berkshireolli.org/ProfessorMattStrasslerOLLILecturesFlyer.pdf


Filed under: Astronomy, Gravitational Waves, LHC News, Public Outreach Tagged: astronomy, black holes, Gravitational Waves, PublicOutreach, PublicTalks

by Matt Strassler at June 14, 2016 09:20 PM

Lubos Motl - string vacua and pheno

One can't understand physics through sociology
Experimental news: Listen to a new report from LIGO tomorrow (on Wednesday) at 19:15 Prague Summer Time; a newly detected gravitational wave, GW151226, will be announced, with a 10 times higher frequency than GW150914. Also, in 2016, the LHC has recorded 4/fb, pretty much matching all of 2015, and some articles and the LHCnews Twitter account indicate that they could have something new about the \(750\GeV\) cernette soon, too. Results in \(Z\gamma\) and \(gg\) were null.
I have written numerous blog posts, e.g. this one in 2015, about this question but the question keeps on returning.

There exists a bunch of arrogant social scientists who believe or pretend to believe that they may reduce the wisdom about the world – including natural sciences – to their cheap ideological clichés about the society and discrimination and similar constructs. They think that when they observe how scientists dress or talk to each other, they may understand everything important about science, much like when they are observing dancing savages in the Pacific Ocean.

Famously, in 1996, Alan Sokal proved [PDF] that this postmodern filth belongs to a [beep] [beep] when he published a totally idiotic crackpot hoax article about quantum gravity that licked the rectums of these individuals, and that's why it was enthusiastically embraced by a would-be prestigious journal published by those hacks, Social Text, despite dozens of cute claims in the paper that the value of pi depends on the oppression of women and similar "gems".




Sokal's paper could have killed that kind of thinking – or, more precisely, the absence of thinking – but it didn't. Instead, numerous new people began to intervene into science in this way in the two decades that followed. Sabine Hossenfelder is one example, especially with the new text
String phenomenology of the somewhat different kind
but she's clearly far from the only one because she mentions three papers by social "scientists" who think that they may learn something interesting about string theory or its validity by repeating some sociological clichés.




The papers were written by
Weatherall+Gilbert, Ritson+Camilleri, Ritson
If you search for the titles of these 3 papers at Google Scholar, you may check that all of them have 0 citations at this point (after a year or so).

As you may remember, Weatherall was the weakest student in a Harvard course I taught. I cannot quote his paper with Margaret Gilbert because at the title page, it says
Please do not quote or paraphrase without one of the authors’ explicit permission
so I won't do it (although I am sure that they couldn't prevent me if I wanted). Instead, I just mention that the paper is 36 pages full of nonsensical views from the viewpoint of Lee Smolin who must look like quite some important scientist to a gullible reader.

Just imagine that. Why would someone be writing 36 pages of nonsense as seen from the viewpoint of Smolin? If physics were a human, Smolin would be a tiny piece of excrement attached to the appendix from the internal side. Clearly, Weatherall and Gilbert want to be an even tinier appendix attached to a tiny piece of šit. Why? What motivates people to degrade themselves in this fatal way?

The paper has no interesting content – except for saying that string theorists behave as typical scientists and a typical community and humans etc. It's probably Smolin who's the unusual one, they suggest. But Smolin is quite a typical piece of šit which is about 1 million times more widespread than the string theorists.

The papers co-authored by Ritson are even emptier. They say that some people think that string theory is great, some don't. Some people want to censor crackpots from the arXiv, some don't, and so on. Sophie must be a very clever girl to have noticed.

But let me return to the theme of the "normal scientific community" as painted by Weatherall and his co-author. They repeat millions of slogans about a "community" such as
As parties to a joint commitment, members of the string theory community are obligated to act as mouthpieces of their collective belief.
But this whole theme is absolute rubbish. String theory has nothing whatever to do with any community. String theory is a remarkable, completely impersonal mathematical structure that happens to be compatible with all the aspects of the laws of physics that are known as of today. It makes no sense to think that a "community" exists or has an impact because the number of string theorists is so tiny.

My example isn't the average one but it is not infinitely far from the average, either. I got access to state-of-the-art scientific journals etc. when I was 17 or so. By the time when I was a college sophomore, it became clear to me that string theory had to be right and I was learning it rather systematically, having acquired the Green-Schwarz-Witten textbook, among other things.

Sometime in 1995, and for years afterwards, I am pretty sure that I was the only person physically located on the Czech territory who could pass most of a basic string theory exam (plus an exam from one or two advanced, specialized topics). It means that in a circle with a radius of some 300 kilometers (some 60 hours of walk), there was just no one else. What the hell is this "community" of yours? String theory is pursued by a few thousand individuals who are pretty much exactly as rare on the surface of Earth as the density indicated by my example suggests.

Also, the validity of string theory has nothing whatever to do with any community. I am among those who know enough to be sure about string theory now but it has nothing to do with my "commitments". At an annual string conference, when someone refers to and praises my work, a bunch of aßholes starts to laugh in order to reduce the seriousness of the situation.

I am sure that if I had to evaluate what sort of people are the attendees, I would conclude that a very large percentage are almost exactly the same nasty left-wing jerks that one may see in most of the Western university departments. But this disagreement or animosity can't prevent me from seeing that if they write a correct or even important paper, it is correct or even important because these propositions boil down to the evidence and how it fits together, not to some communities.

It seems likely to me that most of the bright kids who are 17 today and who have a natural interest in theoretical physics must already be avoiding theoretical physics because what they see is that the value of the geniuses analogous to themselves is underrated by many orders of magnitude. It's not that bad in reality, but it unavoidably looks so in the media and on the Internet. Subpar pieces of šit such as Ms Hossenfelder, Mr Smolin, and Mr Weatherall talk about "communities" as if they were members of just another "community" and all "communities" were equal. The 17-year-old big shots see that it's very likely that they may happily earn millions of dollars if they do business; theoretical physics is not only existentially uncertain, but they also see that the very possibility of doing research in theoretical physics as science may be questioned by some increasingly widespread imbeciles.

Meanwhile, as the 17-year-old clever kids already know, the reality is obviously very different than what the low-brow sourballs obsessed with the sociology claim. Whether one imagines people as members of communities, what distinguishes real string theorists are their refined and sometimes priceless minds and the results of their work while the likes of Ms Hossenfelder, Mr Smolin, and Mr Weatherall are just piles of generic šit. This filth isn't capable of doing any interesting science so it tries to politicize absolutely everything instead. For example, this is the kind of stuff they write about the term "crackpot":
To me the notion of “crackpot” is an excellent example of an emergent feature – it’s a demarcation that the community creates during its operation. Any attempt to come up with a definition from first principles is hence doomed to fail.
What is the definition? A crackpot is, by definition, a pot that is cracked and it shouldn't be sold as a new product except that someone tries to do so. Now, does this definition apply to particular people? For example, does it apply to Lee Smolin, as journalist George Johnson has asked the members of the Santa Barbara Physics Department (plus KITP)? It's a question totally analogous to any other difficult question. The answer depends on the knowledge and opinions of the speaker, his subjective refinement (or generality) of the notion of a crackpot, and other things. Yes, most of the physicists in Santa Barbara have politely displayed their opinion that the answer is "yes, he is". But that doesn't mean that one can guarantee that everyone did so. It's not even known whether a poll would be able to pick a majority of signatures. And it just doesn't matter! What matters is that none of the people uses Smolin's work in their own work because Smolin hasn't done any usable work in his life.

At any rate, someone's being a crackpot is the result of a judgement that is as partially subjective as any other judgement about the people (or about other things). It is absolute nonsense that the term "crackpot" is something created by a community. I've never consulted any community when I used the term for any particular crackpots. After all, if I had done so, the community would probably prevent me from doing so.

Now, I am among those who like to use the word. But others use it, too. People in string theory and around string theory "largely" agree who is a crackpot and who isn't. But they don't agree precisely. The agreement boils down to the basically objective criteria. The part of the definition of a "crackpot" that may be objectified may be arguably seen through the "votes" in a community; but some uncertainty is unavoidable because no propositions about the people's characteristics may ever be "completely clearcut" and the term "crackpot" obviously partially depends on some messy psychology or social science.

Some people's methods and resulting conclusions about science are simply indefensible as proper science. Everyone who is not a complete lunatic realizes that science isn't everything that some random people (including laymen) claim to be science. Where the "demarcation line" is located is pretty much the same question as the question what the scientific evidence says about Nature and which propositions are defensible. It's a question that implicitly incorporates all scientific questions in a given discipline. If someone has no technical knowledge of theoretical physics, he self-evidently can't divide people to "crackpots" and "non-crackpots". Being a "crackpot" isn't about her or his outfit or the intonation or some easy-to-detect patterns in the speech – that could be noticed by every layman. To decide whether a person is a crackpot requires some technical expertise in the same discipline, whether you like it or not.

The answers to these questions have nothing whatever to do with "communities" because science is (at least ideally – and it's mostly true in real science as well, unless we talk about the disciplines that have become corrupt) an impartial and impersonal effort of individuals. Some people sometimes copy (or almost copy) opinions of other people. Indeed, that may happen even among physicists. What a surprise. But in every group of people who deserve to be called scientists, there will be a huge percentage (and hopefully an overwhelming majority) of people who arrived at their conclusions independently.

It's terribly troubling that individuals such as Ms Hossenfelder may survive for many years while convincing tons of people in the media that they're on par with the genuine top scientists such as string theorists even though they are demonstrably fakes without any content, piles of worthless trash. This outcome must be partly blamed on the insane egalitarian ideologies that have conquered much of the Western university world in recent decades. Every time you were saying "it's OK when a whore gets somewhere just because of her non-convex organs", you were crippling the (future) control of the actual scientists over the institutionalized process and you were working to repel true young big shots from science in the future.

A big part of the left-wingers, even among string theorists, are co-responsible for that and they must be denounced for that. But many of them have still done some valuable work in string theory, which is a totally independent thing. Scum like Smolin, Hossenfelder, and Weatherall may fail to be able to separate science from sociology or personal relations, but everyone who at least remotely deserves the label "scientist" doesn't have any difficulty whatsoever when he (or, much less frequently, she) separates these very different issues.

by Luboš Motl (noreply@blogger.com) at June 14, 2016 06:51 PM

Symmetrybreaking - Fermilab/SLAC

The neutrino turns 60

Project Poltergeist led to the discovery of the ghostly particle. Sixty years later, scientists are confronted with more neutrino mysteries than ever before.

In 1930, Wolfgang Pauli proposed the existence of a new tiny particle with no electric charge. The particle was hypothesized to be very light—or possibly have no mass at all—and hardly ever interact with matter. Enrico Fermi later named this mysterious particle the “neutrino” (or “little neutral one”).

Although neutrinos are extremely abundant, it took 26 years for scientists to confirm their existence. In the 60 years since the neutrino’s discovery, we’ve slowly learned about this intriguing particle. 

“At every turn, it seems to take a decade or two for scientists to come up with experiments to start to probe the next property of the neutrino,” says Keith Rielage, a neutrino researcher at the Department of Energy’s Los Alamos National Laboratory. “And once we do, we’re often left scratching our heads because the neutrino doesn’t act as we expect. So the neutrino has been an exciting particle from the start.”

We now know that there are actually three types, or “flavors,” of neutrinos: electron, muon and tau. We also know that neutrinos change, or “oscillate,” between the three types as they travel through space. Because neutrinos oscillate, we know they must have mass.

However, many questions about neutrinos remain, and the search for the answers involves scientists and experiments around the world.

The mystery of the missing energy

Pauli thought up the neutrino while trying to solve the problem of energy conservation in a particular reaction called beta decay. Beta decay is a way for an unstable atom to become more stable—for example, by transforming a neutron into a proton. In this process, an electron is emitted.

If the neutron transformed into only a proton and an electron, their energies would be well defined. However, experiments showed that the electron did not always emerge with a particular energy—instead, electrons showed a range of energies. To account for this range, Pauli hypothesized that an unknown neutral particle must be involved in beta decay.

“If there were another particle involved in the beta decay, all three particles would share the energy, but not always exactly the same way,” says Jennifer Raaf, a neutrino researcher at DOE’s Fermi National Accelerator Laboratory. “So sometimes you could get an electron with a high energy and sometimes you could get one with a low energy.”

In the early 1950s, Los Alamos physicist Frederick Reines and his colleague Clyde Cowan set out to detect this tiny, neutral, very weakly interacting particle. 

At the time, neutrinos were known as mysterious “ghost” particles that are all around us but mostly pass straight through matter and take away energy in beta decays. For this reason, Reines and Cowan’s search to detect the neutrino came to be known as “Project Poltergeist.”

“The name seemed logical because they were basically trying to exorcise a ghost,” Rielage says.

Catching the ghost particle

“The story of the discovery of the neutrino is an interesting one, and in some ways, one that could only happen at Los Alamos,” Rielage says.

It all started in the early 1950s. Working at Los Alamos, Reines had led several projects testing nuclear weapons in the Pacific, and he was interested in fundamental physics questions that could be explored as part of the tests. A nuclear explosion was thought to create an intense burst of antineutrinos, and Reines thought an experiment could be designed to detect some of them. Reines convinced Cowan, his colleague at Los Alamos, to work with him to design such an experiment.

Reines and Cowan’s first idea was to put a large liquid scintillator detector in a shaft next to an atmospheric nuclear explosion test site. But then they came up with a better idea—to put the detector next to a nuclear reactor. 

So in 1953, Reines and Cowan headed to the large fission reactor in Hanford, Washington with their 300-liter detector nicknamed “Herr Auge” (German for “Mr. Eye”).

Although Reines and Cowan did detect a small increase in neutrino-like signals when the reactor was on versus when it was off, the noise was overwhelming. They could not definitively conclude that the small signal was due to neutrinos. While the detector’s shielding succeeded in blocking the neutrons and gamma rays from the reactor, it could not stop the flood of cosmic rays raining down from space.

Over the next year, Reines and Cowan completely redesigned their detector into a stacked three-layer configuration that would allow them to clearly differentiate between a neutrino signal and the cosmic ray background. In late 1955, they hit the road again with their new 10-ton detector—this time to the powerful fission reactor at the Savannah River Plant in South Carolina. 

For more than five months, Reines and Cowan collected data and analyzed the results. In June 1956, they sent a telegram to Pauli. It said, “We are happy to inform you that we have definitively detected neutrinos.”

Major milestones in the history of neutrino research

Solving the next neutrino mystery

In the 1960s, a new mystery involving the neutrino began—this time in a gold mine in South Dakota.

Ray Davis, a nuclear chemist at the DOE’s Brookhaven National Laboratory, had designed an experiment to detect neutrinos produced in reactions in the sun, also known as solar neutrinos. It featured a large chlorine-based detector located a mile underground in the Homestake Mine, which provided shielding from cosmic rays. 

In 1968, the Davis experiment detected solar neutrinos for the first time, but the results were puzzling. Astrophysicist John Bahcall had calculated the expected flux of neutrinos from the sun—that is, the number of neutrinos that should be detected over a certain area in a certain amount of time. However, the experiment was only detecting about one-third the number of neutrinos predicted. This discrepancy came to be known as the “solar neutrino problem.”

At first, scientists thought there was a problem with Davis’ experiment or with the model of the sun, but no problems were found. Slowly, scientists began to suspect that it was actually an issue with the neutrinos. 

“Neutrinos always seem to surprise us,” Rielage says. “We think something is fairly straightforward, and it turns out not to be.”

Scientists theorized that neutrinos might oscillate, or change from one type to another, as they travel through space. Davis’ experiment was only sensitive to electron neutrinos, so if neutrinos oscillated and arrived at the Earth as a mixture of the three types, it would explain why the experiment was only detecting one-third of them.
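For readers who like formulas, the standard two-flavour vacuum oscillation expression gives a feel for how this works; the sketch below is a textbook illustration only (the real solar case involves three flavours and matter effects inside the Sun), and the mixing parameters are round, illustrative numbers rather than values quoted in this article.

import math

def survival_probability(L_km, E_GeV, sin2_2theta, dm2_eV2):
    """Two-flavour vacuum oscillation: probability that a neutrino keeps its original flavour."""
    return 1.0 - sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# Round illustrative numbers (not a fit to solar data):
sin2_2theta = 0.85   # a large mixing angle
dm2_eV2 = 7.5e-5     # mass-squared splitting in eV^2
L_km = 1.5e8         # Sun-Earth distance in km
E_GeV = 1e-3         # a 1 MeV solar neutrino

print(survival_probability(L_km, E_GeV, sin2_2theta, dm2_eV2))
# The oscillation phase is huge here, so in practice one averages over energies and
# distances, giving a survival probability of roughly 1 - sin2_2theta/2 in this simple
# two-flavour picture; the observed solar deficit (about one third surviving) also
# involves the third flavour and matter effects.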

In 1998, the Super-Kamiokande experiment in Japan first detected atmospheric neutrino oscillations. Then, in 2001, the Sudbury Neutrino Observatory in Canada announced the first evidence of solar neutrino oscillations, followed by conclusive evidence in 2002. After more than 30 years, scientists were able to confirm that neutrinos oscillate, thus solving the solar neutrino problem.

“The fact that neutrinos oscillate is interesting, but the critical thing is that it tells us that neutrinos must have mass,” says Gabriel Orebi Gann, a neutrino researcher at the University of California, Berkeley, and the DOE’s Lawrence Berkeley National Laboratory and a SNO collaborator. “This is huge because there was no expectation in the Standard Model that the neutrino would have mass.”

Mysteries beyond the Standard Model

The Standard Model—the theoretical model that describes elementary particles and their interactions—does not include a mechanism for neutrinos to have mass. The discovery of neutrino oscillation put a serious crack into an otherwise extremely accurate picture of the subatomic world.

“It’s important to poke at this picture and see which parts of it hold up to experimental testing and which parts still need additional information filled in,” Raaf says.

After 60 years of studying neutrinos, several mysteries remain that could provide windows into physics beyond the Standard Model.

Is the neutrino its own antiparticle? 

The neutrino is unique in that it has the potential to be its own antiparticle. “The only thing we know at the moment that distinguishes matter from antimatter is electric charge,” Orebi Gann says. “So for the neutrino, which has no electric charge, it’s sort of an obvious question – what is the difference between a neutrino and its antimatter partner?”

If the neutrino is not its own antiparticle, there must be something other than charge that makes antimatter different from matter. “We currently don’t know what that would be,” Orebi Gann says. “It would be what we call a new symmetry.”

Scientists are trying to determine if the neutrino is its own antiparticle by searching for neutrinoless double beta decay. These experiments look for events in which two neutrons decay into protons at the same time. The standard double beta decay would produce two electrons and two antineutrinos. However, if the neutrino is its own antiparticle, the two antineutrinos could annihilate, and only electrons would come out of the decay. 

Several upcoming experiments will look for neutrinoless double beta decay. These include the SNO+ experiment in Canada, the CUORE experiment at the Laboratori Nazionali del Gran Sasso in Italy, the EXO-200 experiment at the Waste Isolation Pilot Plant in New Mexico, and the MAJORANA experiment at the Sanford Underground Research Facility in the former Homestake mine in South Dakota (the same mine in which Davis conducted his famous solar neutrino experiment).

What is the order, or “hierarchy,” of the neutrino mass states?

We know that neutrinos have mass and that the three neutrino mass states differ slightly, but we do not know which is the heaviest and which is the lightest. Scientists are aiming to answer this question through experiments that study neutrinos as they oscillate over long distances.

For these experiments, a beam of neutrinos is created at an accelerator and sent through the Earth to far-away detectors. Such long-baseline experiments include Japan’s T2K experiment, Fermilab’s NOvA experiment and the planned Deep Underground Neutrino Experiment.

What is the absolute mass of neutrinos?

To try to measure the absolute mass of neutrinos, scientists are returning to the reaction that first signaled the existence of the neutrino—beta decay. The KATRIN experiment in Germany aims to directly measure the mass of the neutrino by studying tritium (an isotope of hydrogen) that decays through beta decay.

Are there more than three types of neutrinos?

Scientists have hypothesized another even more weakly interacting type of neutrino called the “sterile” neutrino. To look for evidence of sterile neutrinos, scientists are studying neutrinos as they travel over short distances. 

As part of the short baseline neutrino program at Fermilab, scientists will use three detectors to look for sterile neutrinos: the Short Baseline Neutrino Detector, MicroBooNE and ICARUS (a neutrino detector that previously operated at Gran Sasso). Gran Sasso will also host an upcoming experiment called SOX that will look for sterile neutrinos.

Do neutrinos violate “charge parity (CP) symmetry”?

Scientists are also using long-baseline experiments to search for something called CP violation. If equal amounts of matter and antimatter were created in the Big Bang, it all should have annihilated. Because the universe contains matter, something must have led to there being more matter than antimatter. If neutrinos violate CP symmetry, it could help explain why there is more matter.

“Not having all the answers about neutrinos is what makes it exciting,” Rielage says. “The problems that are left are challenging, but we often joke that if it were easy, someone would have already figured it out by now. But that’s what I enjoy about it—we have to really think outside the box in our search for the answers.”

by Amelia Williamson Smith at June 14, 2016 01:00 PM

CERN Bulletin

Director general presentation to personnel

Dear Colleagues,

Many important discussions are scheduled for the upcoming Council Week (13-17 June) on topics including the Medium-Term Plan, the Pension Fund and other matters of great relevance to us.  

I would therefore like to share the main outcome of the week with you and I invite you to join me and the Directors in the Main Auditorium at 10 a.m. on Thursday 23 June. The meeting will last about one hour and a webcast will also be available.

Best regards,

Fabiola Gianotti

DG presentation to personnel
Thursday 23 June at 10 am
Main Auditorium

Retransmission in Council Chamber, IT Auditorium, Kjell Johnsen Auditorium, Prevessin 864-1-C02

Webcast on cern.ch/webcast
More information on the event page.

June 14, 2016 11:06 AM

June 13, 2016

CERN Bulletin

CERN Entrepreneur Mixer | 21 June | Pas perdus
CERN Knowledge Transfer group is hosting an Entrepreneur Mixer, an event dedicated to building bridges between CERN and innovative entrepreneurs. This will be a unique opportunity to discover business projects initiated by former CERN people and to see how CERN technology is being exploited by start-up companies. The deadline for registration is Friday, 17 June. For more information, please visit the Indico page of the event: https://indico.cern.ch/event/537167/

June 13, 2016 04:03 PM

Symmetrybreaking - Fermilab/SLAC

CERN grants beam time to students

Contest winners will study special relativity and an Egyptian pyramid using a CERN beamline.

Two teams of high school students have beaten out nearly 150 others from around the world to secure a highly prized opportunity: the chance to do a science project—at CERN.

After sorting through a pool of teams that represented more than a thousand students from 37 countries, today CERN announced the winners of its third Beamline for Schools competition. The two teams, “Pyramid Hunters” from Poland and “Relatively Special” from the United Kingdom, will travel to Geneva in September to put their experiments to the test.

“We honestly couldn’t be more thrilled to have been given this opportunity,” said Henry Broomfield, a student on the “Relatively Special” team, in an email. “The prospect of winning always seemed like something that would only occur in a parallel universe, so at first we didn’t believe it.”

“Relatively Special” consists of 17 students from Colchester Royal Grammar School in the United Kingdom. Nine of the students will travel to CERN for the competition. They plan to test the Lorentz factor, an input used in calculations related to Einstein’s theory of special relativity.

According to the theory, the faster an object moves, the higher its apparent mass will be and the slower its time will pass relative to our own. This concept, known as time dilation, is most noticeable at speeds approaching the speed of light and is the reason GPS satellites have to adjust their clocks to match the time on Earth. At CERN, “Relatively Special” will measure the decay of pions, particles containing a quark and an antiquark, to see if the particles moving closer to the speed of light decay at the slower rate predicted by time dilation.
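To put rough numbers on this (my own illustration, not part of the team’s proposal), the Python sketch below computes the Lorentz factor and the dilated lifetime for a charged pion. The 5 GeV/c beam momentum is a made-up example value; the pion mass and its roughly 2.6 × 10⁻⁸ s rest-frame lifetime are standard figures.

```python
import math

PION_MASS_GEV = 0.13957      # charged pion rest mass (GeV/c^2)
PION_LIFETIME_S = 2.6e-8     # charged pion mean lifetime at rest (s)
C = 299792458.0              # speed of light (m/s)

def lorentz_factor(momentum_gev, mass_gev):
    """gamma = E/m for a particle of given momentum and mass (natural units)."""
    energy = math.sqrt(momentum_gev**2 + mass_gev**2)
    return energy / mass_gev

# Illustrative beam momentum (a few GeV is typical of test beams).
p = 5.0  # GeV/c
gamma = lorentz_factor(p, PION_MASS_GEV)
dilated_lifetime = gamma * PION_LIFETIME_S
beta = math.sqrt(1.0 - 1.0 / gamma**2)
mean_decay_length = beta * C * dilated_lifetime

print("gamma = %.1f" % gamma)
print("lab-frame mean lifetime = %.3g s" % dilated_lifetime)
print("mean decay length = %.1f m" % mean_decay_length)
```

Without time dilation the mean decay length would be only βcτ, about 8 metres at this momentum, so simply counting how many pions survive a known flight path at different momenta probes the Lorentz factor directly.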


The other team, “Pyramid Hunters,” is a group of seven students from Liceum Ogólnokształcące im. Marsz. St. Małachowskiego in Poland. These students plan to use particle physics to strengthen the archeological knowledge of the Pyramid of Khafre, one of the largest and most iconic of the Egyptian pyramids.

The pyramid was mapped in the 1960s using muon tomography, a technique similar to X-ray scanning that uses heavy particles called muons to generate images of a target. “Pyramid Hunters” will attempt to improve the understanding of that early data by firing muons into limestone, the material that was used to build the pyramids. They will observe the rate at which the muons are absorbed. The absorption rate can tell researchers about the thickness of the material they scanned.
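As a very rough illustration of the principle only (not the team’s actual analysis), the toy sketch below treats muon absorption in limestone as exponential attenuation with a hypothetical effective attenuation length and inverts a measured transmission fraction into a thickness. Real muon tomography relies on measured energy spectra and range tables rather than a single attenuation length, which is exactly the kind of calibration the beam test provides.

```python
import math

# Hypothetical effective attenuation length for muons in limestone (metres).
# Purely illustrative: the real absorption rate depends on the muon energy
# spectrum and is what the team will measure in the beam test.
ATTENUATION_LENGTH_M = 40.0

def transmitted_fraction(thickness_m, att_length_m=ATTENUATION_LENGTH_M):
    """Fraction of muons surviving a slab, in a toy exponential model."""
    return math.exp(-thickness_m / att_length_m)

def thickness_from_transmission(fraction, att_length_m=ATTENUATION_LENGTH_M):
    """Invert the toy model: infer slab thickness from the surviving fraction."""
    return -att_length_m * math.log(fraction)

# If only 8% of the muons make it through, the toy model implies roughly:
print("inferred thickness: %.1f m" % thickness_from_transmission(0.08))
```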


The Beamline for Schools competition began two years ago, coinciding with CERN’s 60th anniversary. Its purpose was to give students the opportunity to run an experiment on a CERN beamline in the same way its regular researchers do. For the competition, students submitted written proposals for their projects, as well as creative one-minute videos to explain their goals for their projects and the experience in general.

A CERN committee selected the students based on “creativity, motivation, feasibility and scientific method,” according to a press release. CERN recognized the projects of nearly 30 other teams, rewarding them with certificates, t-shirts and pocket-size cosmic ray detectors for their schools.

“I am impressed with the level of interest within high schools all over Europe and beyond, as well as with the quality of the proposals,” Claude Vallee, the chairperson of the CERN committee that chose the winning teams, said in a press release.

The previous winning teams hailed from the Netherlands, Italy, Greece and South Africa. Some of their projects have included examining the weak force and testing calorimeters and particle detectors made from different materials.

“I can't imagine better way of learning physics than doing research in the largest particle physics laboratory in the world,” said Kamil Szymczak, a student on the “Pyramid Hunters” team, in a press release. “I still can't believe it.”

by Molly Olmstead at June 13, 2016 03:00 PM

CERN Bulletin

LHC Report: staying cool despite record highs

The last two weeks have been a highlight of LHC operation so far, with record luminosity delivered.

 

LHC integrated luminosity in 2011, 2012, 2015 and 2016.

It’s been a record-breaking period for the LHC. On the evening of Wednesday, 1 June, the maximum number of bunches achievable with the current configuration, based on the injection of 72-bunch trains with a spacing of 25 ns, was reached: 2040 bunches were circulating in the machine. The rest of the week continued in a similar vein: the luminosity record at 6.5 TeV was broken with a peak luminosity of just over 8 × 10³³ cm⁻²s⁻¹, reaching 80% of the design luminosity. This was followed by a new record for integrated luminosity in a single fill, with 370 pb⁻¹ delivered in 18 hours of colliding beams. Finally, a third record was broken later in the week: with an availability for collisions of around 75% (the annual average is normally around 35%) and 6 long fills of particles brought into collision one after the other, around 2 fb⁻¹ of luminosity were delivered during the week, breaking the previous record of 1.4 fb⁻¹ in a single week established in June 2012.
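For readers who want to see how the peak and integrated figures hang together, here is a small back-of-the-envelope check in Python; the unit conversions are standard and the fill numbers are the ones quoted above.

```python
# 1 barn = 1e-24 cm^2, so 1 pb^-1 = 1e36 cm^-2 and 1 fb^-1 = 1e39 cm^-2.
PB_INV_IN_CM2 = 1e36

peak_lumi = 8e33          # cm^-2 s^-1, peak luminosity quoted above
fill_hours = 18.0
fill_seconds = fill_hours * 3600.0

# Upper bound: running at peak luminosity for the whole fill.
max_integrated_pb = peak_lumi * fill_seconds / PB_INV_IN_CM2
print("at constant peak luminosity: %.0f pb^-1" % max_integrated_pb)   # ~518 pb^-1

# The quoted 370 pb^-1 corresponds to an average luminosity of about 70%
# of the peak, consistent with the luminosity decaying during the fill.
avg_lumi = 370.0 * PB_INV_IN_CM2 / fill_seconds
print("implied average luminosity: %.1e cm^-2 s^-1" % avg_lumi)        # ~5.7e33
```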

These records follow the decision taken at the end of May to focus on delivering the highest possible integrated luminosity by the summer conferences, given the delays caused by the recent technical problems.

As a consequence of this decision, the first machine development period, in which machine experts carry out machine studies, has been postponed to allow luminosity production to be given priority until the first technical stop (TS1). Given the long stop caused by the problem with the PS main power supply, which had ended just the week before the decision was taken, it was also decided to reduce the length of the LHC technical stop from five days to two and a half days.

With 2040 bunches circulating in the machine, the heat load deposited on the LHC beam screens in the arcs reached 150 W per half cell (one quadrupole and 3 dipoles) in sector 12 (between ATLAS and ALICE), just below the maximum of 160 W that can be tolerated by the cryogenics system. Electron clouds induced in the vacuum chamber by the closely spaced LHC bunches are responsible for this heat load. With a new cryogenic feed-forward system in place to tune the beam screen cooling parameters according to the intensity stored in the machine and the beam energy, operation is now significantly smoother than in 2015. The waiting periods needed to allow the cryogenics system to stabilise before starting the energy ramp-up or the ramp-down after a dump have virtually disappeared as a result, significantly speeding up the machine cycle.

The technical stop finished on schedule around midday on Thursday, 9 June. It was followed by a number of fills with a low number of bunches to validate the machine set-up. The aperture and optics were measured and a full set of loss maps was performed. These confirmed that the machine is in good shape and ready for a sustained period of operation at high luminosity.

A run with 600 bunches on Saturday allowed the experiments to perform some calibration measurements. This was followed by another record fill with 2040 bunches, a peak luminosity well over 8 × 10³³ cm⁻²s⁻¹, and around 450 pb⁻¹ delivered in around 21 hours.

June 13, 2016 02:06 PM

CERN Bulletin

Join the CERN ISEF special award winners | 16 June - 3 p.m.
Come and join the CERN ISEF special award winners at their lightning-talk session on 16 June at 3.00 p.m. in the Main Auditorium.

[Photo: The 2016 Intel ISEF CERN special award winners on stage with the selection committee on 17 May 2016 in Phoenix, Arizona, USA. (Picture: Society for Science and the Public)]

Between 11 and 17 June 2016, the ten finalists of the Intel International Science and Engineering Fair (ISEF) who won the CERN Special Award will visit CERN to take part in various educational lectures. ISEF is the world's largest international pre-college science competition, with approximately 1,700 high school students from more than 75 countries taking part. The award winners will present their projects in short five-minute lightning talks in the Main Auditorium on Thursday 16 June at 3 p.m., and would be very happy to meet and talk with you after the presentations at Restaurant 1. For more information about the award winners, their fascinating projects and the lightning-talk session, please check the event Indico page.

June 13, 2016 12:27 PM

June 12, 2016

The n-Category Cafe

How the Simplex is a Vector Space

It’s an underappreciated fact that the interior of every simplex $\Delta^n$ is a real vector space in a natural way. For instance, here’s the 2-simplex with twelve of its 1-dimensional linear subspaces drawn in:

[Figure: a triangle with some curves drawn through its interior]

(That’s just a sketch. See below for an accurate diagram by Greg Egan.)

In this post, I’ll explain what this vector space structure is and why everyone who’s ever taken a course on thermodynamics knows about it, at least partially, even if they don’t know they do.

Let’s begin with the most ordinary vector space of all, $\mathbb{R}^n$. (By “vector space” I’ll always mean vector space over $\mathbb{R}$.) There’s a bijection

$$ \mathbb{R} \leftrightarrow (0, \infty) $$

between the real line and the positive half-line, given by exponential in one direction and log in the other. Doing this bijection in each coordinate gives a bijection

$$ \mathbb{R}^n \leftrightarrow (0, \infty)^n. $$

So, if we transport the vector space structure of $\mathbb{R}^n$ along this bijection, we’ll produce a vector space structure on $(0, \infty)^n$. This new vector space $(0, \infty)^n$ is isomorphic to $\mathbb{R}^n$, by definition.

Explicitly, the “addition” of the vector space $(0, \infty)^n$ is coordinatewise multiplication, the “zero” vector is $(1, \ldots, 1)$, and “subtraction” is coordinatewise division. The scalar “multiplication” is given by powers: multiplying a vector $\mathbf{y} = (y_1, \ldots, y_n) \in (0, \infty)^n$ by a scalar $\lambda \in \mathbb{R}$ gives $(y_1^\lambda, \ldots, y_n^\lambda)$.

Now, the ordinary vector space $\mathbb{R}^n$ has a linear subspace $U$ spanned by $(1, \ldots, 1)$. That is,

$$ U = \{(\lambda, \ldots, \lambda) \colon \lambda \in \mathbb{R} \}. $$

Since the vector spaces $\mathbb{R}^n$ and $(0, \infty)^n$ are isomorphic, there’s a corresponding subspace $W$ of $(0, \infty)^n$, and it’s given by

$$ W = \{(e^\lambda, \ldots, e^\lambda) \colon \lambda \in \mathbb{R} \} = \{(\gamma, \ldots, \gamma) \colon \gamma \in (0, \infty)\}. $$

But whenever we have a linear subspace of a vector space, we can form the quotient. Let’s do this with the subspace $W$ of $(0, \infty)^n$. What does the quotient $(0, \infty)^n/W$ look like?

Well, two vectors $\mathbf{y}, \mathbf{z} \in (0, \infty)^n$ represent the same element of $(0, \infty)^n/W$ if and only if their “difference” — in the vector space sense — belongs to $W$. Since “difference” or “subtraction” in the vector space $(0, \infty)^n$ is coordinatewise division, this just means that

$$ \frac{y_1}{z_1} = \frac{y_2}{z_2} = \cdots = \frac{y_n}{z_n}. $$

So, the elements of $(0, \infty)^n/W$ are the equivalence classes of $n$-tuples of positive reals, with two tuples considered equivalent if they’re the same up to rescaling.

Now here’s the crucial part: it’s natural to normalize everything to sum to $1$. In other words, in each equivalence class, we single out the unique tuple $(y_1, \ldots, y_n)$ such that $y_1 + \cdots + y_n = 1$. This gives a bijection

$$ (0, \infty)^n/W \leftrightarrow \Delta_n^\circ $$

where $\Delta_n^\circ$ is the interior of the $(n - 1)$-simplex:

$$ \Delta_n^\circ = \{(p_1, \ldots, p_n) \colon p_i > 0, \ \sum p_i = 1 \}. $$

You can think of $\Delta_n^\circ$ as the set of probability distributions on an $n$-element set that satisfy Cromwell’s rule: zero probabilities are forbidden. (Or as Cromwell put it, “I beseech you, in the bowels of Christ, think it possible that you may be mistaken.”)

Transporting the vector space structure of $(0, \infty)^n/W$ along this bijection gives a vector space structure to $\Delta_n^\circ$. And that’s the vector space structure on the simplex.

So what are these vector space operations on the simplex, in concrete terms? They’re given by the same operations in $(0, \infty)^n$, followed by normalization. So, the “sum” of two probability distributions $\mathbf{p}$ and $\mathbf{q}$ is

$$ \frac{(p_1 q_1, p_2 q_2, \ldots, p_n q_n)}{p_1 q_1 + p_2 q_2 + \cdots + p_n q_n}, $$

the “zero” vector is the uniform distribution

$$ \frac{(1, 1, \ldots, 1)}{1 + 1 + \cdots + 1} = (1/n, 1/n, \ldots, 1/n), $$

and “multiplying” a probability distribution $\mathbf{p}$ by a scalar $\lambda \in \mathbb{R}$ gives

$$ \frac{(p_1^\lambda, p_2^\lambda, \ldots, p_n^\lambda)}{p_1^\lambda + p_2^\lambda + \cdots + p_n^\lambda}. $$
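Here is a quick numerical sketch of these operations (my own illustration, not from the post): “addition” multiplies distributions componentwise and renormalizes, the “zero” vector is the uniform distribution, and scalar “multiplication” raises each coordinate to the power $\lambda$ and renormalizes.

```python
def normalize(v):
    """Rescale a tuple of positive reals so its entries sum to 1."""
    s = sum(v)
    return tuple(x / s for x in v)

def simplex_add(p, q):
    """'Addition' on the open simplex: componentwise product, then normalize."""
    return normalize(tuple(pi * qi for pi, qi in zip(p, q)))

def simplex_scale(lam, p):
    """Scalar 'multiplication': raise each coordinate to the power lam, normalize."""
    return normalize(tuple(pi ** lam for pi in p))

p = (0.2, 0.3, 0.5)
q = (0.6, 0.3, 0.1)
zero = normalize((1.0, 1.0, 1.0))   # the uniform distribution (1/3, 1/3, 1/3)

print(simplex_add(p, zero))         # adding the zero vector gives p back
print(simplex_add(p, q))            # (0.12, 0.09, 0.05), normalized
print(simplex_scale(0.0, p))        # scaling by 0 gives the uniform distribution
print(simplex_scale(20.0, p))       # large lambda pushes towards (0, 0, 1)
```

The last line already hints at the limiting behaviour discussed next: a large scalar pushes the distribution towards the corner corresponding to its largest coordinate.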

For instance, let’s think about the scalar “multiples” of

$$ \mathbf{p} = (0.2, 0.3, 0.5) \in \Delta_3. $$

“Multiplying” $\mathbf{p}$ by $\lambda \in \mathbb{R}$ gives

$$ \frac{(0.2^\lambda, 0.3^\lambda, 0.5^\lambda)}{0.2^\lambda + 0.3^\lambda + 0.5^\lambda} $$

which I’ll call $\mathbf{p}^{(\lambda)}$, to avoid the confusion that would be created by calling it $\lambda\mathbf{p}$.

When $\lambda = 0$, $\mathbf{p}^{(\lambda)}$ is just the uniform distribution $(1/3, 1/3, 1/3)$ — which of course it has to be, since multiplying any vector by the scalar $0$ has to give the zero vector.

For equally obvious reasons, $\mathbf{p}^{(1)}$ has to be just $\mathbf{p}$.

When $\lambda$ is large and positive, the powers of $0.5$ dominate over the powers of the smaller numbers $0.2$ and $0.3$, so $\mathbf{p}^{(\lambda)} \to (0, 0, 1)$ as $\lambda \to \infty$.

For similar reasons, $\mathbf{p}^{(\lambda)} \to (1, 0, 0)$ as $\lambda \to -\infty$. This behaviour as $\lambda \to \pm\infty$ is the reason why, in the picture above, you see the curves curling in at the ends towards the triangle’s corners.

Some physicists refer to the distributions $\mathbf{p}^{(\lambda)}$ as the “escort distributions” of $\mathbf{p}$. And in fact, the scalar multiplication of the vector space structure on the simplex is a key part of the solution of a very basic problem in thermodynamics — so basic that even I know it.

The problem goes like this. First I’ll state it using the notation above, then afterwards I’ll translate it back into terms that physicists usually use.

Fix $\xi_1, \ldots, \xi_n, \xi > 0$. Among all probability distributions $(p_1, \ldots, p_n)$ satisfying the constraint

$$ \xi_1^{p_1} \xi_2^{p_2} \cdots \xi_n^{p_n} = \xi, $$

which one minimizes the quantity

$$ p_1^{p_1} p_2^{p_2} \cdots p_n^{p_n} ? $$

It makes no difference to this question if $\xi_1, \ldots, \xi_n, \xi$ are normalized so that $\xi_1 + \cdots + \xi_n = 1$ (since multiplying each of $\xi_1, \ldots, \xi_n, \xi$ by a constant doesn’t change the constraint). So, let’s assume this has been done.

Then the answer to the question turns out to be: the minimizing distribution $\mathbf{p}$ is a scalar multiple of $(\xi_1, \ldots, \xi_n)$ in the vector space structure on the simplex. In other words, it’s an escort distribution of $(\xi_1, \ldots, \xi_n)$. Or in other words still, it’s an element of the linear subspace of $\Delta_n^\circ$ spanned by $(\xi_1, \ldots, \xi_n)$. Which one? The unique one such that the constraint is satisfied.

Proving that this is the answer is a simple exercise in calculus, e.g. using Lagrange multipliers.

For instance, take $(\xi_1, \xi_2, \xi_3) = (0.2, 0.3, 0.5)$ and $\xi = 0.4$. Among all distributions $(p_1, p_2, p_3)$ that satisfy the constraint

$$ 0.2^{p_1} \times 0.3^{p_2} \times 0.5^{p_3} = 0.4, $$

the one that minimizes $p_1^{p_1} p_2^{p_2} p_3^{p_3}$ is some escort distribution of $(0.2, 0.3, 0.5)$. Maybe one of the curves shown in the picture above is the 1-dimensional subspace spanned by $(0.2, 0.3, 0.5)$, and in that case, the $\mathbf{p}$ that minimizes is somewhere on that curve.

The location of $\mathbf{p}$ on that curve depends on the value of $\xi$, which here I chose to be $0.4$. If I changed it to $0.20001$ or $0.49999$ then $\mathbf{p}$ would be nearly at one end or the other of the curve, since the constraint value attained by $(0.2, 0.3, 0.5)^{(\lambda)}$ converges to $0.2$ as $\lambda \to -\infty$ and to $0.5$ as $\lambda \to \infty$.

Aside: I’m glossing over the question of existence and uniqueness of solutions to the optimization question. Since $\xi_1^{p_1} \xi_2^{p_2} \cdots \xi_n^{p_n}$ is a kind of average of $\xi_1, \xi_2, \ldots, \xi_n$ — a weighted, geometric mean — there’s no solution at all unless $\min_i \xi_i \leq \xi \leq \max_i \xi_i$. As long as that inequality is satisfied, there’s a minimizing $\mathbf{p}$, although it’s not always unique: e.g. consider what happens when all the $\xi_i$s are equal.

Physicists prefer to do all this in logarithmic form. So, rather than start with $\xi_1, \ldots, \xi_n, \xi > 0$, they start with $x_1, \ldots, x_n, x \in \mathbb{R}$; think of this as substituting $x_i = -\log \xi_i$ and $x = -\log \xi$. So, the constraint

$$ \xi_1^{p_1} \xi_2^{p_2} \cdots \xi_n^{p_n} = \xi $$

becomes

$$ e^{-p_1 x_1} e^{-p_2 x_2} \cdots e^{-p_n x_n} = e^{-x} $$

or equivalently

$$ p_1 x_1 + p_2 x_2 + \cdots + p_n x_n = x. $$

We’re trying to minimize $p_1^{p_1} p_2^{p_2} \cdots p_n^{p_n}$ subject to that constraint, and again the physicists prefer the logarithmic form (with a change of sign): maximize

$$ -(p_1 \log p_1 + p_2 \log p_2 + \cdots + p_n \log p_n). $$

That quantity is the Shannon entropy of the distribution $(p_1, \ldots, p_n)$: so we’re looking for the maximum entropy solution to the constraint. This is called the Gibbs state, and as we saw, it’s a scalar multiple of $(\xi_1, \ldots, \xi_n)$ in the vector space structure on the simplex. Equivalently, it’s

$$ \frac{(e^{-\lambda x_1}, e^{-\lambda x_2}, \ldots, e^{-\lambda x_n})}{e^{-\lambda x_1} + e^{-\lambda x_2} + \cdots + e^{-\lambda x_n}} $$

for whichever value of $\lambda$ satisfies the constraint. The denominator here is the famous partition function.
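As a concrete check (my own sketch, not part of the post), one can compute the Gibbs state numerically by bisecting on $\lambda$ until the escort distribution of $(\xi_1, \ldots, \xi_n)$ meets the constraint, using the worked example $(0.2, 0.3, 0.5)$ with target value $0.4$ from above.

```python
import math

def escort(xi, lam):
    """Escort distribution: xi_i^lam, normalized (scalar multiplication on the simplex)."""
    w = [x ** lam for x in xi]
    s = sum(w)
    return [wi / s for wi in w]

def weighted_geometric_mean(xi, p):
    """The constraint value xi_1^{p_1} * ... * xi_n^{p_n}."""
    return math.exp(sum(pi * math.log(x) for pi, x in zip(p, xi)))

def gibbs_state(xi, target, lo=-50.0, hi=50.0, tol=1e-12):
    """Bisect on lambda until the escort distribution meets the constraint.
    Assumes min(xi) < target < max(xi), so a solution exists."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if weighted_geometric_mean(xi, escort(xi, mid)) < target:
            lo = mid   # the geometric mean increases with lambda, so push lambda up
        else:
            hi = mid
        if hi - lo < tol:
            break
    return escort(xi, 0.5 * (lo + hi))

p = gibbs_state([0.2, 0.3, 0.5], 0.4)
print(p)                                             # the maximum-entropy distribution
print(weighted_geometric_mean([0.2, 0.3, 0.5], p))   # ~0.4: the constraint is met
```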

So, that basic thermodynamic problem is (implicitly) solved by scalar multiplication in the vector space structure on the simplex. A question: does addition in the vector space structure on the simplex also have a role to play in physics?

by leinster (tom.leinster@gmx.com) at June 12, 2016 07:09 PM

June 11, 2016

John Baez - Azimuth

Azimuth News (Part 5)

I’ve been rather quiet about Azimuth projects lately, because I’ve been too busy actually working on them. Here’s some of what’s happening:

Jason Erbele is finishing his thesis, entitled Categories in Control: Applied PROPs. He successfully gave his thesis defense on Wednesday June 8th, but he needs to polish it up some more. Building on the material in our paper “Categories in control”, he’s defined a category where the morphisms are signal flow diagrams. But interestingly, not all the diagrams you can draw are actually considered useful in control theory! So he’s also found a subcategory where the morphisms are the ‘good’ signal flow diagrams, the ones control theorists like. For these he studies familiar concepts like controllability and observability. When his thesis is done I’ll announce it here.

Brendan Fong is also finishing his thesis, called The Algebra of Open and Interconnected Systems. Brendan has already created a powerful formalism for studying open systems: the decorated cospan formalism. We’ve applied it to two examples: electrical circuits and Markov processes. Lately he’s been developing the formalism further, and this will appear in his thesis. Again, I’ll talk about it when he’s done!

Blake Pollard and I are writing a paper called “A compositional framework for open chemical reaction networks”. Here we take our work on Markov processes and throw in two new ingredients: dynamics and nonlinearity. Of course Markov processes have a dynamics, but in our previous paper when we ‘black-boxed’ them to study their external behaviour, we got a relation between flows and populations in equilibrium. Now we explain how to handle nonequilibrium situations as well.

Brandon Coya, Franciscus Rebro and I are writing a paper that might be called “The algebra of networks”. I’m not completely sure of the title, nor who the authors will be: Brendan Fong may also be a coauthor. But the paper explores the technology of PROPs as a tool for describing networks. As an application, we’ll give a new shorter proof of the functoriality of black-boxing for electrical circuits. This new proof also applies to nonlinear circuits. I’m really excited about how the theory of PROPs, first introduced in algebraic topology, is catching fire with all the new applications to network theory.

I expect all these projects to be done by the end of the summer. Near the end of June I’ll go to the Centre for Quantum Technologies, in Singapore. This will be my last summer there. My main job will be to finish up the two papers that I’m supposed to be writing.

There’s another paper that’s already done:

Kenny Courser has written a paper “A bicategory of decorated cospans“, pushing Brendan’s framework from categories to bicategories. I’ll explain this very soon here on this blog! One goal is to understand things like the coarse-graining of open systems: that is, the process of replacing a detailed description by a less detailed description. Since we treat open systems as morphisms, coarse-graining is something that goes from one morphism to another, so it’s naturally treated as a 2-morphism in a bicategory.

So, I’ve got a lot of new ideas to explain here, and I’ll start soon! I also want to get deeper into systems biology.

In the fall I’ve got a couple of short trips lined up:

• Monday November 14 – Friday November 18, 2016 – I’ve been invited by Yoav Kallus to visit the Santa Fe Institute. From the 16th to 18th I’ll attend a workshop on Statistical Physics, Information Processing and Biology.

• Monday December 5 – Friday December 9 – I’ve been invited to Berkeley for a workshop on Compositionality at the Simons Institute for the Theory of Computing, organized by Samson Abramsky, Lucien Hardy, and Michael Mislove. ‘Compositionality’ is a name for how you describe the behavior of a big complicated system in terms of the behaviors of its parts, so this is closely connected to my dream of studying open systems by treating them as morphisms that can be composed to form bigger open systems.

Here’s the announcement:

The compositional description of complex objects is a fundamental feature of the logical structure of computation. The use of logical languages in database theory and in algorithmic and finite model theory provides a basic level of compositionality, but establishing systematic relationships between compositional descriptions and complexity remains elusive. Compositional models of probabilistic systems and languages have been developed, but inferring probabilistic properties of systems in a compositional fashion is an important challenge. In quantum computation, the phenomenon of entanglement poses a challenge at a fundamental level to the scope of compositional descriptions. At the same time, compositionality has been proposed as a fundamental principle for the development of physical theories. This workshop will focus on the common structures and methods centered on compositionality that run through all these areas.

I’ll say more about both these workshops when they take place.


by John Baez at June 11, 2016 12:23 AM

June 10, 2016

Tommaso Dorigo - Scientificblogging

Top Evidence At The Altarelli Memorial Symposium
I am spending some time today at the Altarelli Memorial Symposium, which is taking place at the main auditorium at CERN. The recently deceased Guido Altarelli was one of the leading theorists who brought us to the height of our understanding of the Standard Model of particle physics, and it is heart-warming to see so many colleagues young and old here today - Guido was a teacher for all of us.

read more

by Tommaso Dorigo at June 10, 2016 01:06 PM

June 09, 2016

The n-Category Cafe

Good News

Various bits of good news concerning my former students Alissa Crans, Derek Wise, Jeffrey Morton and Chris Rogers.

Alissa Crans did her thesis on Lie 2-Algebras back in 2004. She got hired by Loyola Marymount University, got tenure there in 2011… and a couple of weeks ago she got promoted to full professor! Hurrah!

Derek Wise did his thesis on Topological Gauge Theory, Cartan Geometry, and Gravity in 2007. After a stint at U. C. Davis he went to Erlangen in 2010. When I was in Erlangen in the spring of 2014 he was working with Catherine Meusburger on gauge theory with Hopf algebras replacing groups, and a while back they came out with a great paper on that: Hopf algebra gauge theory on a ribbon graph. But the good news is this: last fall, he got a tenure-track job at Concordia University St Paul!

Jeffrey Morton did his thesis on Extended TQFT’s and Quantum Gravity in 2007. After postdocs at the University of Western Ontario, the Instituto Superior Técnico, Universität Hamburg, Mount Allison University and a visiting assistant professorship at Toledo University, he has gotten a tenure-track job at SUNY Buffalo State! I guess he’ll start there in the fall.

They’re older and wiser now, but here’s what they looked like once:

From left to right it’s Derek Wise, Jeffrey Morton and Alissa Crans… and then two more students of mine: Toby Bartels and Miguel Carrión Álvarez.

And one more late-breaking piece of news! Chris Rogers wrote his thesis on Higher Symplectic Geometry in 2011. After postdocs at Göttingen and the University of Greifswald, and a lot of great work on higher structures, he got a tenure-track job at the University of Louisiana. But now he’s accepted a tenure-track position at the University of Nevada at Reno, where his wife teaches dance. This solves a long-running two-body problem for them!

by john (baez@math.ucr.edu) at June 09, 2016 05:46 PM

Matt Strassler - Of Particular Significance

Pop went the Weasel, but Vroom goes the LHC

At the end of April, as reported hysterically in the press, the Large Hadron Collider was shut down and set back an entire week by a “fouine”, an animal famous for chewing through wires in cars, and apparently in colliders too. What a rotten little weasel! especially for its skill in managing to get the English-language press to blame the wrong species — a fouine is actually a beech marten, not a weasel, and I’m told it goes Bzzzt, not Pop. But who’s counting?

Particle physicists are counting. Last week the particle accelerator operated so well that it generated almost half as many collisions as were produced in 2015 (from July til the end of November), bringing the 2016 total to about three-fourths of 2015.

 

The key question is how many of the next few weeks will be like this past one.  We’d be happy with three out of five, even two.  If the amount of 2016 data can significantly exceed that of 2015 by July 15th, as now seems likely, a definitive answer to the question on everyone’s mind (namely, what is the bump on that plot?!? a new particle? or just a statistical fluke?) might be available at the time of the early August ICHEP conference.

So it’s looking more likely that we’re going to have an interesting August… though it’s not at all clear yet whether we’ll get great news (in which case we get no summer vacation), bad news (in which case we’ll all need a vacation), or ambiguous news (in which case we wait a few additional months for yet more news.)


Filed under: LHC News, Particle Physics Tagged: atlas, cms, LHC, particle physics

by Matt Strassler at June 09, 2016 12:39 PM

Tommaso Dorigo - Scientificblogging

The Call To Outreach
I have recently put a bit of order into my records of activities as a science communicator, for an application to an outreach prize. In doing so, I have been able to take a critical look at those activities, something which I would otherwise not have spent my time doing. And it is indeed an interesting look back.


The blogging

Overall, I have been blogging continuously since January 4th 2005. That's 137 months! By continuously, I mean I wrote an average of a post every two days, or a total of about 2000 posts, 60% of which are actual outreach articles meant to explain physics to real outsiders. 

My main internet footprint is now distributed in not one, but at least six distinct web sites:

read more

by Tommaso Dorigo at June 09, 2016 11:10 AM

ZapperZ - Physics and Physicists

New Physics Beyond The Higgs?
Marcelo Gleiser has written a nice article on the curious 750 GeV bump coming from the LHC as announced last year. It is a very good article for the general public, especially on his condensed version of the analysis provided by PRL on the possible origin of this bump.

Still, there is an important point that I want to highlight that is not necessarily about this particular experiment, but rather about physicists and how physics is done. It is in this paragraph:

The exciting part of this is that the bump would be new, surprising physics, beyond expectations. There's nothing more interesting for a scientist than to have the unexpected show up, as if nature is trying to nudge us to look in a different direction.

If you have followed this blog for a considerable period of time, you would have read something similar in my many postings. This is especially true when I tried to debunk the erroneous claim of many crackpots who keep stressing that scientists are merely people who simply work within the box, and can't think outside of the box, or refuse to look for something new. This is of course, utterly dumb and false, because scientists, by definition, study things that are not known, not fully understood, etc. Otherwise, there will be no progression of knowledge the way we have seen it.

I'm going to keep harping this, because I continue to see nonsense like this being perpetuated in many different places.

Zz.

by ZapperZ (noreply@blogger.com) at June 09, 2016 12:53 AM

June 07, 2016

Symmetrybreaking - Fermilab/SLAC

The neutrino cocktail

Neutrinos are a puzzling mixture of three flavors and three masses. Scientists want to measure them down to the last drop.

For a neutrino, travel is truly life-changing. When one of the tiny particles ends its 500-mile journey from Fermilab’s neutrino source to the NOvA experiment’s detector in Minnesota, it may arrive in an entirely different state than when it started. The particles, which zip through most matter without any interaction at all, can change from one of the three known neutrino varieties into another, a phenomenon known as oscillation. 

Due to quantum mechanics, a traveling neutrino is actually in several different states at once. This is a result of a property known as mixing, and though it sounds esoteric, it’s necessary for some of the most important reactions in the universe—and studying it may hold the key to one of the biggest puzzles in particle physics. 

Though mixing happens with several types of particles, physicists are focusing on lepton mixing, which occurs in one kind of lepton, the elusive neutrino. There are three known types, or flavors, of neutrinos—electron, muon and tau—and also three mass types, or mass states. But unlike objects in our everyday world, where an apple is always heavier than a grape, neutrino mass states and flavors do not have a one-to-one correspondence. 

“When we say there’s mixing between the masses and the flavors, what we mean is that the electron flavor is not only one mass of neutrino,” says Kevin McFarland, a physics professor at the University of Rochester and co-spokesperson for the MINERvA neutrino experiment at the Department of Energy’s Fermilab. 

At any given point in time, a neutrino is some fraction of all three different mass states, adding up to 1. There is more overlap between some flavors and some mass states. When neutrinos are in a state of definite mass, scientists say they’re in their mass eigenstates. Physicists use the term mixing angle to describe this overlap. A small mixing angle means there is little overlap, while maximum mixing angle describes a situation where the parameters are as evenly mixed as possible. 
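As a toy illustration of how a mixing angle shows up in what experiments actually measure (a two-flavour simplification, not the full three-flavour treatment these experiments use), the standard two-flavour survival probability for a muon neutrino of energy E after a distance L is P = 1 − sin²(2θ) sin²(1.27 Δm²[eV²] L[km]/E[GeV]). The numbers in the sketch below are illustrative, chosen to be roughly NOvA-like.

```python
import math

def survival_probability(theta, dm2_ev2, L_km, E_GeV):
    """Two-flavour survival probability:
    P = 1 - sin^2(2*theta) * sin^2(1.27 * dm2[eV^2] * L[km] / E[GeV])."""
    phase = 1.27 * dm2_ev2 * L_km / E_GeV
    return 1.0 - math.sin(2.0 * theta) ** 2 * math.sin(phase) ** 2

# Illustrative values only: a near-maximal mixing angle, an atmospheric-scale
# mass splitting, and a NOvA-like baseline and beam energy.
theta = math.radians(45.0)
dm2 = 2.5e-3   # eV^2
L = 810.0      # km
E = 2.0        # GeV

print("muon neutrino survival probability: %.2f" % survival_probability(theta, dm2, L, E))
```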

Mixing angles have constant values, and physicists don't know why those particular values are found in nature.

Artwork by Sandbox Studio, Chicago with Jill Preston

“This is given by nature,” says Patrick Huber, a theoretical physicist at Virginia Tech. “We very much would like to understand why these numbers are what they are. There are theories out there to try to explain them, but we really don’t know where this is coming from.” 

In order to find out, physicists need large experiments where they can control the creation of neutrinos and study their interactions in a detector. In 2011, the Daya Bay experiment in China began studying antineutrinos produced from nuclear power plants, which generate tens of megawatts of power in antineutrinos. That’s an astonishing number; for comparison, beams of neutrinos created at labs are in the kilowatt range. Just a year later, scientists working there nailed down one of the mixing angles, known as theta13 (pronounced theta one three). 

The discovery was a crucial one, confirming that all mixing angles are greater than zero. That property is necessary for physicists to begin using neutrino mixing as a probe for one of the greatest mysteries of the universe: why there is any matter at all. 

According to the Standard Model of cosmology, the Big Bang should have created equal amounts of matter and antimatter. Because the two annihilate each other upon contact, the fact that any matter exists at all shows that the balance somehow tipped in favor of matter. This violates a rule known as charge-parity symmetry, or CP symmetry. 

One way to study CP violation is to look for instances where a matter particle behaves differently than its antimatter counterpart. Physicists are looking for a specific value in a mixing parameter, known as a complex phase, in neutrino mixing, which would be evidence of CP violation in neutrinos. And the Daya Bay result paved the way.

“Now we know, OK, we have a nonzero value for all mixing angles,” says Kam-Biu Luk, spokesperson for the Daya Bay collaboration. “As a result, we know we have a chance to design a new experiment to go after CP violation.”

Information collected from Daya Bay, as well as ongoing neutrino experiments such as NOvA at Fermilab and T2K in Japan, will be used to help untangle the data from the upcoming international Deep Underground Neutrino Experiment (DUNE). This will be the largest accelerator-based neutrino experiment yet, sending the particles on an 800-mile odyssey into massive detectors filled with 70,000 total tons of liquid argon. The hope is that the experiment will yield precise data about the complex phase, revealing the mechanism that allowed matter to flourish.

“Neutrino oscillation is in a sense new physics, but now we’re looking for new physics inside of that,” Huber says. “In a precision experiment like DUNE we’ll have the ability to test for these extra things beyond only oscillations.”

Neutrinos are not the only particles that exhibit mixing. Building blocks called quarks exhibit the property too.

Physicists don’t yet know if mixing is an inherent property of all particles. But from what they know so far, it’s clear that mixing is fundamental to powering the universe.

“Without this mixing, without these reactions, there are all sorts of critical processes in the universe that just wouldn’t happen,” McFarland says. “It seems nature likes to have that happen. And we don’t know why.”

by Laura Dattaro at June 07, 2016 02:15 PM

Jon Butterworth - Life and Physics

Running to stand still: a brief political analogy

Yesterday morning, the clock radio was broken because we’d spilled water on it. In the evening, a bolt fell out of the bed, and we noticed that the whole bed was in danger of collapse.

During the day, I dried out the clock radio and it began working fine again. I also stripped down the bed, refitted the bolt and tightened the others, so the bed is structurally sound again. And we won’t keep a glass of water right near the radio again either.

So somehow this felt like a productive day, and in one sense it was. By prompt action I had saved myself a certain amount of expense and danger.

In another sense, I was in fact right back where I thought I had been the day before.

I will feel similar, on a much more serious scale if, by the end of the year, the UK remains in the EU and Donald Trump is not President of the USA. Surprising danger and expense avoided. Things much as I thought they were before, with perhaps some resulting improvement in underlying structures.

Right now, things feel rickety.

Oh, and UK people who haven’t already, please register to vote today!

 


Filed under: Politics, Rambling Tagged: europe

by Jon Butterworth at June 07, 2016 07:00 AM

June 06, 2016

Tommaso Dorigo - Scientificblogging

The Large Hadron Collider Piles Up More Data
With CERN's Large Hadron Collider slowly but steadily cranking up its instantaneous luminosity, expectations are rising for the results that CMS and ATLAS will present at the 2016 summer conferences, in particular ICHEP (which will take place in Chicago at the beginning of August). The data being collected will be used to draw some conclusions on the tentative signal of a diphoton resonance, as well as on the other 3-sigma effects seen by about 0.13 % of the searches carried out on previous data so far.

read more

by Tommaso Dorigo at June 06, 2016 06:23 PM

June 05, 2016

Geraint Lewis - Cosmic Horizons

A Sunday Confession: I never wanted to be an astronomer
After an almost endless Sunday, winter has arrived with a thump in Sydney and it is wet, very, very wet. So, time for a quick post.

Last week, I spoke at an Early Career Event in the Yarra Valley, with myself and Rachel Webster from the University of Melbourne talking about the process of applying for jobs in academia. I felt it was a very productive couple of days, discussing a whole range of topics, from transition into industry and the two-body problem, and I received some very positive feedback on the material I presented. I even recruited a new mentee to work with. 

What I found interesting was the number of people who said they had decided to be a scientist or astronomer when they were a child, and were essentially following their dream to become a professor at a university one day. While I didn't really discuss this at the meeting, I have a confession, namely that I never wanted to be an astronomer. 

This will possibly come as a surprise to some. What am I doing here as a university professor undertaking research in astronomy if it was never my life dream? 

I don't really remember having too many career ideas as a child. I was considering being a vet, or looking after dinosaur bones in a museum, but the thought of being an astronomer was not on the list. I know I had an interest in science, and I read about science and astronomy, but I never had a telescope, never remembered the names of constellations, never wanted to be an astronomer myself.

I discovered, at about age 16, that I could do maths and physics, did OK in school, found myself in university, where I did better, and then ended up doing a PhD. I did my PhD at the Institute of Astronomy in Cambridge, but went there because I really liked physics, and the thought of applying physics to the universe. With luck and chance, I found myself in postdoctoral positions and then a permanent position, and now a professor. 

And my passion is still understanding the workings of the universe through the laws of physics, and it's the part of my job I love (one aspect of the ECR meeting was discussing the fact that a lot of an academic job at a university is not research!). And I am pleased to find myself where I am, but I didn't set out along this path with any purpose or forethought. In fact, in the times I have thought about jumping ship and trying another career, the notion of not being an astronomer anymore never bothered me. And I think it still doesn't. As long as the job is interesting, I think I'd be happy.

So, there's my Sunday confession. I'm happy being a research astronomer trying to understand the universe, but it has never been a dream of mine. I think this has helped me weather some of the trials facing researchers in establishing a career. I never wanted to be an astronomer.

Oh, and I don't think much of Star Trek either. 

by Cusp (noreply@blogger.com) at June 05, 2016 06:08 AM

John Baez - Azimuth

Programming with Data Flow Graphs

Network theory is catching on—in a very practical way!

Google recently started a new open source library called TensorFlow. It’s for software built using data flow graphs. These are graphs where the edges represent tensors—that is, multidimensional arrays of numbers—and the nodes represent operations on tensors. Thus, they are reminiscent of the spin networks used in quantum gravity and gauge theory, or the tensor networks used in renormalization theory. However, I bet the operations involved are nonlinear! If so, they’re more general.

Here’s what Google says:

About TensorFlow

TensorFlow™ is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them. The flexible architecture allows you to deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device with a single API. TensorFlow was originally developed by researchers and engineers working on the Google Brain Team within Google’s Machine Intelligence research organization for the purposes of conducting machine learning and deep neural networks research, but the system is general enough to be applicable in a wide variety of other domains as well.

What is a Data Flow Graph?

Data flow graphs describe mathematical computation with a directed graph of nodes & edges. Nodes typically implement mathematical operations, but can also represent endpoints to feed in data, push out results, or read/write persistent variables. Edges describe the input/output relationships between nodes. These data edges carry dynamically-sized multidimensional data arrays, or tensors. The flow of tensors through the graph is where TensorFlow gets its name. Nodes are assigned to computational devices and execute asynchronously and in parallel once all the tensors on their incoming edges become available.

TensorFlow Features

Deep Flexibility. TensorFlow isn’t a rigid neural networks library. If you can express your computation as a data flow graph, you can use TensorFlow. You construct the graph, and you write the inner loop that drives computation. We provide helpful tools to assemble subgraphs common in neural networks, but users can write their own higher-level libraries on top of TensorFlow. Defining handy new compositions of operators is as easy as writing a Python function and costs you nothing in performance. And if you don’t see the low-level data operator you need, write a bit of C++ to add a new one.

True Portability. TensorFlow runs on CPUs or GPUs, and on desktop, server, or mobile computing platforms. Want to play around with a machine learning idea on your laptop without need of any special hardware? TensorFlow has you covered. Ready to scale-up and train that model faster on GPUs with no code changes? TensorFlow has you covered. Want to deploy that trained model on mobile as part of your product? TensorFlow has you covered. Changed your mind and want to run the model as a service in the cloud? Containerize with Docker and TensorFlow just works.

Connect Research and Production. Gone are the days when moving a machine learning idea from research to product required a major rewrite. At Google, research scientists experiment with new algorithms in TensorFlow, and product teams use TensorFlow to train and serve models live to real customers. Using TensorFlow allows industrial researchers to push ideas to products faster, and allows academic researchers to share code more directly and with greater scientific reproducibility.

Auto-Differentiation. Gradient based machine learning algorithms will benefit from TensorFlow’s automatic differentiation capabilities. As a TensorFlow user, you define the computational architecture of your predictive model, combine that with your objective function, and just add data — TensorFlow handles computing the derivatives for you. Computing the derivative of some values w.r.t. other values in the model just extends your graph, so you can always see exactly what’s going on.

Language Options. TensorFlow comes with an easy to use Python interface and a no-nonsense C++ interface to build and execute your computational graphs. Write stand-alone TensorFlow Python or C++ programs, or try things out in an interactive TensorFlow iPython notebook where you can keep notes, code, and visualizations logically grouped. This is just the start though — we’re hoping to entice you to contribute SWIG interfaces to your favorite language — be it Go, Java, Lua, JavaScript, or R.

Maximize Performance. Want to use every ounce of muscle in that workstation with 32 CPU cores and 4 GPU cards? With first-class support for threads, queues, and asynchronous computation, TensorFlow allows you to make the most of your available hardware. Freely assign compute elements of your TensorFlow graph to different devices, and let TensorFlow handle the copies.

Who Can Use TensorFlow?

TensorFlow is for everyone. It’s for students, researchers, hobbyists, hackers, engineers, developers, inventors and innovators and is being open sourced under the Apache 2.0 open source license.

TensorFlow is not complete; it is intended to be built upon and extended. We have made an initial release of the source code, and continue to work actively to make it better. We hope to build an active open source community that drives the future of this library, both by providing feedback and by actively contributing to the source code.

Why Did Google Open Source This?

If TensorFlow is so great, why open source it rather than keep it proprietary? The answer is simpler than you might think: We believe that machine learning is a key ingredient to the innovative products and technologies of the future. Research in this area is global and growing fast, but lacks standard tools. By sharing what we believe to be one of the best machine learning toolboxes in the world, we hope to create an open standard for exchanging research ideas and putting machine learning in products. Google engineers really do use TensorFlow in user-facing products and services, and our research group intends to share TensorFlow implementations alongside many of our research publications.
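
To make the auto-differentiation point above a bit more concrete, here is a minimal hedged sketch using tf.gradients from the same 1.x-era Python interface; the toy function y = x² + 2x is my own illustration, not Google's.

    import tensorflow as tf

    x = tf.Variable(3.0)
    y = x * x + 2.0 * x              # y = x^2 + 2x

    # Asking for dy/dx just extends the graph with gradient-computing nodes.
    grad = tf.gradients(y, [x])      # symbolically, 2x + 2

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        print(sess.run(grad))        # [8.0] at x = 3.0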

For more details, try this:

TensorFlow tutorials.
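
And as a small illustration of the device-assignment feature mentioned under "Maximize Performance", the same API lets you pin operations to a device with tf.device; the device string below is just an example and depends on your hardware.

    import tensorflow as tf

    # Pin these ops to the first CPU; swap in '/gpu:0' if a GPU is available.
    with tf.device('/cpu:0'):
        a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
        b = tf.constant([[2.0, 0.0], [0.0, 2.0]])
        c = tf.matmul(a, b)

    # log_device_placement reports where each node actually ran.
    with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
        print(sess.run(c))           # [[2. 4.] [6. 8.]]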


by John Baez at June 05, 2016 12:10 AM

June 02, 2016

Symmetrybreaking - Fermilab/SLAC

What is a “particle”?

Quantum physics says everything is made of particles, but what does that actually mean?

“Is he a dot or is he a speck? When he's underwater, does he get wet? Or does the water get him instead? Nobody knows.” —They Might Be Giants, “Particle Man”

We learn in school that matter is made of atoms and that atoms are made of smaller ingredients: protons, neutrons and electrons. Protons and neutrons are made of quarks, but electrons aren’t. As far as we can tell, quarks and electrons are fundamental particles, not built out of anything smaller.

It’s one thing to say everything is made of particles, but what is a particle? And what does it mean to say a particle is “fundamental”? What are particles made of, if they aren’t built out of smaller units?

“In the broadest sense, ‘particles’ are physical things that we can count,” says Greg Gbur, a science writer and physicist at the University of North Carolina at Charlotte. You can’t have half a quark or one-third of an electron. And all particles of a given type are precisely identical to each other: they don’t come in various colors or have little license plates that distinguish them. Any two electrons will produce the same result in a detector, and that’s what makes them fundamental: They don’t come in a variety pack.

It’s not just matter: light is also made of particles called photons. Most of the time, individual photons aren’t noticeable, but astronauts report seeing flashes of light even with their eyes closed, caused by a single gamma ray photon moving through the fluid inside the eyeball. Its interactions with particles inside create blue-light photons known as Cherenkov light—enough to trigger the retina, which can “see” a single photon (though a lot more are needed to make an image of anything).

Particle fields forever

That’s not the whole story, though: We may be able to count particles, but they can be created or destroyed, and even change type in some circumstances. During a type of nuclear reaction known as beta decay, a nucleus spits out an electron and a fundamental particle called an antineutrino while a neutron inside the nucleus changes into a proton. If an electron meets a positron at low velocities, they annihilate, leaving only gamma rays; at high velocities, the collision creates a whole slew of new particles.

Everyone has heard of Einstein’s famed E=mc². Part of what that means is that making a particle requires energy proportional to its mass. Neutrinos, which are very low mass, are easy to make; electrons have a higher threshold, while heavy Higgs bosons need a huge amount of energy. Photons are easiest of all to make, because they don't have mass or electric charge, so there’s no energy threshold to overcome.
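
As a rough worked number (my own back-of-the-envelope, not from the article), the electron's threshold is its rest energy,

\[ E = m_e c^2 \approx (9.11\times 10^{-31}\,\mathrm{kg})\,(3.0\times 10^{8}\,\mathrm{m/s})^2 \approx 8.2\times 10^{-14}\,\mathrm{J} \approx 0.511\,\mathrm{MeV}, \]

while a Higgs boson, at roughly 125 GeV, needs about 250,000 times more energy.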

But it takes more than energy to make new particles. You can create photons by accelerating electrons through a magnetic field, but you can't make neutrinos or more electrons that way. The key is how those particles interact using the three fundamental quantum forces of nature: electromagnetism, the weak force and the strong force. However, those forces are also described using particles in quantum theory: electromagnetism is carried by photons, the weak force is governed by the W and Z bosons, and the strong force involves the gluons. 

All of these things are described together by an idea called “quantum field theory.” 

“Field theory encompasses quantum mechanics, and quantum mechanics encompasses the rest of physics,” says Anthony Zee, a physicist at the Kavli Institute for Theoretical Physics and the University of California, Santa Barbara. Zee, who has written several books on quantum field theory both for scientists and nonscientists, admits, “If you press a physicist to say what a field is, they’ll say a field is whatever a field does.”

Despite the vagueness of the concept, fields describe everything. Two electrons approach each other and they stir up the electromagnetic field, creating photons like ripples in a pond. Those photons then push the electrons apart.

What waves?

Waves are the best metaphor to understand particles and fields. Electrons, in addition to being particles, are simultaneously waves in the “electron field.” Quarks are waves in the “quark field” (and since there are six types of quark, there are six quark fields), and so forth. Photons are like water ripples: they can be big or small, violent or barely noticeable. The fields describing matter particles are more like waves on a guitar string. If you don’t pluck the string hard enough, you don’t get any sound at all: You need the threshold energy corresponding to an electron mass to make one. Enough energy, though, and you get the first harmonic, which is a clear note (for the string) or an electron (for the field). 

As a result of all this quantum thinking, it’s often unhelpful to think of particles as being like tiny balls.

“Photons [and matter particles] travel freely through space as a wave,” says Gbur, even though they can be counted as though they were balls. 

The metaphor isn’t perfect: The fields for electrons, electromagnetism and everything else fill all of space-time, rather than being like a one-dimensional string or two-dimensional pond surface. As Zee says, “What is waving when an electromagnetic wave goes through space? Nothing is waving! There doesn't need to be water like with a water wave.” 

And of course, we’re still left asking: If particles come from fields, are those fields themselves fundamental, or is there deeper physics involved? Until such time as theory comes up with something better, the particle description of matter and forces is something we can count on.

by Matthew R. Francis at June 02, 2016 02:37 PM

May 31, 2016

Symmetrybreaking - Fermilab/SLAC

1,000 meters below

Meet the world’s deepest underground physics facilities.

A constant shower of energetic subatomic particles rains down on Earth’s surface. Born from cosmic ray interactions in the upper atmosphere, this invisible drizzle creates noisy background radiation that obscures the signatures of new particles or forces that scientists seek. The solution is to move experiments under the best natural umbrella we have: the Earth’s crust.

Underground facilities, while difficult to build and access, are ideal hubs for observing rare particle interactions. The rock overhead shields experiments from the pesky particle precipitation, preventing things like muons from interfering. For the last few decades, underground physics facilities have laid claim to some of the world’s largest, most complex detection experiments, contributing to important physics discoveries.

“In the early 1960s, researchers at the Kolar Gold Fields in India and the East Rand Gold Mine in South Africa realized if they go deep enough underground, it might be possible to clearly detect high-energy particles from atmospheric cosmic ray collisions,” says Henry Sobel, a co-US-spokesperson on the Super-Kamiokande experiment at the Kamioka Observatory. “Both groups reported the first observation of atmospheric neutrinos at various depths underground.”

Even with entire facilities sitting below the surface, extremely sensitive detectors often require additional shielding against stray particles and the small amount of radiation from the rock and equipment. One example is the Sanford Underground Research Facility’s Large Underground Xenon (LUX) experiment, which seeks dark matter particles called WIMPs, or weakly interacting massive particles.

“Going underground eliminates most of the radioactivity, but not all of it, so we used a 72,000-gallon water shield to keep neutrons and gamma rays out of the LUX experiment,” says Harry Nelson, a LUX researcher and spokesperson for the upcoming LUX-Zeplin experiment at Sanford Lab.

Scientists at underground facilities around the world—and their creative colleagues closer to the surface—maintain different experiments working toward a common goal: answering questions about the nature of matter and energy. Learn more about the facilities 1000 meters or more below the surface that are digging deep into the secrets of the universe.

Illustrations by Sandbox Studio, Chicago with Corinne Mucha

Kamioka Observatory
1000 meters below, est. 1983

Previously known as the Kamioka Underground Observatory, the facility dwells in the Mozumi Mine in Hida, Gifu Prefecture, Japan. Operational or former mines actually make great homes for underground facilities because it is cost-effective to use existing giant holes inside mountains or the earth rather than dig new ones.

Kamioka’s original focus was on understanding the stability of matter through a search for the spontaneous decay of protons using an experiment called Kamiokande. Since neutrinos are a major background to the search for proton decay, the study of neutrinos also became a major effort for the observatory.

Now known as the Kamioka Observatory, the facility detects neutrinos coming from supernovae, the sun, our atmosphere and accelerators. In 2015, Takaaki Kajita was awarded the Nobel Prize in physics for the discovery of atmospheric neutrino oscillation by the Super-Kamiokande experiment. The Nobel Prize is shared with the Sudbury Neutrino Observatory in Canada.

Stawell Underground Physics Laboratory
1000 meters below, under construction

SUPL is under construction at the active Stawell Gold Mine in Victoria, Australia. The facility will work in close collaboration with the Gran Sasso National Laboratory in Italy, which made significant strides in dark matter research through a possible detection of WIMPs. SUPL will look for a seasonal change in the rate of dark matter interactions as Earth's position and velocity along its orbit change.

Because Australia is in the Southern Hemisphere and has opposite seasons to Italy, this seasonal dark matter experiment will also test Italy’s results to learn more about WIMPs and dark matter. There are two proposed dark matter experiments for SUPL: SABRE (Sodium-iodide with Active Background REjection) and DRIFT-CYGNUS (Directional Recoil Identification From Tracks - CosmoloGY with NUclear recoilS).

Boulby Underground Laboratory
1100 meters below, est. 1998

Inside the operational Boulby Potash and Salt Mine on the northeast coast of England sits the Boulby Lab. It is a multidisciplinary, deep underground science facility operated by the UK’s Science and Technology Facilities Council. The depth and the support infrastructure make the facility well-suited for traditional low-background underground studies such as dark matter searches and cosmic ray experiments. Scientists also study a wide range of sciences beyond physics, for example geology and geophysics, environmental and climate studies, life in extreme environments on Earth, and the development of rover instrumentation for exploration of life beyond Earth.

The dark matter search currently underway at Boulby is DRIFT-II – a directional dark matter search detector.  The lab previously hosted the ZEPLIN-II and III experiments, predecessors to the upcoming LUX-ZEPLIN experiment at Sanford Lab. Boulby still supports the LZ experiment with ultralow-background material activity measurements, which is important to all sensitive dark matter and rare-event studies.

India-based Neutrino Observatory
1200 meters below, proposed

INO, a collaboration of about 25 national institutes and universities hosted by the Tata Institute of Fundamental Research, will primarily be an underground facility for non-accelerator-based high-energy physics. The observatory will focus its study on atmospheric muon neutrinos using a 50-kiloton iron calorimeter to measure certain characteristics of the elusive particles. 

INO will also expand into a more general science facility and host studies in geological, biological and hydrological research. Construction of the INO underground observatory in Pottipuram, Tamil Nadu, India is awaiting approvals by the state government.

Gran Sasso National Laboratory
1400 meters below, est. 1987

The Gran Sasso National Laboratory in Italy is the largest underground laboratory in the world. It is a high-energy physics lab that conducts many long-term neutrino, dark matter and nuclear astrophysical experiments. 

The lab’s OPERA experiment is especially noteworthy for detecting the first tau neutrino candidates that emerged (through oscillation) from a muon neutrino beam sent by CERN in 2010. From 2012 to 2015, the experiment at Gran Sasso subsequently announced the detection of the second, third, fourth and fifth tau neutrinos, confirming their initial result.

Gran Sasso also collaborates with the Department of Energy’s Fermi National Accelerator Laboratory on a short-distance neutrino program. After it is refurbished at CERN, the ICARUS experiment from Gran Sasso will join two other experiments at Fermilab to search for a fourth proposed kind of neutrino, the sterile neutrino.

Centre for Underground Physics in Pyhäsalmi
1440 meters below, est. 1997

The University of Oulu in Finland operates CUPP in Europe’s deepest metal mine—the Pyhäsalmi Mine. As the mine prepares to close by the end of this decade, the local community established Callio Lab (CLab) to rent out space to science and industrial operators, CUPP being one of them. The main level, at 1420 meters, houses all of the equipment, offices and restaurants. It also houses the world’s deepest sauna.

The facility’s main experiment is EMMA, the Experiment with MultiMuon Array, in Lab 1 at 75 meters. EMMA is used to study cosmic rays and high-energy muons that pass through the Earth to better understand atmospheric and cosmic particle interactions. CUPP also conducts some low-background muon flux measurements and radiocarbon research for future liquid scintillators in Lab 2 at 1430 meters.

Sanford Underground Research Facility
1480 meters below, est. 2011

Sanford Lab is the deepest underground physics lab in the United States and sits in the former Homestake Gold Mine in the Black Hills of South Dakota. It was the site of Ray Davis’ solar neutrino experiment, which used dry cleaning fluid to count neutrinos from the sun. The experiment found only one-third of the neutrinos expected, a result known as the solar neutrino problem. Super-Kamiokande (in 1998) and the Sudbury Neutrino Observatory (in 2001) later established neutrino oscillations, proving that neutrinos change type as they travel and resolving the deficit. Davis won the Nobel Prize in physics in 2002.

The facility now houses the LUX experiment (looking for dark matter), Majorana Demonstrator (researching the properties of neutrinos), and geological, engineering and biological studies. Sanford Lab will also host the Deep Underground Neutrino Experiment, which will use detectors filled with 70,000 tons of liquid argon to study neutrinos sent from Fermilab, 800 miles away.

Modane Underground Laboratory
1700 meters below, est. 1982

Located in the middle of the Fréjus Road Tunnel near Modane, France, the multidisciplinary lab hosts experiments in particle, nuclear and astroparticle physics, environmental sciences, biology and nano- and microelectronics.

The lab is run by the French National Center for Scientific Research and Grenoble-Alpes University; its main fundamental physics activities include SuperNEMO and EDELWEISS, which study neutrino physics and dark matter detection, respectively.

The lab also hosts international experiments with the Joint Institute for Nuclear Research in Dubna, Russia, and the Czech Technical University in Prague, Czech Republic.

Baksan Neutrino Observatory
1750 meters below, est. 1973

Hidden beneath the Caucasus Mountains and next to the Baksan River, BNO began working as one of the first underground particle physics observatories in the then Soviet Union. Like other underground facilities, BNO wanted to reduce the amount of background radiation as much as possible. The lab’s location is not only underground but also far from nuclear power plants—another source of background noise for experiments.

BNO’s current neutrino experiments are the Soviet-American Gallium Experiment (SAGE), the Baksan Underground Scintillation Telescope (BUST) and the upcoming Baksan Experiment on Sterile Transitions (BEST). There is also a new search for hypothesized particles called axions, candidates for dark matter.

Agua Negra Deep Experiment Site
1750 meters below, proposed

Situated in the mountains on the border of Chile and Argentina, ANDES will study neutrinos and dark matter, as well as plate tectonics, biology, nuclear astrophysics and the environment. Along with SUPL, it is one of two proposed deep underground labs in the Southern Hemisphere.

ANDES is an international laboratory, not just a host for international experiments. It will become home to a large neutrino detector and aims to detect supernovae neutrinos and geoneutrinos, complementing results of the Northern Hemisphere labs and experiments. This location is ideal as the site is far from nuclear facilities and extremely deep in the mountains, both of which help reduce background noise.

SNOLAB
2070 meters below, est. 2009

SNOLAB is the deepest physics facility in North America and operates in a working nickel mine in Ontario, Canada. The entire 5000 m² facility is a class-2000 cleanroom with fewer than 2000 particles per cubic foot. Everyone who enters the lab must shower on the way in and put on a clean set of special cleanroom clothes.

SNOLAB conducts highly sensitive experiments for research on dark matter and neutrinos. Among them are DEAP-3600, PICO, HALO, MiniCLEAN and SNO+. Scientists also plan to install the next generation of a cryogenic dark matter search, SuperCDMS, in the lab once testing is complete.

Late last year, Arthur McDonald was awarded the Nobel Prize in physics for the discovery of neutrino oscillation made at the Sudbury Neutrino Observatory, the predecessor of SNOLAB. The Nobel Prize is shared with the Kamioka Observatory in Japan for their Super-K neutrino experiment.

China Jinping Underground Laboratory
2400 meters below, est. 2010

CJPL is the deepest physics facility in the world, tucked inside the Jinping Mountain in the Sichuan province in southwest China. The site is ideal for its low cosmic-ray muon flux, which means the facility has far less noise from background radiation than many other underground facilities. And because the facility is built under a mountain, there is horizontal access (for things like vehicles) rather than vertical access (through a mine shaft). 

Two experiments housed at the facility are trying to directly detect dark matter: the China Dark Matter Experiment (CDEX) and PandaX. CJPL will also observe neutrinos from different sources, such as the sun, Earth, atmosphere, supernova bursts and potentially dark matter annihilations, in hopes of better understanding the elusive particles’ properties. In the coming months, a nuclear astrophysics study and a one-ton prototype of a neutrino detector will move into CJPL-II.

by Rashmi Shivni at May 31, 2016 02:22 PM

ZapperZ - Physics and Physicists

Stephen Hawking Doesn't Understand Donald Trump
When interviewed by Good Morning Britain, Stephen Hawking professed his lack of understanding of the popularity of Donald Trump in the US.

World-renowned British physicist Stephen Hawking may understand the many mysteries of the universe, but even he's having a hard time grasping Donald Trump's meteoric rise in popularity.

In an interview with ITV's “Good Morning Britain” today, Hawking called Trump a "demagogue" who seemed to attract the "lowest common denominator."

I would go even lower than that, all the way back to single-cell amoeba! :)

Zz.

by ZapperZ (noreply@blogger.com) at May 31, 2016 02:20 PM
